feat: add E2E test suite with nightly CI, fix dashboard Dockerfile #52

Merged
sharang merged 3 commits from feat/e2e-tests into main 2026-03-30 10:04:07 +00:00
Owner

Adds 17 E2E tests covering repos, findings, cascade delete, DAST, and stats, plus a nightly CI workflow with MongoDB. Also fixes the dioxus-cli version in the dashboard Dockerfile.

sharang added 1 commit 2026-03-30 09:01:29 +00:00
feat: add E2E test suite with nightly CI, fix dashboard Dockerfile
Some checks failed
CI / Check (pull_request) Failing after 9m4s
CI / Detect Changes (pull_request) Has been skipped
CI / Deploy Agent (pull_request) Has been skipped
CI / Deploy Dashboard (pull_request) Has been skipped
CI / Deploy Docs (pull_request) Has been skipped
CI / Deploy MCP (pull_request) Has been skipped
5b07d38907
E2E Tests:
- 17 integration tests covering: health, repos CRUD, findings lifecycle,
  cascade delete (SAST + DAST + pentest), DAST targets, stats overview
- TestServer harness: spins up agent API on random port with isolated
  MongoDB database per test, auto-cleanup
- Added lib.rs to expose agent internals for integration tests
- Nightly CI workflow with MongoDB service container (3 AM UTC)

Tests verify:
- Repository add/list/delete + duplicate rejection + invalid ID handling
- Finding creation, filtering by severity/repo, status updates, bulk updates
- Cascade delete: repo deletion removes all DAST targets, pentest sessions,
  attack chain nodes, DAST findings, SAST findings, and SBOM entries
- DAST target CRUD and empty finding list
- Stats overview accuracy with zero and populated data

Also:
- Fix Dockerfile.dashboard: bump dioxus-cli 0.7.3 → 0.7.4 (compile fix)
- Fix clippy: allow new_without_default for pattern scanners

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharang reviewed 2026-03-30 09:02:57 +00:00
sharang left a comment
Author
Owner

Compliance scan found 40 issue(s) in this PR:

  • [medium] code-review/complexity: Complex boolean expression in test server startup
  • [high] code-review/security: Hardcoded MongoDB Credentials in Workflow Configuration
  • [medium] code-review/logic: Potential Dead Code Removal
  • [high] code-review/security: Insecure Direct Object Reference (IDOR) Potential in Repository Deletion
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [high] code-review/logic: Missing Error Handling in E2E Test Step
  • [medium] code-review/convention: Hardcoded MongoDB URI in test code
  • [high] code-review/logic: Incorrect repository ID used in stats test
  • [high] code-review/logic: Incorrect MongoDB collection name in findings test
  • [medium] code-review/convention: Hardcoded MongoDB URI in test code
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/convention: Potential panic in test cleanup when dropping database
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Inconsistent test module naming
  • [medium] code-review/logic: Inconsistent MongoDB connection handling across test helper functions
  • [low] code-review/convention: Unnecessary allow(dead_code) in lib.rs
  • [medium] code-review/logic: Unnecessary Dev Dependencies Added
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/convention: Hardcoded timeout in health check loop
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/logic: Missing error handling for MongoDB connection in findings test
  • [high] code-review/logic: Missing error handling for MongoDB operations
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/complexity: Deeply nested control flow in server readiness check
  • [medium] code-review/logic: Missing database cleanup on test server drop
  • [medium] code-review/complexity: Complex boolean expression in test helper
  • [medium] code-review/complexity: Deeply nested control flow in test assertions
  • [high] code-review/logic: Incorrect Git Checkout in Nightly Workflow
  • [medium] code-review/complexity: Complex boolean expression in test assertion
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Duplicated MongoDB connection setup in test helpers
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Missing error propagation in main function
  • [medium] code-review/logic: Potential race condition in bulk update test due to direct DB access
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/complexity: Complex boolean expression in GitHub Actions workflow
  • [medium] code-review/complexity: Duplicated MongoDB connection setup
  • [medium] code-review/complexity: Deeply nested control flow in test setup
  • [medium] code-review/convention: Inconsistent error handling in E2E workflow
  • [medium] code-review/logic: Incorrect Dependency Version Bump
@@ -0,0 +14,4 @@
concurrency:
group: nightly-e2e
cancel-in-progress: true
Author
Owner

[high] Hardcoded MongoDB Credentials in Workflow Configuration

The nightly CI workflow (.gitea/workflows/nightly.yml) contains hardcoded MongoDB credentials in the TEST_MONGODB_URI environment variable. These credentials are exposed in the workflow configuration and could be accessed by anyone with read access to the repository, potentially allowing unauthorized access to the test database.

Suggested fix: Remove hardcoded credentials from the workflow file and use GitHub Actions secrets or a secure vault service to inject the database credentials at runtime.

Scanner: code-review/security | CWE: CWE-798

@@ -0,0 +21,4 @@
name: E2E Tests
runs-on: docker
container:
image: rust:1.94-bookworm
Author
Owner

[high] Incorrect Git Checkout in Nightly Workflow

The nightly workflow uses git checkout FETCH_HEAD which may not correctly handle shallow clones or specific commit references. When using --depth=1, FETCH_HEAD might point to an unexpected commit or fail entirely if the repository state isn't properly initialized.

Suggested fix: Use git checkout -b temp_branch FETCH_HEAD or ensure proper handling of the fetched commit with explicit reference resolution.

*Scanner: code-review/logic*

@@ -0,0 +42,4 @@
| tar xz --strip-components=1 -C /usr/local/bin/ sccache-v0.9.1-x86_64-unknown-linux-musl/sccache
chmod +x /usr/local/bin/sccache
env:
RUSTC_WRAPPER: ""
Author
Owner

[high] Missing Error Handling in E2E Test Step

The E2E test step does not explicitly check for failure of the cargo test command. If the tests fail, the workflow will continue without indicating failure, potentially masking test failures.

Suggested fix: Add explicit error checking such as set -e before the cargo test command or use || exit 1 to ensure workflow fails on test failure.

*Scanner: code-review/logic*

@@ -0,0 +45,4 @@
RUSTC_WRAPPER: ""
- name: Run E2E tests
run: cargo test -p compliance-agent --test e2e -- --test-threads=4
Author
Owner

[medium] Inconsistent error handling in E2E workflow

The nightly E2E workflow uses cargo test without the --quiet flag and doesn't capture or log test failures explicitly. If tests fail, the workflow may not properly report the failure to the CI system, potentially masking test failures.

Suggested fix: Add explicit error handling and logging around the test execution step, such as using set -e or checking exit codes explicitly.

*Scanner: code-review/convention*

@@ -0,0 +49,4 @@
- name: Show sccache stats
run: sccache --show-stats
if: always()
Author
Owner

[medium] Complex boolean expression in GitHub Actions workflow

The workflow uses an 'if: always()' condition on the final step which could mask failures in previous steps. This makes it harder to detect when tests actually fail since the step will always run regardless of previous step outcomes.

Suggested fix: Remove the 'if: always()' condition and rely on standard workflow failure handling instead. If you need to show stats regardless, consider using a separate job or conditional logic that explicitly checks for success/failure states.

*Scanner: code-review/complexity*

@@ -1,6 +1,6 @@
FROM rust:1.94-bookworm AS builder
RUN cargo install dioxus-cli --version 0.7.3
RUN cargo install dioxus-cli --version 0.7.4
Author
Owner

[medium] Incorrect Dependency Version Bump

The dioxus-cli version was bumped from 0.7.3 to 0.7.4. While this seems like a minor update, it could introduce breaking changes or compatibility issues with existing code that depends on the previous version.

Suggested fix: Verify that all dependencies and integrations work correctly with dioxus-cli 0.7.4 before merging this change.

*Scanner: code-review/logic*

@@ -43,2 +43,4 @@
dashmap = { workspace = true }
tokio-stream = { workspace = true }
[dev-dependencies]
Author
Owner

[medium] Unnecessary Dev Dependencies Added

Several dev-dependencies were added to the compliance-agent Cargo.toml including axum and tower-http. These are typically only needed for integration tests or server-side code but are now part of the main crate's dev-deps, which can increase build times and complexity.

Suggested fix: Move these dependencies to a dedicated integration test profile or remove them if they're not actually required for development/testing purposes.

*Scanner: code-review/logic*

@@ -0,0 +12,4 @@
pub mod scheduler;
pub mod ssh;
#[allow(dead_code)]
pub mod trackers;
Author
Owner

[low] Unnecessary allow(dead_code) in lib.rs

The #[allow(dead_code)] attribute is applied to the trackers module in lib.rs, which suggests that the module is intentionally unused. However, this can mask potential dead code issues and makes it harder to maintain the codebase by hiding unused code.

Suggested fix: Remove the #[allow(dead_code)] attribute and either implement functionality for the trackers module or remove it entirely if it's truly unused.

*Scanner: code-review/convention*

@@ -12,3 +1,1 @@
#[allow(dead_code)]
mod trackers;
mod webhooks;
use compliance_agent::{agent, api, config, database, scheduler, ssh, webhooks};
Author
Owner

[medium] Potential Dead Code Removal

The removal of individual module declarations in main.rs and the introduction of a single use statement suggests refactoring. However, there's no clear indication that dead code has been removed or that the new structure is fully validated. This could lead to subtle runtime issues if modules are not properly exposed.

Suggested fix: Ensure that all necessary modules are correctly re-exported through the lib.rs and that the new import structure doesn't break any expected functionality.

*Scanner: code-review/logic*

Author
Owner

[medium] Missing error propagation in main function

The main function in main.rs returns Result<(), Box<dyn std::error::Error>> but doesn't handle the error case explicitly. While it will panic on error, this is less robust than explicitly handling and logging the error before exiting.

Suggested fix: Explicitly handle the error case by logging it before exiting, e.g., if let Err(e) = main() { eprintln!("Error: {}", e); std::process::exit(1); }

*Scanner: code-review/convention*

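A minimal sketch of the suggested pattern. Note the snippet in the finding would recurse if placed inside `main` itself; wrapping the startup logic in a separate function avoids that. `run()` is a stub standing in for the real async startup in main.rs.

```rust
// Hypothetical fallible entry point; the real main.rs would call its
// async startup here instead of returning a canned error.
fn run() -> Result<(), String> {
    Err("mongodb unreachable".into())
}

// Log the error and choose the exit code explicitly, rather than letting
// `fn main() -> Result<_, _>` Debug-print the error on the way out.
fn exit_code(result: Result<(), String>) -> i32 {
    match result {
        Ok(()) => 0,
        Err(e) => {
            eprintln!("Error: {e}");
            1
        }
    }
}
```

In the real binary, `main` would end with `std::process::exit(exit_code(run()))`.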
@@ -4,0 +13,4 @@
/// A running test server with a unique database.
pub struct TestServer {
pub base_url: String,
Author
Owner

[medium] Missing database cleanup on test server drop

The TestServer struct does not implement Drop, which means the cleanup() method is never automatically called when a test server goes out of scope. This can lead to leftover test databases in MongoDB, causing resource leaks and potential test interference.

Suggested fix: Implement the Drop trait for TestServer to ensure cleanup() is called when the server instance is dropped. Add impl Drop for TestServer { fn drop(&mut self) { tokio::runtime::Handle::current().block_on(self.cleanup()); } }

*Scanner: code-review/logic*

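One caveat on the suggested fix: `Handle::current().block_on(...)` panics when called from inside a Tokio worker thread, so a `Drop` impl in an async test usually does synchronous best-effort cleanup or hands the async drop to `spawn_blocking`. A std-only sketch of the RAII shape, with the database drop stubbed out so the pattern is testable:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether cleanup ran; stands in for the database actually dropping.
static CLEANED: AtomicBool = AtomicBool::new(false);

struct TestServer {
    db_name: String,
}

impl TestServer {
    // Placeholder for the real `db.drop().await`.
    fn cleanup(&self) {
        eprintln!("dropping test database {}", self.db_name);
        CLEANED.store(true, Ordering::SeqCst);
    }
}

// RAII cleanup: runs even when a test panics partway through.
impl Drop for TestServer {
    fn drop(&mut self) {
        self.cleanup();
    }
}
```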
@@ -4,0 +18,4 @@
db_name: String,
mongodb_uri: String,
}
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

The test code uses unwrap_or_else() followed by expect() in multiple places, which creates inconsistent error handling patterns. Specifically, line 21 uses unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into()) followed by expect("Failed to connect to MongoDB — is it running?") on line 28. This pattern should be consistent - either use expect() directly or handle errors more gracefully.

Suggested fix: Use consistent error handling by either removing the unwrap_or_else and using expect directly on the environment variable, or refactor to use proper error propagation with ? operator.

*Scanner: code-review/convention*

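A sketch of one consistent style, assuming the helper returns `Result` so failures propagate with `?`. The env-var fallback is a default, not an ignored error, so `unwrap_or_else` can stay. `connect` is a stand-in for `mongodb::Client::with_uri_str`:

```rust
// Hypothetical stand-in for mongodb::Client::with_uri_str.
fn connect(uri: &str) -> Result<String, String> {
    if uri.starts_with("mongodb://") {
        Ok(format!("client for {uri}"))
    } else {
        Err(format!("unsupported scheme in {uri}"))
    }
}

fn helper() -> Result<(), String> {
    // Keep the fallback for the default URI, but propagate real
    // connection failures with `?` instead of panicking via expect().
    let uri = std::env::var("TEST_MONGODB_URI")
        .unwrap_or_else(|_| "mongodb://localhost:27017".into());
    let _client = connect(&uri)?;
    Ok(())
}
```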
@@ -4,0 +32,4 @@
.await
.expect("Failed to connect to MongoDB — is it running?");
db.ensure_indexes().await.expect("Failed to create indexes");
Author
Owner

[medium] Complex boolean expression in test server startup

The TestServer::start() function contains a complex boolean expression in the AgentConfig construction that mixes environment variable fallbacks with hardcoded defaults, making it difficult to track which values are being used under what conditions.

Suggested fix: Extract the environment variable loading into separate variables with clear names, and use a more structured approach like a builder pattern or helper functions to make the configuration loading easier to follow and less error-prone.

*Scanner: code-review/complexity*

@@ -4,0 +74,4 @@
// Build the router with the agent extension
let app = api::routes::build_router()
.layer(axum::extract::Extension(Arc::new(agent)))
.layer(tower_http::cors::CorsLayer::permissive());
Author
Owner

[medium] Deeply nested control flow in server readiness check

The server readiness check in TestServer::start() uses a nested loop with repeated conditional checks that could be flattened to improve readability and reduce the chance of logic errors when modifying the retry mechanism.

Suggested fix: Replace the for loop with a while let Some(result) = client.get(...).send().await.ok() { ... } pattern or extract the health check logic into a separate function to flatten the nesting.

*Scanner: code-review/complexity*

@@ -4,0 +128,4 @@
.post(format!("{}{path}", self.base_url))
.json(body)
.send()
.await
Author
Owner

[medium] Hardcoded timeout in health check loop

The health check loop on lines 131-137 has a hardcoded timeout of 50ms between retries and a fixed number of attempts (50). This makes the test brittle and dependent on timing. If the server takes longer to start, the test may fail even though the server is functioning correctly.

Suggested fix: Make the retry interval and maximum attempts configurable or use a more adaptive approach to waiting for the server to become ready.

*Scanner: code-review/convention*

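A sketch of making the wait configurable instead of the fixed 50 x 50 ms loop; the attempt count and interval could come from env vars or the test config. `check` stands in for the HTTP health probe (synchronous here for illustration; the real harness would await the request):

```rust
use std::time::Duration;

// Poll `check` until it succeeds or the attempt budget runs out.
fn wait_until_ready(
    mut check: impl FnMut() -> bool,
    interval: Duration,
    max_attempts: u32,
) -> bool {
    for _ in 0..max_attempts {
        if check() {
            return true;
        }
        std::thread::sleep(interval);
    }
    false
}
```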
@@ -4,0 +154,4 @@
/// Get the unique database name for direct MongoDB access in tests.
pub fn db_name(&self) -> &str {
&self.db_name
}
Author
Owner

[medium] Potential panic in test cleanup when dropping database

The cleanup method on line 157 uses ok() to ignore errors when dropping the database. While this prevents panics, it hides potential failures in database cleanup that could leave test artifacts behind. The test suite should ensure cleanup is reliable or explicitly log failures.

Suggested fix: Consider logging cleanup failures or using a more robust cleanup strategy that ensures databases are properly dropped, rather than silently ignoring errors.

*Scanner: code-review/convention*

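A sketch of surfacing cleanup failures instead of discarding them with `.ok()`. `drop_database` is a stub for the real `db.drop().await`; the teardown stays best-effort (never panics) but failures now land in the test output:

```rust
// Hypothetical stand-in for the async database drop.
fn drop_database(name: &str) -> Result<(), String> {
    if name.is_empty() {
        Err("empty database name".into())
    } else {
        Ok(())
    }
}

// Best-effort cleanup that still logs failures instead of swallowing them.
fn cleanup(db_name: &str) -> bool {
    match drop_database(db_name) {
        Ok(()) => true,
        Err(e) => {
            eprintln!("warning: failed to drop test database {db_name}: {e}");
            false
        }
    }
}
```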
@@ -0,0 +7,4 @@
.unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into());
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
let db = client.database(&server.db_name());
Author
Owner

[medium] Hardcoded MongoDB URI in test code

The test code hardcodes the MongoDB connection string in multiple places instead of using environment variables or configuration. This makes tests less flexible and potentially insecure, as it exposes connection details directly in the source code rather than allowing them to be configured externally.

Suggested fix: Use a centralized configuration approach for database connections in tests, possibly through a shared test utility that handles environment variable loading and connection management consistently.

*Scanner: code-review/convention*

Author
Owner

[medium] Duplicated MongoDB connection setup in test helpers

Multiple test helper functions (insert_dast_target, insert_pentest_session, insert_attack_node, insert_dast_finding, count_docs) each create their own MongoDB client connection using the same hardcoded URI logic. This creates unnecessary connection overhead and violates DRY principle, making it harder to maintain connection configuration.

Suggested fix: Extract MongoDB client creation into a shared helper function that can be reused across all test helpers to reduce duplication and improve maintainability.

*Scanner: code-review/convention*

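A sketch of centralizing the lookup each helper currently duplicates. In the real helpers the `OnceLock` would hold the `mongodb::Client` itself so `insert_dast_target`, `count_docs`, etc. reuse one connection pool; the localhost default here is illustrative only, and credentials should come from the environment:

```rust
use std::sync::OnceLock;

static TEST_URI: OnceLock<String> = OnceLock::new();

// Single shared lookup of the test connection string; computed once,
// then handed out by reference to every helper.
fn test_mongodb_uri() -> &'static str {
    TEST_URI.get_or_init(|| {
        std::env::var("TEST_MONGODB_URI")
            .unwrap_or_else(|_| "mongodb://localhost:27017".into())
    })
}
```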
@@ -0,0 +9,4 @@
let db = client.database(&server.db_name());
let result = db
.collection::<mongodb::bson::Document>("dast_targets")
Author
Owner

[medium] Hardcoded MongoDB Credentials in Test Code

The test code contains hardcoded MongoDB credentials ('root:example') in multiple helper functions. While this is only in test files, it represents poor security hygiene and could lead to accidental exposure if the test code were ever committed to a public repository or if the credentials were reused in production code.

Suggested fix: Use environment variables or a secure configuration management system for database credentials even in test environments. Consider using a dedicated test database with unique credentials.

*Scanner: code-review/security | CWE: CWE-798*

@@ -0,0 +11,4 @@
let result = db
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,
Author
Owner

[medium] Inconsistent MongoDB connection handling across test helper functions

Each helper function creates its own MongoDB connection instead of reusing a shared connection. This leads to unnecessary resource consumption and potential connection pool exhaustion during test execution, especially since all functions connect to the same database.

Suggested fix: Create a single MongoDB client instance and share it among all helper functions, or refactor to use a common connection utility to avoid repeated connection establishment.

*Scanner: code-review/logic*

Author
Owner

[medium] Duplicated MongoDB connection setup

Multiple helper functions repeatedly establish MongoDB connections using the same pattern with environment variable fallback. This creates redundant code and potential inconsistency if the connection logic needs to change.

Suggested fix: Extract MongoDB connection setup into a shared helper function to reduce duplication and improve maintainability.

*Scanner: code-review/complexity*

@@ -0,0 +12,4 @@
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,
"base_url": format!("https://{name}.example.com"),
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

The test code uses unwrap() extensively when interacting with MongoDB connections and database operations. This violates the consistent error handling pattern established in the rest of the codebase where errors should be properly handled or propagated instead of panicking. This makes tests brittle and hides potential connection or database issues.

Suggested fix: Replace all unwrap() calls with proper error handling using ? operator or explicit error assertions to make tests more robust and maintainable.

*Scanner: code-review/convention*

Author
Owner

[high] Missing error handling for MongoDB operations

The test functions use .unwrap() on MongoDB operations which will panic if the database operations fail. This could cause tests to crash instead of properly reporting failures, making it difficult to diagnose issues with the cascade delete functionality.

Suggested fix: Replace .unwrap() with proper error handling using ? operator or explicit error checking to ensure tests fail gracefully when database operations don't succeed.

*Scanner: code-review/logic*

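The `unwrap()`-to-`?` migration these findings suggest looks roughly like the following std-only sketch. `insert_doc` is a stand-in for the fallible `mongodb` calls, and the test name is illustrative; Rust's test harness does accept `#[test]` functions that return `Result`:

```rust
use std::error::Error;

// Stand-in for a fallible database call; in the real tests this would be a
// mongodb insert_one / find call (the shape here is illustrative only).
fn insert_doc(healthy: bool) -> Result<(), std::io::Error> {
    if healthy {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::ConnectionRefused,
            "connection refused",
        ))
    }
}

// A test returning Result can propagate failures with `?` instead of
// panicking via `.unwrap()`; the error's Display output then appears in the
// failure report rather than a panic backtrace.
fn cascade_delete_removes_children() -> Result<(), Box<dyn Error>> {
    insert_doc(true)?; // was: insert_doc(true).unwrap()
    Ok(())
}

fn main() {
    assert!(cascade_delete_removes_children().is_ok());
    assert!(insert_doc(false).is_err());
}
```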
@@ -0,0 +147,4 @@
// All downstream data should be gone
assert_eq!(count_docs(&server, "dast_targets").await, 0);
assert_eq!(count_docs(&server, "pentest_sessions").await, 0);
Author
Owner

[high] Insecure Direct Object Reference (IDOR) Potential in Repository Deletion

The repository deletion endpoint (/api/v1/repositories/{id}) may be vulnerable to IDOR if proper authorization checks are missing. An attacker could potentially delete repositories they don't own by guessing or enumerating repository IDs. This requires further investigation of the actual implementation but is flagged due to the nature of cascading deletions which can expose data access patterns.

Suggested fix: Ensure that repository deletion endpoints perform proper ownership verification before allowing deletion. Implement RBAC checks to ensure users can only delete repositories they own or have explicit permission to delete.

*Scanner: code-review/security | CWE: CWE-639*

@@ -0,0 +12,4 @@
assert_eq!(body["data"].as_array().unwrap().len(), 0);
// Add a target
let resp = server
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

The test code uses .unwrap() extensively on deserialization and database operations, which can cause tests to panic instead of failing gracefully when unexpected data structures are returned. This makes test failures harder to debug and can mask underlying issues.

Suggested fix: Replace unwrap() calls with proper error handling using match expressions or assert! macros that provide better failure messages.

*Scanner: code-review/convention*

@@ -0,0 +14,4 @@
// Get the DB name from the test server by parsing the health response
// For now, we use a direct insert approach
let db = client.database(&server.db_name());
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

Similar to dast.rs, the findings.rs test file uses unwrap() on JSON parsing and database operations, which can lead to panics instead of controlled test failures when data doesn't match expectations.

Suggested fix: Replace unwrap() calls with proper error handling using match expressions or assert! macros that provide better failure messages.

*Scanner: code-review/convention*

@@ -0,0 +19,4 @@
let now = mongodb::bson::DateTime::now();
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": repo_id,
Author
Owner

[medium] Missing error handling for MongoDB connection in findings test

The MongoDB client creation and database access in the insert_finding helper function lacks proper error handling. If the MongoDB connection fails or the database name cannot be determined, the test will panic rather than fail gracefully, potentially masking underlying infrastructure issues.

Suggested fix: Add proper error handling around the MongoDB client creation and database access operations using ? operator or explicit error checking to ensure tests fail gracefully when database connectivity issues occur.

*Scanner: code-review/logic*

Author
Owner

[medium] Complex boolean expression in test helper

The insert_finding function contains a complex boolean expression with multiple nested conditions and error-handling branches, which makes the control flow difficult to follow and increases the risk of logic errors during maintenance.

Suggested fix: Extract the MongoDB connection setup and database selection into separate functions to simplify the main logic and improve readability.

*Scanner: code-review/complexity*

@@ -0,0 +32,4 @@
})
.await
.unwrap();
}
Author
Owner

[high] Incorrect MongoDB collection name in findings test

The test code attempts to insert findings into a MongoDB collection named 'findings', but based on typical application patterns and the context of the code, it should likely be inserting into a collection named 'vulnerabilities' or similar. This could lead to test data not being properly inserted or retrieved, causing test failures.

Suggested fix: Verify the correct MongoDB collection name for findings in your application schema and update the collection reference accordingly. Check if the application uses 'findings', 'vulnerabilities', or another collection name.

*Scanner: code-review/logic*

@@ -0,0 +67,4 @@
assert_eq!(body["total"], 1);
assert_eq!(body["data"][0]["title"], "SQL Injection");
// Filter by repo
Author
Owner

[medium] Deeply nested control flow in test assertions

Multiple test functions contain deeply nested assertion chains that make it hard to track which assertions apply to which conditions, increasing the chance of incorrect test behavior when modifying or extending tests.

Suggested fix: Flatten the nested assertions by extracting them into separate assertion functions or using more structured test approaches like table-driven tests.

*Scanner: code-review/complexity*

@@ -0,0 +82,4 @@
insert_finding(&server, "repo1", "Test Bug", "medium").await;
// Get the finding ID
let resp = server.get("/api/v1/findings").await;
Author
Owner

[medium] Potential race condition in bulk update test due to direct DB access

The bulk update test uses direct MongoDB access to insert findings before performing the API test. This creates a potential race condition where the API might not see the newly inserted data immediately, especially if there are caching layers or background processes involved. The test assumes immediate consistency between direct DB writes and API reads.

Suggested fix: Consider adding a small delay or retry mechanism after inserting findings via MongoDB before proceeding with API tests, or ensure that the API layer properly handles eventual consistency scenarios.

*Scanner: code-review/logic*

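The retry mechanism suggested for this write-then-read race can be a small poll helper. `eventually` is a hypothetical name, sketched with std only; in the real test the closure would hit the API and check for the inserted finding:

```rust
use std::thread;
use std::time::Duration;

// Poll `check` up to `attempts` times, sleeping between tries, so a test
// tolerates a short gap between a direct DB write and the API seeing it.
fn eventually(attempts: u32, delay: Duration, mut check: impl FnMut() -> bool) -> bool {
    for i in 0..attempts {
        if check() {
            return true;
        }
        if i + 1 < attempts {
            thread::sleep(delay);
        }
    }
    false
}

fn main() {
    let mut polls = 0;
    // Stand-in for "the API now returns the directly-inserted finding":
    // here the condition becomes true on the third poll.
    let ok = eventually(5, Duration::from_millis(1), || {
        polls += 1;
        polls >= 3
    });
    assert!(ok);
    assert_eq!(polls, 3);
}
```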
@@ -0,0 +7,4 @@
let resp = server.get("/api/v1/health").await;
assert_eq!(resp.status(), 200);
let body: serde_json::Value = resp.json().await.unwrap();
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

The health.rs test file also uses unwrap() on JSON parsing, which can cause tests to crash instead of failing gracefully when the API returns unexpected responses.

Suggested fix: Replace unwrap() calls with proper error handling using match expressions or assert! macros that provide better failure messages.

*Scanner: code-review/convention*

@@ -0,0 +7,4 @@
// Initially empty
let resp = server.get("/api/v1/repositories").await;
assert_eq!(resp.status(), 200);
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

The test code uses .unwrap() extensively on deserialization and database operations, which can cause tests to panic instead of properly reporting failures when the expected data structure doesn't match. This makes test failures harder to debug and can mask underlying issues.

Suggested fix: Replace .unwrap() calls with proper error handling using assert!(result.is_ok()) or similar assertions to make test failures more informative.

*Scanner: code-review/convention*

@@ -0,0 +14,4 @@
// Add a repository
let resp = server
.post(
"/api/v1/repositories",
Author
Owner

[medium] Inconsistent error handling with unwrap() in test code

Multiple .unwrap() calls in test code that could panic during test execution instead of providing clear failure information. This violates the principle of graceful error handling even in test code.

Suggested fix: Use explicit error checking with assertions like assert!(resp.json().await.is_ok()) instead of unwrapping.

*Scanner: code-review/convention*

@@ -0,0 +42,4 @@
let payload = json!({
"name": "dup-repo",
"git_url": "https://github.com/example/dup-repo.git",
Author
Owner

[medium] Complex boolean expression in test assertion

The test function 'add_duplicate_repository_fails' uses assert_ne!(resp.status(), 200), which would pass for any non-200 status code, including 5xx errors, potentially masking other issues. A more specific check would improve reliability.

Suggested fix: Replace assert_ne!(resp.status(), 200) with a more specific assertion like assert_eq!(resp.status(), 409) or assert!(resp.status() >= 400 && resp.status() < 500) to ensure we're specifically checking for conflict error codes.

*Scanner: code-review/complexity*

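A minimal sketch of the tighter assertion, using a bare `u16` in place of the HTTP client's status type (the helper name is illustrative):

```rust
// Assert the response is specifically a 4xx client error, so a 500 from a
// server-side bug fails the test instead of slipping past assert_ne!(_, 200).
fn assert_client_error(status: u16) {
    assert!(
        (400..500).contains(&status),
        "expected a 4xx client error, got {status}"
    );
}

fn main() {
    assert_client_error(409); // duplicate repository → 409 Conflict: passes
    // assert_client_error(500) would fail here, whereas
    // assert_ne!(500, 200) would silently pass.
}
```

Pinning the exact code with `assert_eq!(resp.status(), 409)` is stricter still, if the API contractually returns Conflict for duplicates.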
@@ -0,0 +12,4 @@
&json!({
"name": "stats-repo",
"git_url": "https://github.com/example/stats-repo.git",
}),
Author
Owner

[medium] Deeply nested control flow in test setup

The 'stats_overview_reflects_inserted_data' test contains multiple nested operations involving MongoDB client setup, database access, and document insertion that make the test flow harder to follow. The test mixes integration concerns with data manipulation logic.

Suggested fix: Extract the MongoDB setup and data insertion into separate helper functions to flatten the control flow and improve readability. Consider using a fixture pattern to manage test data setup.

*Scanner: code-review/complexity*

@@ -0,0 +19,4 @@
// Insert findings directly
let mongodb_uri = std::env::var("TEST_MONGODB_URI")
.unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into());
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
Author
Owner

[medium] Hardcoded MongoDB URI in test code

The test code hardcodes a MongoDB connection string with default credentials, which could expose sensitive information or cause issues if the test environment differs from expectations. This also makes tests less portable and harder to configure securely.

Suggested fix: Use environment variables consistently for all database connections in tests, and consider adding validation that the required environment variables are set before proceeding with the test.

*Scanner: code-review/convention*

@@ -0,0 +22,4 @@
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
let db = client.database(&server.db_name());
let now = mongodb::bson::DateTime::now();
Author
Owner

[medium] Hardcoded MongoDB Credentials in Test Code

The test code contains hardcoded MongoDB credentials in the form of a connection string with username 'root' and password 'example'. While this is in test code, it represents a potential security risk if these credentials were to be accidentally committed to version control or used in a non-test environment.

Suggested fix: Use environment variables or a proper test configuration mechanism to manage database credentials instead of hardcoding them. Even in test environments, sensitive information should not be hardcoded.

*Scanner: code-review/security | CWE: CWE-798*

@@ -0,0 +32,4 @@
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": "test-repo-id",
"fingerprint": format!("fp-{title}"),
Author
Owner

[high] Incorrect repository ID used in stats test

In the stats_overview_reflects_inserted_data test, the MongoDB insert operation uses hardcoded "test-repo-id" as the repo_id instead of using the actual repository ID that was created. This means the findings won't be associated with any real repository, causing the stats to not reflect the expected data.

Suggested fix: Use the actual repository ID from the previous POST request instead of hardcoding "test-repo-id". Store the repo_id from the first POST response and use it in the MongoDB insert.

*Scanner: code-review/logic*

@@ -1,4 +1,9 @@
// Integration tests for the compliance-agent crate.
// E2E / Integration tests for the compliance-agent API.
Author
Owner

[medium] Inconsistent test module naming

The test file was renamed from 'integration' to 'e2e' but the module name still references 'integration'. This creates confusion about the test scope and may lead to incorrect test discovery or execution patterns.

Suggested fix: Rename the module to 'e2e' to match the file name and test runner command, or update the comment to reflect the actual module name being used.

*Scanner: code-review/convention*

sharang added 1 commit 2026-03-30 09:12:40 +00:00
fix: synthesise Contains edges and improve cross-file resolution in code graph
Some checks failed
CI / Check (pull_request) Failing after 12m5s
CI / Detect Changes (pull_request) Has been skipped
CI / Deploy Agent (pull_request) Has been skipped
CI / Deploy Dashboard (pull_request) Has been skipped
CI / Deploy Docs (pull_request) Has been skipped
CI / Deploy MCP (pull_request) Has been skipped
08a1ee2f00
The code graph produced disconnected "islands" because:
1. No Contains edges were created between File/Module nodes and their
   children (functions, classes, structs), leaving file nodes isolated
2. Cross-file call resolution was too strict — calls like
   `crate::config::load` failed to resolve to `src/config.rs::load`

Fix:
- After resolving explicit parser edges, synthesise Contains edges by
  walking each node's qualified-name hierarchy and linking to the
  closest ancestor that exists in the node map
- Improve edge resolution with module-path matching: strip Rust
  prefixes (crate::, super::, self::) and try progressively shorter
  suffix matches for cross-file calls

Adds 4 new tests covering Contains edge synthesis, dedup with existing
edges, cross-file module path resolution, and parent qname lookup.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharang reviewed 2026-03-30 09:14:15 +00:00
sharang left a comment
Author
Owner

Compliance scan found 46 issue(s) in this PR:

  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Test server cleanup may silently ignore database drop errors
  • [medium] code-review/logic: Missing error handling for MongoDB connection in findings test
  • [medium] code-review/logic: Database connection reuse in test helpers
  • [medium] code-review/complexity: Duplicated MongoDB connection setup
  • [medium] code-review/logic: Missing database cleanup on test server drop
  • [medium] code-review/convention: Missing error propagation in main function
  • [medium] code-review/complexity: Complex boolean expression in test server startup
  • [high] code-review/security: Hardcoded MongoDB Credentials in Workflow Configuration
  • [low] code-review/convention: Hardcoded MongoDB URI in test helpers
  • [medium] code-review/logic: Incorrect Dependency Version Bump
  • [medium] code-review/convention: Direct MongoDB manipulation in integration tests
  • [medium] code-review/convention: Hardcoded MongoDB URI in test helper
  • [medium] code-review/complexity: Complex boolean expression in test assertions
  • [high] code-review/logic: Incorrect repository ID used in stats test
  • [low] code-review/convention: Hardcoded test database name prefix
  • [medium] code-review/convention: Inconsistent error handling in edge resolution
  • [high] code-review/logic: Missing Error Handling in E2E Test Step
  • [medium] code-review/convention: Potential panic in Contains edge synthesis
  • [low] code-review/convention: Unnecessary allow(dead_code) in lib.rs
  • [medium] code-review/logic: Unnecessary Dev Dependencies Added
  • [high] code-review/logic: Incorrect Git Checkout in Nightly Workflow
  • [low] code-review/convention: Unnecessary unwrap_or_default in Contains edge creation
  • [medium] code-review/complexity: Deeply nested control flow in server readiness check
  • [high] code-review/logic: Incorrect MongoDB collection name in findings test
  • [medium] code-review/complexity: Complex boolean expression in GitHub Actions workflow
  • [medium] code-review/complexity: Deeply nested control flow in Contains edge synthesis
  • [medium] code-review/complexity: Deeply nested control flow in test setup
  • [medium] code-review/complexity: Complex boolean expression in test assertion
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/complexity: Deeply nested control flow in test helper
  • [medium] code-review/logic: Potential Dead Code Removal
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/logic: Hardcoded MongoDB URI in test environment
  • [high] code-review/security: Insecure Direct Object Reference (IDOR) Potential in Repository Deletion
  • [medium] code-review/convention: Inconsistent error handling in E2E workflow
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [high] code-review/logic: Off-by-one error in find_parent_qname when handling empty segments
  • [medium] code-review/logic: Incorrect edge duplication prevention in Contains edge synthesis
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Hardcoded MongoDB URI in test code
  • [medium] code-review/convention: Inconsistent test module naming
  • [high] code-review/logic: Missing error handling in database operations
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test setup
  • [medium] code-review/complexity: Complex boolean expression in edge resolution
@@ -0,0 +14,4 @@
concurrency:
group: nightly-e2e
cancel-in-progress: true

[high] Hardcoded MongoDB Credentials in Workflow Configuration

The nightly CI workflow (.gitea/workflows/nightly.yml) contains hardcoded MongoDB credentials in the TEST_MONGODB_URI environment variable. These credentials are exposed in the workflow configuration and could be accessed by anyone with read access to the repository, potentially allowing unauthorized access to the test database.

Suggested fix: Use GitHub Actions secrets or a secure vault to store database credentials instead of hardcoding them in the workflow file. Reference the secret using ${{ secrets.MONGODB_PASSWORD }} in the URI.

Scanner: code-review/security | CWE: CWE-798

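A minimal sketch of the suggested fix, assuming a `MONGODB_PASSWORD` secret exists in the repository's secret store and the service container is reachable as `mongodb` (both names are illustrative):

```yaml
env:
  # Credentials come from the secret store, not the workflow file itself.
  TEST_MONGODB_URI: mongodb://root:${{ secrets.MONGODB_PASSWORD }}@mongodb:27017/?authSource=admin
```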
@@ -0,0 +20,4 @@
e2e:
name: E2E Tests
runs-on: docker
container:

[high] Incorrect Git Checkout in Nightly Workflow

The nightly workflow uses git checkout FETCH_HEAD which may not correctly handle shallow clones or specific commit references. When GITHUB_SHA is not set (e.g., during manual triggers), this could result in checking out an unexpected commit or failing to checkout properly.

Suggested fix: Use git checkout ${{ github.sha }} instead of FETCH_HEAD to ensure consistent checkout behavior regardless of how the workflow is triggered.

Scanner: code-review/logic

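The suggested change is a one-line substitution in the checkout step; this fragment assumes the workflow has already fetched the repository, as the existing steps do:

```yaml
# Pin the checkout to the triggering commit instead of whatever
# FETCH_HEAD happens to point at for this trigger type.
- name: Checkout commit
  run: git checkout ${{ github.sha }}
```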
@@ -0,0 +42,4 @@
| tar xz --strip-components=1 -C /usr/local/bin/ sccache-v0.9.1-x86_64-unknown-linux-musl/sccache
chmod +x /usr/local/bin/sccache
env:
RUSTC_WRAPPER: ""

[high] Missing Error Handling in E2E Test Step

The E2E test step does not explicitly check for failure of the cargo test command. If the tests fail, the workflow will continue without indicating failure, potentially masking test failures.

Suggested fix: Add explicit error handling or use set -e to ensure the workflow fails when tests do not pass.

Scanner: code-review/logic

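For reference, a `run:` step already fails the job on a non-zero exit code (the default Linux shell is typically `bash -e`), so a failing `cargo test` does fail the workflow; `set -euo pipefail` mainly makes that explicit and keeps the guarantee if more commands are later chained into the step:

```yaml
- name: Run E2E tests
  run: |
    set -euo pipefail  # fail the step on the first failing command
    cargo test -p compliance-agent --test e2e -- --test-threads=4
```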
@@ -0,0 +45,4 @@
RUSTC_WRAPPER: ""
- name: Run E2E tests
run: cargo test -p compliance-agent --test e2e -- --test-threads=4

[medium] Inconsistent error handling in E2E workflow

The nightly E2E workflow uses cargo test without the --quiet flag and doesn't capture or log test failures explicitly. If tests fail, the workflow may not properly report the failure to the CI system, potentially masking test failures.

Suggested fix: Add explicit error handling and logging around the test execution step, ensuring that any test failures are properly reported by the workflow.

Scanner: code-review/convention

@@ -0,0 +49,4 @@
- name: Show sccache stats
run: sccache --show-stats
if: always()

[medium] Complex boolean expression in GitHub Actions workflow

The workflow uses an 'if: always()' condition on the final step which could mask failures in previous steps. This makes it harder to detect when tests actually fail since the step will always run regardless of previous step outcomes.

Suggested fix: Remove the 'if: always()' condition and rely on standard workflow failure handling instead. If you need to show stats regardless, consider using a separate job or conditional logic that explicitly checks for success/failure states.

Scanner: code-review/complexity

@@ -1,6 +1,6 @@
FROM rust:1.94-bookworm AS builder
RUN cargo install dioxus-cli --version 0.7.3
RUN cargo install dioxus-cli --version 0.7.4

[medium] Incorrect Dependency Version Bump

The dioxus-cli version was bumped from 0.7.3 to 0.7.4 in the Dockerfile.dashboard. This change might introduce breaking changes or compatibility issues with existing code that depends on the previous version.

Suggested fix: Verify that all dependencies and integrations work correctly with dioxus-cli 0.7.4 before merging this change.

Scanner: code-review/logic

@@ -43,2 +43,4 @@
dashmap = { workspace = true }
tokio-stream = { workspace = true }
[dev-dependencies]

[medium] Unnecessary Dev Dependencies Added

Several dev-dependencies were added to Cargo.toml including compliance-core, reqwest, serde_json, tokio, mongodb, uuid, secrecy, axum, and tower-http. These are likely meant for testing but should be reviewed to ensure they're actually needed for development/testing purposes.

Suggested fix: Ensure these dependencies are only included in dev profiles or remove them if not necessary for the project's build process.

Scanner: code-review/logic

@@ -0,0 +12,4 @@
pub mod scheduler;
pub mod ssh;
#[allow(dead_code)]
pub mod trackers;

[low] Unnecessary allow(dead_code) in lib.rs

The #[allow(dead_code)] attribute on the trackers module in lib.rs suggests that dead code is intentionally allowed. However, this can mask potential issues with unused code and makes it harder to maintain the codebase.

Suggested fix: Remove the #[allow(dead_code)] attribute and address any actual dead code issues, or provide a clear justification for why the code is intentionally unused.

Scanner: code-review/convention

@@ -12,3 +1,1 @@
#[allow(dead_code)]
mod trackers;
mod webhooks;
use compliance_agent::{agent, api, config, database, scheduler, ssh, webhooks};

[medium] Potential Dead Code Removal

The removal of individual module declarations in main.rs and addition of a single use statement suggests refactoring. However, it's unclear whether all modules are properly exposed through the library interface. If any module is accessed directly elsewhere, this change could break compilation.

Suggested fix: Verify that all modules referenced in other parts of the codebase are still accessible via the new import structure.

Scanner: code-review/logic


[medium] Missing error propagation in main function

The main function in main.rs returns Result<(), Box<dyn std::error::Error>> but doesn't handle or propagate specific error types from the application components. This could lead to unhandled errors being silently ignored.

Suggested fix: Ensure that errors from the application components are properly handled and propagated rather than being boxed into a generic error type.

Scanner: code-review/convention

@@ -4,0 +10,4 @@
use compliance_agent::database::Database;
use compliance_core::AgentConfig;
use secrecy::SecretString;

[low] Hardcoded test database name prefix

The test database name uses a hardcoded prefix 'test_' which could collide with existing databases or make it harder to identify test-specific collections in MongoDB. Using a more distinctive prefix or namespace would improve test isolation and debugging.

Suggested fix: Consider using a more distinctive prefix like 'compliance_test_' or include a timestamp to reduce collision risks.

Scanner: code-review/convention

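A sketch of the suggested naming scheme; the `compliance_test_` prefix comes from the finding, while the pid-plus-timestamp suffix is one illustrative way to reduce collisions (the harness could equally keep using a uuid):

```rust
use std::process;
use std::time::{SystemTime, UNIX_EPOCH};

// Distinctive, collision-resistant test database name. The prefix is
// the reviewer's example; the pid/nanosecond suffix is an assumption.
fn test_db_name() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before epoch")
        .as_nanos();
    format!("compliance_test_{}_{nanos}", process::id())
}

fn main() {
    println!("{}", test_db_name());
}
```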
@@ -4,0 +13,4 @@
/// A running test server with a unique database.
pub struct TestServer {
pub base_url: String,

[medium] Missing database cleanup on test server drop

The TestServer struct does not implement Drop, which means the cleanup() method is never automatically called when a test server goes out of scope. This can lead to lingering test databases in MongoDB, especially if tests panic or are interrupted before cleanup completes.

Suggested fix: Implement the Drop trait for TestServer to ensure cleanup() is always called. Add impl Drop for TestServer { fn drop(&mut self) { tokio::runtime::Handle::current().block_on(self.cleanup()); } }

Scanner: code-review/logic

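A minimal sketch of Drop-based cleanup, using an `AtomicBool` as a stand-in for the real database drop (all names here are illustrative, not from the codebase). Note the fix suggested above is not quite safe as written: `Handle::current().block_on(...)` panics when called inside a running tokio runtime, so a real implementation would need `spawn_blocking` or a dedicated cleanup runtime:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Stand-in for the real TestServer; the flag models the database-drop
// side effect so the pattern can be shown without a live MongoDB.
struct TestServer {
    db_dropped: Arc<AtomicBool>,
}

impl TestServer {
    fn cleanup(&self) {
        // The real harness would drop the per-test database here,
        // via spawn_blocking or a dedicated runtime (not block_on,
        // which panics inside an active tokio runtime).
        self.db_dropped.store(true, Ordering::SeqCst);
    }
}

impl Drop for TestServer {
    fn drop(&mut self) {
        // Best-effort cleanup runs even if the test body panicked.
        self.cleanup();
    }
}

fn main() {
    let dropped = Arc::new(AtomicBool::new(false));
    {
        let _server = TestServer { db_dropped: dropped.clone() };
    } // `_server` goes out of scope here and Drop fires
    assert!(dropped.load(Ordering::SeqCst));
}
```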
@@ -4,0 +15,4 @@
pub struct TestServer {
pub base_url: String,
pub client: reqwest::Client,
db_name: String,

[medium] Inconsistent error handling with unwrap() in test setup

The test helper uses unwrap_or_else() with expect() inside the start() method, which can mask underlying configuration issues. The unwrap() calls on environment variables and database connections should be replaced with proper error propagation or more graceful fallback mechanisms to ensure tests don't silently fail due to misconfiguration.

Suggested fix: Replace unwrap_or_else(|_| ...) with explicit error handling that propagates failures or uses a more robust fallback strategy instead of panicking.

Scanner: code-review/convention

@@ -4,0 +31,4 @@
let db = Database::connect(&mongodb_uri, &db_name)
.await
.expect("Failed to connect to MongoDB — is it running?");
db.ensure_indexes().await.expect("Failed to create indexes");

[medium] Complex boolean expression in test server startup

The TestServer::start() function contains a complex boolean expression in the AgentConfig construction that mixes environment variable fallbacks with hardcoded defaults, making it difficult to trace which values are being used under what conditions.

Suggested fix: Extract the environment variable loading into separate variables with clear names, and use a more structured approach like a builder pattern or helper function to reduce the cognitive load when reading the config initialization.

Scanner: code-review/complexity

@@ -4,0 +75,4 @@
let app = api::routes::build_router()
.layer(axum::extract::Extension(Arc::new(agent)))
.layer(tower_http::cors::CorsLayer::permissive());

[medium] Deeply nested control flow in server readiness check

The server readiness check loop in TestServer::start() uses a nested structure with a for loop containing a conditional that breaks early, which increases cognitive complexity and makes it harder to follow the control flow.

Suggested fix: Flatten the control flow by extracting the health check logic into a separate function and using early returns instead of nesting, or restructure to avoid the explicit loop with break condition.

Scanner: code-review/complexity

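One way to flatten the loop, sketched with a closure standing in for the real HTTP health probe (names and signature are illustrative):

```rust
use std::thread;
use std::time::Duration;

// Polling lives in one helper with an early return, so start() just
// calls wait_for_ready(...) instead of nesting loop-and-break logic.
fn wait_for_ready(mut probe: impl FnMut() -> bool, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if probe() {
            return true; // early return replaces the nested break
        }
        thread::sleep(delay);
    }
    false
}

fn main() {
    // Simulated server that becomes healthy on the third poll.
    let mut calls = 0;
    let ready = wait_for_ready(
        || {
            calls += 1;
            calls >= 3
        },
        10,
        Duration::from_millis(1),
    );
    assert!(ready);
}
```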
@@ -4,0 +151,4 @@
.expect("DELETE request failed")
}
/// Get the unique database name for direct MongoDB access in tests.

[medium] Test server cleanup may silently ignore database drop errors

The cleanup() method uses .ok() when dropping the database, which suppresses any potential errors during cleanup. This can hide issues like connection problems or permission errors that might affect subsequent tests or leave stale data in the test database.

Suggested fix: Log or propagate errors from the database drop operation instead of silently ignoring them to ensure cleanup failures are detected.

Scanner: code-review/convention

@@ -0,0 +7,4 @@
.unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into());
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
let db = client.database(&server.db_name());

[low] Hardcoded MongoDB URI in test helpers

Multiple test helper functions repeatedly fetch the MongoDB URI from environment variables with a hardcoded default value. This creates redundancy and makes it harder to maintain consistent database connections across test helpers. The repeated code also increases the chance of inconsistencies if the default URI needs to change.

Suggested fix: Extract the MongoDB connection setup into a shared helper function that handles both environment variable retrieval and connection creation to reduce duplication.

Scanner: code-review/convention

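The shared helper could be as small as this; the fallback string is the one the test helpers already inline:

```rust
// Single place that owns the env-var lookup and the localhost fallback,
// so individual test helpers stop duplicating both.
fn test_mongodb_uri() -> String {
    std::env::var("TEST_MONGODB_URI")
        .unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into())
}

fn main() {
    println!("connecting to {}", test_mongodb_uri());
}
```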
@@ -0,0 +9,4 @@
let db = client.database(&server.db_name());
let result = db
.collection::<mongodb::bson::Document>("dast_targets")

[medium] Hardcoded MongoDB Credentials in Test Code

The test code contains hardcoded MongoDB credentials ('root:example') in multiple helper functions. While this is only in test files, it represents poor security hygiene and could lead to accidental exposure if the test code were ever committed to a public repository or if the credentials were reused in production code.

Suggested fix: Use environment variables or a secure configuration management system for database credentials even in test environments. Consider using a dedicated test database with unique credentials.

Scanner: code-review/security | CWE: CWE-798

@@ -0,0 +11,4 @@
let result = db
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,

[medium] Database connection reuse in test helpers

Each helper function creates its own MongoDB client connection instead of reusing the connection from the test server. This leads to unnecessary resource consumption and potential connection pool exhaustion during test execution.

Suggested fix: Modify the test helpers to accept a shared database connection or use a connection pool managed by the TestServer to avoid creating multiple connections per test helper call.

Scanner: code-review/logic


[medium] Duplicated MongoDB connection setup

Multiple helper functions repeatedly establish MongoDB connections using the same pattern, creating redundant code and potential inconsistency if connection parameters change. This increases maintenance burden and risk of bugs due to duplicated logic.

Suggested fix: Extract MongoDB connection setup into a shared helper function or use a test fixture pattern to avoid repeated connection logic across multiple functions.

Scanner: code-review/complexity

@@ -0,0 +12,4 @@
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,
"base_url": format!("https://{name}.example.com"),

[medium] Inconsistent error handling with unwrap() in test code

The test code uses unwrap() extensively when interacting with MongoDB connections and database operations. This violates the consistent error handling pattern established in the rest of the codebase where errors should be properly handled or propagated instead of panicking. The use of unwrap() in tests can mask underlying connection or database issues and make debugging harder.

Suggested fix: Replace unwrap() calls with proper error handling using ? operator or explicit error assertions to maintain consistency with the project's error handling patterns.

*Scanner: code-review/convention | *


[high] Missing error handling in database operations

The test functions use .unwrap() on database operations which will panic if the database operations fail. This could cause tests to crash instead of properly reporting test failures when database connectivity issues occur.

Suggested fix: Replace .unwrap() with proper error handling using ? operator or explicit error checking to ensure tests fail gracefully when database operations don't succeed.

*Scanner: code-review/logic | *
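The suggested shape can be sketched as follows (`fetch_status` is a hypothetical stand-in for a database call): the test returns `Result`, so a failed operation surfaces as a test error with context rather than a panic from `unwrap()`.

```rust
use std::error::Error;

// Hypothetical stand-in for a fallible database call.
fn fetch_status(ok: bool) -> Result<u16, Box<dyn Error>> {
    if ok {
        Ok(200)
    } else {
        Err("connection refused".into())
    }
}

// Test body in the suggested style: `?` propagates failures instead of
// panicking, so infrastructure errors are reported, not crashed on.
fn health_check() -> Result<(), Box<dyn Error>> {
    let status = fetch_status(true)?;
    assert_eq!(status, 200);
    Ok(())
}
```

The same shape works under `#[tokio::test]` for async database calls, since test functions may return `Result`.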

@@ -0,0 +147,4 @@
// All downstream data should be gone
assert_eq!(count_docs(&server, "dast_targets").await, 0);
assert_eq!(count_docs(&server, "pentest_sessions").await, 0);

[high] Insecure Direct Object Reference (IDOR) Potential in Repository Deletion

The test verifies cascade deletion functionality but doesn't validate authorization controls. If the actual implementation lacks proper access control checks, an attacker could potentially delete repositories they don't own by crafting malicious requests with arbitrary repo IDs. This would allow unauthorized data destruction.

Suggested fix: Ensure that repository deletion endpoints verify the requesting user's ownership or authorization before performing any cascade operations. Add explicit authorization checks in the delete endpoint handler.

*Scanner: code-review/security | CWE: CWE-285*

@@ -0,0 +12,4 @@
// We'll use the agent's internal DB through the stats endpoint to verify
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
// Get the DB name from the test server by parsing the health response

[medium] Hardcoded MongoDB URI in test helper

The insert_finding helper function contains a hardcoded MongoDB URI fallback that bypasses the normal test setup process. This creates an implicit dependency on environment variables and makes the test less portable and harder to maintain.

Suggested fix: Use a consistent test database setup mechanism that provides the MongoDB connection string through the TestServer rather than hardcoding it.

*Scanner: code-review/convention | *
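One way to apply the suggested fix (a sketch, assuming the `TEST_MONGODB_URI` variable named in the diff): require the variable explicitly, so a misconfigured environment fails loudly instead of silently falling back to a hardcoded default.

```rust
use std::env;

// Sketch: no hardcoded fallback URI. If the variable is missing, the
// test run aborts with an actionable message.
fn test_mongodb_uri() -> String {
    env::var("TEST_MONGODB_URI")
        .expect("TEST_MONGODB_URI must be set to run the E2E tests")
}
```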


[medium] Deeply nested control flow in test helper

The insert_finding helper function contains multiple nested operations including environment variable access, MongoDB client initialization, and document insertion that are tightly coupled. This makes it difficult to reason about failure modes and increases risk of bugs when modifying any part of the chain.

Suggested fix: Break down the helper into smaller, focused functions that handle one responsibility each (e.g., get DB name, connect to MongoDB, insert document) to improve maintainability and reduce coupling.

*Scanner: code-review/complexity | *

@@ -0,0 +14,4 @@
// Get the DB name from the test server by parsing the health response
// For now, we use a direct insert approach
let db = client.database(&server.db_name());

[medium] Hardcoded MongoDB URI in test environment

The MongoDB URI is hardcoded in the test file with a default value that may not match the actual test environment configuration. This can cause tests to fail when run in different environments or when the default credentials are changed.

Suggested fix: Use environment variables consistently for all database connection details and provide clear documentation about required environment variables for running integration tests.

*Scanner: code-review/logic | *

@@ -0,0 +20,4 @@
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": repo_id,
"fingerprint": format!("fp-{title}-{severity}"),

[medium] Missing error handling for MongoDB connection in findings test

The MongoDB client creation and database access in the insert_finding helper function lacks proper error handling. If the MongoDB connection fails or the database name cannot be determined, the test will panic rather than fail gracefully, potentially masking underlying infrastructure issues.

Suggested fix: Add proper error handling around the MongoDB client initialization and database access operations using ? operator or explicit error checking to ensure tests fail gracefully when database connectivity issues occur.

*Scanner: code-review/logic | *

@@ -0,0 +22,4 @@
"repo_id": repo_id,
"fingerprint": format!("fp-{title}-{severity}"),
"scanner": "test-scanner",
"scan_type": "sast",

[medium] Inconsistent error handling with unwrap() in test code

The test code uses .unwrap() extensively on deserialization and database operations, which can cause tests to panic instead of failing gracefully when the test environment is misconfigured or when the API returns unexpected responses. This makes test failures harder to debug and can mask underlying issues.

Suggested fix: Replace unwrap() calls with proper error handling using ? operator or explicit assertions to make test failures more informative and prevent panics.

*Scanner: code-review/convention | *

@@ -0,0 +27,4 @@
"description": format!("Test finding: {title}"),
"severity": severity,
"status": "open",
"created_at": now,

[medium] Direct MongoDB manipulation in integration tests

Integration tests directly manipulate MongoDB collections instead of using the public API, which violates the principle of testing through the public interface. This makes tests brittle and doesn't verify that the actual API endpoints work correctly.

Suggested fix: Refactor tests to use the public API for creating test data instead of direct database manipulation, ensuring tests validate the actual API behavior.

*Scanner: code-review/convention | *

@@ -0,0 +32,4 @@
})
.await
.unwrap();
}

[high] Incorrect MongoDB collection name in findings test

The test code attempts to insert findings into a MongoDB collection named 'findings', but based on the context and typical application structure, it should likely be inserting into a collection like 'vulnerabilities' or similar. This could lead to test data not being properly stored or retrieved during integration tests.

Suggested fix: Verify the correct MongoDB collection name for findings in your application schema and update the collection reference accordingly. Check if the actual application stores findings in a different collection than 'findings'.

*Scanner: code-review/logic | *

@@ -0,0 +102,4 @@
assert_eq!(body["data"]["status"], "resolved");
server.cleanup().await;
}

[medium] Complex boolean expression in test assertions

The test functions use deeply nested JSON path access patterns like body["data"][0]["name"] which are hard to read and prone to runtime errors if the structure changes. The complex boolean logic in assertions increases risk of false positives/negatives during maintenance.

Suggested fix: Extract JSON path access into helper functions with proper error handling to make assertions more readable and less error-prone.

*Scanner: code-review/complexity | *

@@ -0,0 +7,4 @@
// Initially empty
let resp = server.get("/api/v1/repositories").await;
assert_eq!(resp.status(), 200);

[medium] Inconsistent error handling with unwrap() in test code

The test code uses .unwrap() extensively on deserialization and database operations, which can cause tests to panic instead of properly reporting test failures when these operations fail. This makes it harder to debug test failures and can mask underlying issues.

Suggested fix: Replace .unwrap() calls with proper error handling using assert! or matches! macros to ensure tests fail gracefully with meaningful error messages rather than panicking.

*Scanner: code-review/convention | *

@@ -0,0 +41,4 @@
let server = TestServer::start().await;
let payload = json!({
"name": "dup-repo",

[medium] Complex boolean expression in test assertion

The test function 'add_duplicate_repository_fails' uses assert_ne!(resp.status(), 200), which can pass for any non-200 status code, including 500 errors or other unexpected failures. This makes the test imprecise and may mask real issues.

Suggested fix: Replace assert_ne!(resp.status(), 200) with a more specific assertion like assert_eq!(resp.status(), 409) or assert!(resp.status() >= 400 && resp.status() < 500) to ensure we're specifically testing for the duplicate repository error case.

*Scanner: code-review/complexity | *
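The suggested tightening can be sketched as a small helper (the exact conflict code, e.g. 409, depends on the API and is assumed here): pin the response to the 4xx client-error class rather than merely "not 200".

```rust
// Sketch: assert the 4xx class explicitly, so a 500 from the server
// fails the test instead of slipping past an assert_ne!(status, 200).
fn assert_client_error(status: u16) {
    assert!(
        (400..500).contains(&status),
        "expected a 4xx client error, got {status}"
    );
}
```

If the API documents a specific status for duplicates, `assert_eq!(resp.status(), 409)` is stricter still.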

@@ -0,0 +12,4 @@
&json!({
"name": "stats-repo",
"git_url": "https://github.com/example/stats-repo.git",
}),

[medium] Deeply nested control flow in test setup

The 'stats_overview_reflects_inserted_data' test contains multiple nested operations involving MongoDB client setup, database access, and document insertion that make the test flow harder to follow. The test mixes integration test logic with direct database manipulation which increases complexity.

Suggested fix: Extract the MongoDB setup and document insertion into separate helper functions to flatten the control flow and improve readability. Consider using a fixture pattern to manage the test database state.

*Scanner: code-review/complexity | *


[medium] Hardcoded MongoDB Credentials in Test Code

The test code contains hardcoded MongoDB credentials in the form of a connection string with username 'root' and password 'example'. While this is in test files, it represents a potential security risk if these credentials were accidentally committed to version control or used in production environments.

Suggested fix: Use environment variables or a secure configuration management system for database credentials even in test environments. Consider using a dedicated test database with unique credentials.

*Scanner: code-review/security | CWE: CWE-798*

@@ -0,0 +17,4 @@
.await;
// Insert findings directly
let mongodb_uri = std::env::var("TEST_MONGODB_URI")

[medium] Hardcoded MongoDB URI in test code

The test code hardcodes the MongoDB connection string with default values, making tests brittle and dependent on specific environment configurations. This violates test isolation principles and makes tests less portable.

Suggested fix: Use environment variables consistently for all database connections in tests, and provide better error handling when environment variables are missing rather than falling back to hardcoded defaults.

*Scanner: code-review/convention | *

@@ -0,0 +22,4 @@
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
let db = client.database(&server.db_name());
let now = mongodb::bson::DateTime::now();

[medium] Inconsistent error handling with unwrap() in test code

Multiple test files use .unwrap() on database operations and JSON parsing, which can cause tests to crash instead of failing gracefully when operations don't succeed. This violates the principle of robust test design where failures should be reported clearly.

Suggested fix: Replace .unwrap() calls with explicit error checking that converts failures into assertion failures rather than panicking.

*Scanner: code-review/convention | *

@@ -0,0 +32,4 @@
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": "test-repo-id",
"fingerprint": format!("fp-{title}"),

[high] Incorrect repository ID used in stats test

In the stats_overview_reflects_inserted_data test, the MongoDB insert operation uses hardcoded "test-repo-id" as the repo_id instead of using the actual repository ID that was created in the test. This means the findings won't be associated with any real repository, causing the stats to not reflect the expected data.

Suggested fix: Store the repository ID from the successful POST request and use it in the MongoDB insert operation instead of the hardcoded value.

*Scanner: code-review/logic | *

@@ -1,4 +1,9 @@
// Integration tests for the compliance-agent crate.
// E2E / Integration tests for the compliance-agent API.

[medium] Inconsistent test module naming

The test file was renamed from 'integration' to 'e2e' but the module name still references 'integration'. This creates confusion about the test scope and may lead to incorrect test discovery or execution patterns.

Suggested fix: Rename the module to 'e2e' to match the file name and test runner command, or update the comment to reflect the actual module name being used.

*Scanner: code-review/convention | *

@@ -18,0 +24,4 @@
///
/// Returns the first match found, or `None` if the node is a root.
fn find_parent_qname(qname: &str, node_map: &HashMap<String, NodeIndex>) -> Option<String> {
let mut current = qname.to_string();

[high] Off-by-one error in find_parent_qname when handling empty segments

The find_parent_qname function can produce incorrect results when dealing with qualified names that end with '::'. For example, given a node map containing "src/main.rs::config::" (with trailing '::'), calling find_parent_qname("src/main.rs::config::", &node_map) will incorrectly return Some("src/main.rs::") instead of None. This happens because rfind("::") finds the last '::' but truncating it leaves an empty string which may match a node if one exists with an empty string key.

Suggested fix: Add a check after current.truncate(pos) to ensure that current is not empty before checking node_map.contains_key(&current). If current becomes empty, immediately return None since there's no valid parent.

*Scanner: code-review/logic | *
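The suggested guard can be sketched like this (the walk-up loop is reconstructed from the description above, with `usize` standing in for `petgraph::graph::NodeIndex`): bail out as soon as truncation leaves an empty string, so a trailing `::` can never match an empty-string key.

```rust
use std::collections::HashMap;

// Sketch of the fix: walk up the "::"-separated qualified name, but
// return None immediately if truncation produces an empty string.
fn find_parent_qname(qname: &str, node_map: &HashMap<String, usize>) -> Option<String> {
    let mut current = qname.to_string();
    while let Some(pos) = current.rfind("::") {
        current.truncate(pos);
        if current.is_empty() {
            // Guard: a trailing "::" would otherwise leave "" as a key.
            return None;
        }
        if node_map.contains_key(&current) {
            return Some(current);
        }
    }
    None
}
```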

@@ -121,7 +148,48 @@ impl GraphEngine {
graph.add_edge(src, tgt, edge.kind.clone());

[medium] Deeply nested control flow in Contains edge synthesis

The Contains edge synthesis logic in build_petgraph has deeply nested control flow with multiple nested conditions and loops. The code walks up the qualified name hierarchy while checking for existing edges, which creates a complex flow that's hard to reason about and maintain.

Suggested fix: Flatten the nested structure by extracting the parent lookup logic into a separate function and using early returns instead of deep nesting. This would simplify the main control flow and make it easier to verify correctness.

*Scanner: code-review/complexity | *

@@ -125,0 +163,4 @@
.first()
.map(|n| n.graph_build_id.as_str())
.unwrap_or("");

[medium] Potential panic in Contains edge synthesis

In the build_petgraph function, when synthesizing Contains edges, the code directly indexes into node_map with node_map[qname] and node_map[&parent_qname] without checking if keys exist. If find_parent_qname returns a parent that doesn't exist in the node_map, this will panic during runtime.

Suggested fix: Use .get() instead of indexing with [] to safely access values from node_map, or ensure that find_parent_qname always returns valid keys that exist in the node_map.

*Scanner: code-review/convention | *
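A sketch of the `.get()`-based lookup (again with `usize` as a stand-in for `NodeIndex`, and a hypothetical `edge_endpoints` helper name): a parent name missing from `node_map` yields `None` instead of an index panic.

```rust
use std::collections::HashMap;

// Sketch: resolve both endpoints with `.get()`. The `?` operator
// short-circuits to None if either qualified name is absent, where
// indexing with [] would panic.
fn edge_endpoints(
    qname: &str,
    parent_qname: &str,
    node_map: &HashMap<String, usize>,
) -> Option<(usize, usize)> {
    let child = *node_map.get(qname)?;
    let parent = *node_map.get(parent_qname)?;
    Some((parent, child))
}
```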

@@ -125,0 +173,4 @@
for qname in &qualified_names {
if let Some(parent_qname) = find_parent_qname(qname, &node_map) {
let child_idx = node_map[qname];
let parent_idx = node_map[&parent_qname];

[medium] Incorrect edge duplication prevention in Contains edge synthesis

In the Contains edge synthesis logic, the check !graph.contains_edge(parent_idx, child_idx) only prevents adding duplicate edges based on the indices. However, it doesn't prevent adding multiple edges with different metadata (like file paths) between the same pair of nodes. This could lead to inconsistent graph state where the same logical relationship appears multiple times with different metadata.

Suggested fix: Consider using a more robust deduplication mechanism that also compares edge metadata, or ensure that all synthesized edges have consistent metadata across the graph.

*Scanner: code-review/logic | *

@@ -125,0 +174,4 @@
if let Some(parent_qname) = find_parent_qname(qname, &node_map) {
let child_idx = node_map[qname];
let parent_idx = node_map[&parent_qname];

[low] Unnecessary unwrap_or_default in Contains edge creation

In the Contains edge synthesis logic, file_paths.get(qname).cloned().unwrap_or_default() can silently provide empty strings when file paths aren't found. This might mask missing data or incorrect assumptions about node structure, especially since the code already checks for existence of nodes via node_map.

Suggested fix: Consider adding a debug assertion or logging when a file path is missing, rather than silently defaulting to an empty string.

*Scanner: code-review/convention | *

@@ -132,33 +200,62 @@ impl GraphEngine {
})

[medium] Complex boolean expression in edge resolution

The resolve_edge_target function contains a complex boolean expression that checks multiple conditions for edge resolution. The logic involves multiple string pattern matching operations with various prefixes and suffixes, making it difficult to follow and prone to subtle bugs when modifying the resolution strategies.

Suggested fix: Break down the complex conditionals into separate helper functions with clear names that describe their specific resolution strategy. This would make the intent clearer and reduce the chance of logical errors when adding new resolution types.

*Scanner: code-review/complexity | *

@@ -150,3 +224,2 @@
|| qualified.ends_with(&format!(".{target}"))
{
if qualified.ends_with(&suffix_pattern) || qualified.ends_with(&dot_pattern) {
return Some(*idx);

[medium] Inconsistent error handling in edge resolution

The resolve_edge_target function uses multiple resolution strategies but doesn't consistently handle cases where resolution fails. Specifically, the module-path matching strategy has a fallback that uses unwrap_or(target) which could lead to silent failures or incorrect behavior when the target string contains '::' but doesn't match any pattern.

Suggested fix: Replace unwrap_or(target) with explicit error handling or logging to make it clear when resolution fails. Consider returning an error or using a more robust fallback mechanism.

*Scanner: code-review/convention | *

sharang added 1 commit 2026-03-30 09:51:42 +00:00
fix: exclude E2E tests from regular CI (no MongoDB available)
All checks were successful
CI / Check (pull_request) Successful in 10m0s
CI / Detect Changes (pull_request) Has been skipped
CI / Deploy Agent (pull_request) Has been skipped
CI / Deploy Dashboard (pull_request) Has been skipped
CI / Deploy Docs (pull_request) Has been skipped
CI / Deploy MCP (pull_request) Has been skipped
d418f8386f
The E2E tests require MongoDB and only run in the nightly workflow.
Use --lib flag to run only unit tests in the regular CI check job.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharang reviewed 2026-03-30 09:53:25 +00:00
sharang left a comment

Compliance scan found 42 issue(s) in this PR:

  • [medium] code-review/convention: Potential panic in test cleanup when MongoDB connection fails
  • [high] code-review/logic: Missing dev-dependencies in compliance-agent Cargo.toml
  • [medium] code-review/convention: Duplicated MongoDB connection setup across multiple test helper functions
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/logic: Missing database cleanup on test server drop
  • [medium] code-review/convention: Potential panic in Contains edge synthesis due to unwrapping
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/complexity: Deeply nested control flow in E2E workflow
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/complexity: Deeply nested control flow in server readiness check
  • [medium] code-review/complexity: Complex boolean expression in test assertions
  • [medium] code-review/complexity: Complex boolean expression in test assertions
  • [medium] code-review/complexity: Deeply nested control flow in Contains edge synthesis
  • [high] code-review/logic: Missing error handling for database operations
  • [medium] code-review/logic: Database connection reused across tests
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [high] code-review/logic: Incorrect MongoDB document insertion in test helper
  • [medium] code-review/convention: Inconsistent test command in CI workflow
  • [low] code-review/complexity: Complex boolean expression in CI workflow
  • [medium] code-review/complexity: Complex boolean expression in test assertion
  • [low] code-review/convention: Unnecessary allow(dead_code) for public module
  • [medium] code-review/complexity: Complex boolean expression in edge resolution
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/complexity: Complex boolean expression in test server startup
  • [medium] code-review/security: Hardcoded MongoDB Credentials in Test Code
  • [medium] code-review/convention: Missing explicit error handling in main function
  • [high] code-review/security: Potential Insecure Direct Object Reference (IDOR) in Repository Deletion
  • [low] code-review/convention: Test assertion relies on specific ordering of edges
  • [low] code-review/convention: Hardcoded test database name prefix
  • [medium] code-review/convention: Inconsistent test module naming
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Inconsistent error handling in edge resolution
  • [medium] code-review/complexity: Deeply nested control flow in test setup
  • [medium] code-review/logic: Potential race condition in bulk update test
  • [high] code-review/logic: Incorrect module imports in main.rs
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [medium] code-review/convention: Inconsistent error handling with unwrap() in test code
  • [high] code-review/logic: Incorrect repository ID used in stats test
  • [high] code-review/logic: Off-by-one error in find_parent_qname when handling qualified names without '::'
  • [medium] code-review/complexity: Duplicated MongoDB connection setup
  • [medium] code-review/logic: Missing error handling in MongoDB connection setup
@@ -71,3 +71,3 @@
# Tests (reuses compilation artifacts from clippy)
- name: Tests (core + agent)
run: cargo test -p compliance-core -p compliance-agent
run: cargo test -p compliance-core -p compliance-agent --lib

[medium] Inconsistent test command in CI workflow

The test command for core and agent packages was changed from cargo test -p compliance-core -p compliance-agent to cargo test -p compliance-core -p compliance-agent --lib. This change may exclude integration tests from being run, potentially hiding failures in the agent's integration behavior.

Suggested fix: Revert to the previous command or explicitly include integration tests in the test suite.

*Scanner: code-review/convention | *


[low] Complex boolean expression in CI workflow

The test command in the CI workflow uses a complex boolean expression with multiple package specifications and feature flags that could be simplified for better readability and maintainability.

Suggested fix: Consider breaking down the cargo test command into separate steps or using a more structured approach to specify test targets and features.

*Scanner: code-review/complexity | *

@@ -0,0 +22,4 @@
runs-on: docker
container:
image: rust:1.94-bookworm
services:

[medium] Deeply nested control flow in E2E workflow

The E2E workflow contains nested control structures with multiple conditional steps that make the workflow logic harder to follow and increase the risk of errors when modifying individual steps.

Suggested fix: Flatten the nested control flow by extracting conditional logic into separate jobs or using workflow conditions to simplify the overall structure.

*Scanner: code-review/complexity | *

@@ -43,2 +43,4 @@
dashmap = { workspace = true }
tokio-stream = { workspace = true }
[dev-dependencies]

[high] Missing dev-dependencies in compliance-agent Cargo.toml

The compliance-agent/Cargo.toml file adds dev-dependencies but does not include all necessary dependencies for the E2E tests to run properly. Specifically, it's missing the 'tokio' dependency which is required for async runtime in tests.

Suggested fix: Add 'tokio = { workspace = true }' to the dev-dependencies section to ensure proper async runtime support during E2E testing.

*Scanner: code-review/logic | *

@@ -0,0 +12,4 @@
pub mod scheduler;
pub mod ssh;
#[allow(dead_code)]
pub mod trackers;

[low] Unnecessary allow(dead_code) for public module

The trackers module is marked with #[allow(dead_code)] but it's publicly exported via the lib.rs re-export. This suppresses dead code warnings for a public interface, which could hide unused code that should be removed or properly used.

Suggested fix: Either remove the #[allow(dead_code)] attribute if the module is actually used, or ensure the module is properly integrated into the public API.

*Scanner: code-review/convention | *

@@ -12,3 +1,1 @@
#[allow(dead_code)]
mod trackers;
mod webhooks;
use compliance_agent::{agent, api, config, database, scheduler, ssh, webhooks};

[high] Incorrect module imports in main.rs

In compliance-agent/src/main.rs, the use statements import modules from the compliance_agent crate, but the actual modules being imported are not defined in the lib.rs file. This will lead to compilation errors because the modules like 'agent', 'api', etc., are not properly exposed through the library interface.

Suggested fix: Ensure that all modules referenced in the use statement are properly exported in lib.rs or adjust the import paths to match the actual module structure.

*Scanner: code-review/logic | *


[medium] Missing explicit error handling in main function

The main function in main.rs uses Box<dyn std::error::Error> which can mask specific error types and make debugging harder. The code should handle specific error cases more explicitly rather than relying on dynamic dispatch.

Suggested fix: Replace Box<dyn std::error::Error> with explicit error handling for known error types, or use a more specific error type that captures the expected failure modes.

*Scanner: code-review/convention | *

@@ -4,0 +13,4 @@
/// A running test server with a unique database.
pub struct TestServer {
pub base_url: String,

[medium] Missing database cleanup on test server drop

The TestServer struct does not implement Drop, which means the cleanup() method is never called automatically when a test server goes out of scope. This can lead to lingering test databases in MongoDB, causing resource leaks and potential test interference.

Suggested fix: Implement the Drop trait for TestServer to ensure cleanup() is called when the server instance is dropped. Add impl Drop for TestServer { fn drop(&mut self) { tokio::runtime::Handle::current().block_on(self.cleanup()); } }

*Scanner: code-review/logic | *

@@ -4,0 +16,4 @@
pub base_url: String,
pub client: reqwest::Client,
db_name: String,
mongodb_uri: String,

[low] Hardcoded test database name prefix

The test database name is constructed with a hardcoded prefix 'test_' (line 19). While this helps identify test databases, it could potentially conflict with existing databases named similarly. Additionally, the UUID generation approach may not be suitable for all environments where uniqueness is critical.

Suggested fix: Consider making the database name prefix configurable via an environment variable or using a more robust naming scheme that ensures uniqueness across different test runs and environments.

*Scanner: code-review/convention | *

@@ -4,0 +18,4 @@
db_name: String,
mongodb_uri: String,
}

[medium] Inconsistent error handling with unwrap() in test code

The test code uses unwrap_or_else() followed by expect() in multiple places, which creates inconsistent error handling patterns. Specifically, line 21 uses unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into()) followed by expect("Failed to connect to MongoDB — is it running?") on line 28. This pattern should be consistent - either use expect() directly on the result or handle errors more gracefully.

Suggested fix: Use consistent error handling by either removing the unwrap_or_else and using expect() directly on the environment variable read, or refactor to use proper error propagation with ? operator.

*Scanner: code-review/convention | *

@@ -4,0 +32,4 @@
.await
.expect("Failed to connect to MongoDB — is it running?");
db.ensure_indexes().await.expect("Failed to create indexes");

[medium] Complex boolean expression in test server startup

The TestServer::start() function contains a complex boolean expression in the AgentConfig construction that mixes environment variable fallbacks with hardcoded defaults, making it difficult to track which values are being used under what conditions.

Suggested fix: Extract the environment variable loading into separate variables with clear names, and use a more structured approach like a builder pattern or helper functions to reduce the cognitive load when reading the config initialization.

*Scanner: code-review/complexity | *

@@ -4,0 +81,4 @@
.await
.expect("Failed to bind test server");
let port = listener.local_addr().expect("no local addr").port();

[medium] Deeply nested control flow in server readiness check

The server readiness check loop in TestServer::start() uses a nested structure with a for loop containing a conditional check and sleep, which could be flattened to improve readability and reduce potential logic errors during maintenance.

Suggested fix: Replace the for loop with a while let or tokio::time::sleep_until pattern to flatten the control flow and make the retry logic clearer.

*Scanner: code-review/complexity | *

@@ -4,0 +151,4 @@
.expect("DELETE request failed")
}
/// Get the unique database name for direct MongoDB access in tests.

[medium] Potential panic in test cleanup when MongoDB connection fails

In the cleanup method (lines 154-157), the results of mongodb::Client::with_uri_str() and the subsequent database drop are converted to Option via .ok() and discarded, so any failure during cleanup is silently ignored rather than surfaced. This can mask actual database connection issues during cleanup.

Suggested fix: Remove the .ok() call and allow potential errors to propagate, or explicitly log the error before ignoring it to maintain visibility into cleanup failures.

*Scanner: code-review/convention | *

@@ -0,0 +7,4 @@
.unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into());
let client = mongodb::Client::with_uri_str(&mongodb_uri).await.unwrap();
let db = client.database(&server.db_name());

[medium] Duplicated MongoDB connection setup across multiple test helper functions

Multiple helper functions (insert_dast_target, insert_pentest_session, insert_attack_node, insert_dast_finding, count_docs) all duplicate the MongoDB client initialization logic. This creates redundancy and potential inconsistencies if the connection parameters change. The connection setup should be centralized to avoid duplication and improve maintainability.

Suggested fix: Create a shared helper function for MongoDB connection setup that can be reused across all test helper functions to reduce duplication and ensure consistent connection handling.

*Scanner: code-review/convention | *

@@ -0,0 +9,4 @@
let db = client.database(&server.db_name());
let result = db
.collection::<mongodb::bson::Document>("dast_targets")

[medium] Hardcoded MongoDB Credentials in Test Code

The test code contains hardcoded MongoDB credentials ('root:example') in multiple helper functions. While this is only in test files, it represents poor security hygiene and could lead to unauthorized database access if the test environment is compromised or if these credentials are accidentally used in production code paths.

Suggested fix: Use environment variables or a secure configuration management system for database credentials even in test environments. Consider using a dedicated test database with restricted permissions.

Scanner: code-review/security | CWE: CWE-798

@@ -0,0 +11,4 @@
let result = db
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,

**[medium] Database connection not reused across tests** Each helper function creates its own MongoDB client connection instead of reusing the one from the test server. This leads to unnecessary resource consumption and potential connection pool exhaustion during test runs. Suggested fix: Pass a shared database connection or client instance to the helper functions rather than creating new connections in each function. *Scanner: code-review/logic | * <!-- compliance-fp:821299b13807e9a19b9e1ed216ef41dff72060f444deb0f9bf3d5fbaa6c5d363 -->

**[medium] Duplicated MongoDB connection setup** Multiple helper functions repeatedly establish MongoDB connections using the same URI retrieval and client creation pattern. This creates redundant code that's hard to maintain and increases risk of inconsistencies if the connection logic needs to change. Suggested fix: Extract MongoDB connection setup into a shared helper function to reduce duplication and improve maintainability. *Scanner: code-review/complexity | * <!-- compliance-fp:bacc9b89684ad14ff1299abb1a7cf92272bcce7f000b1d8e39b4c81927b44de0 -->
@@ -0,0 +12,4 @@
.collection::<mongodb::bson::Document>("dast_targets")
.insert_one(mongodb::bson::doc! {
"name": name,
"base_url": format!("https://{name}.example.com"),

**[high] Missing error handling for database operations** The test helper functions use `.unwrap()` on database operations which will panic if the database connection fails or operations fail. This could cause tests to crash instead of failing gracefully with a proper assertion. Suggested fix: Replace `.unwrap()` calls with proper error handling using `?` operator or explicit error assertions to make tests more robust. *Scanner: code-review/logic | * <!-- compliance-fp:0fcdf5def87f7cdb7a69aa0f87361d79e5d5822e2a24ead39f6800f3ed805abc -->

**[medium] Inconsistent error handling with unwrap() in test code** The test code uses `unwrap()` extensively when interacting with MongoDB connections and database operations. This violates the project's convention of proper error handling, which could hide failures during testing and make debugging difficult. The code should handle potential errors more gracefully instead of panicking. Suggested fix: Replace `unwrap()` calls with proper error handling using `?` operator or explicit error checking to maintain consistency with the project's error handling patterns. *Scanner: code-review/convention | * <!-- compliance-fp:97476c58e5efd4eb5fe7e4cfe5cf8b0bb09f358119112031033fa369f083ff45 -->
@@ -0,0 +147,4 @@
// All downstream data should be gone
assert_eq!(count_docs(&server, "dast_targets").await, 0);
assert_eq!(count_docs(&server, "pentest_sessions").await, 0);

**[high] Potential Insecure Direct Object Reference (IDOR) in Repository Deletion** The repository deletion endpoint (/api/v1/repositories/{repo_id}) may be vulnerable to IDOR if proper authorization checks are missing. An attacker could potentially delete repositories they don't own by guessing or enumerating valid repository IDs, especially if there's no ownership verification before deletion. Suggested fix: Implement proper authentication and authorization checks before allowing repository deletion. Ensure that users can only delete repositories they own or have explicit permission to delete. *Scanner: code-review/security | CWE: CWE-285* <!-- compliance-fp:bd27914f737daa52c5daeb876ce6deaf0b04b6513adf37d2e20bcf8c6241fc73 -->
@@ -0,0 +12,4 @@
assert_eq!(body["data"].as_array().unwrap().len(), 0);
// Add a target
let resp = server

**[medium] Inconsistent error handling with unwrap() in test code** The test code uses `.unwrap()` extensively on deserialization and JSON parsing operations. While this is acceptable in test code, it creates inconsistency with the production code's error handling patterns which likely use proper error propagation. This could mask potential issues during testing if the JSON structure changes unexpectedly. Suggested fix: Consider using more robust error handling patterns like `expect()` with descriptive messages or proper error propagation to maintain consistency with how errors should be handled in production code. *Scanner: code-review/convention | * <!-- compliance-fp:0abbac6de0a7badf47cceef05c9468bf716c13565eab29234691a7e5a9ab40f7 -->
@@ -0,0 +21,4 @@
"target_type": "webapp",
}),
)
.await;

**[medium] Deeply nested JSON access in test assertions** The test functions use deeply nested JSON path access patterns like `body["data"][0]["name"]` which are hard to read and prone to runtime errors if the structure changes. This makes the tests brittle and harder to maintain. Suggested fix: Extract JSON path access into helper functions or use more structured deserialization to make assertions clearer and safer. *Scanner: code-review/complexity | * <!-- compliance-fp:f29872ce84570cf392dd7ca7f9d7d0dfa2dcdc76e65b3e817e772eec34396f8a -->
@@ -0,0 +16,4 @@
// For now, we use a direct insert approach
let db = client.database(&server.db_name());
let now = mongodb::bson::DateTime::now();

**[medium] Missing error handling in MongoDB connection setup** In the `insert_finding` function, the MongoDB client creation and database access are wrapped in `.unwrap()` calls which will panic if the connection fails. This could cause tests to crash instead of failing gracefully when the test environment isn't properly configured. Suggested fix: handle the client-creation result explicitly (for example with `expect()` and a descriptive message) instead of `.unwrap()`; note that the `.unwrap_or_else(|_| "mongodb://root:example@localhost:27017/?authSource=admin".into())` fallback on the URI cannot panic, but it does silently mask a missing `TEST_MONGODB_URI`. *Scanner: code-review/logic | * <!-- compliance-fp:9d7bed2c5f134e6c91558a457de1336a4e35f1baf8a573efa3618d171020efd4 -->
@@ -0,0 +19,4 @@
let now = mongodb::bson::DateTime::now();
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": repo_id,

**[medium] Inconsistent error handling with unwrap() in test code** Similar to dast.rs, this test file also uses `.unwrap()` on JSON parsing and MongoDB operations. The MongoDB connection setup and document insertion both use unwrap(), which could hide issues in test environments where the database might not be properly configured. Suggested fix: Replace unwraps with proper error handling or use expect() with descriptive messages to improve test reliability and consistency with production error handling patterns. *Scanner: code-review/convention | * <!-- compliance-fp:a0a7e9a644c313f3313be25eb4d19764058bb9ec92d1ce5654f5b311e731748e -->
@@ -0,0 +22,4 @@
"repo_id": repo_id,
"fingerprint": format!("fp-{title}-{severity}"),
"scanner": "test-scanner",
"scan_type": "sast",

**[high] Incorrect MongoDB document insertion in test helper** The `insert_finding` helper function attempts to insert a document into the 'findings' collection but uses the `mongodb::bson::Document` type incorrectly. It should use the `mongodb::bson::doc!` macro directly instead of constructing a `Document` and then inserting it, which risks a compilation error or an incorrect BSON structure. Suggested fix: remove the redundant `mongodb::bson::Document` construction and pass the output of the `mongodb::bson::doc!` macro directly to the `insert_one` call. *Scanner: code-review/logic | * <!-- compliance-fp:245827395f1d6d4d65375e5ac3d3461501f4fb35f964f130f8f90c958d9d99f1 -->
@@ -0,0 +91,4 @@
.patch(
&format!("/api/v1/findings/{finding_id}/status"),
&json!({ "status": "resolved" }),
)

**[medium] Deeply nested JSON access in test assertions** Similar to dast.rs, the findings.rs tests use complex nested JSON access patterns like `body["data"][0]["_id"]["$oid"]` which increase cognitive load and risk of bugs when the API response structure changes. Suggested fix: Use structured deserialization or helper methods to simplify JSON path access and improve test robustness. *Scanner: code-review/complexity | * <!-- compliance-fp:8c1b459e365bc84cb231066c8cf5dab4aa32b83bc856808b0395ea065a39d3a5 -->
@@ -0,0 +115,4 @@
let resp = server.get("/api/v1/findings").await;
let body: serde_json::Value = resp.json().await.unwrap();
let ids: Vec<String> = body["data"]
.as_array()

**[medium] Potential race condition in bulk update test** In the `bulk_update_finding_status` test, the order of operations between getting finding IDs and updating them may lead to inconsistent state if other processes modify findings during this window. While this is a test scenario, it could mask real concurrency issues in production code. Suggested fix: Consider adding explicit synchronization or ensuring test isolation to prevent race conditions in multi-threaded test execution. *Scanner: code-review/logic | * <!-- compliance-fp:4bb0442449a77535cd5fec0a5bb198ad6403b4153804a8fbddab1604cad4828b -->
@@ -0,0 +7,4 @@
let resp = server.get("/api/v1/health").await;
assert_eq!(resp.status(), 200);
let body: serde_json::Value = resp.json().await.unwrap();

**[medium] Inconsistent error handling with unwrap() in test code** The health.rs test file also contains multiple uses of `.unwrap()` on JSON parsing operations. While these tests are simple, the pattern of using unwrap() instead of more explicit error handling creates inconsistency with how the application should handle potential JSON parsing failures in production. Suggested fix: Use more explicit error handling patterns such as `expect()` or proper error propagation to maintain consistency with production code error handling practices. *Scanner: code-review/convention | * <!-- compliance-fp:7d42c98eb9f00cdcdb7317e853138058613ec12004afd61be60f1ecd4ddefb43 -->
@@ -0,0 +7,4 @@
// Initially empty
let resp = server.get("/api/v1/repositories").await;
assert_eq!(resp.status(), 200);

**[medium] Inconsistent error handling with unwrap() in test code** The test code uses `.unwrap()` extensively on deserialization and database operations, which can cause tests to panic instead of properly reporting test failures when unexpected conditions occur. This makes test failures harder to debug and can mask underlying issues. Suggested fix: Replace `.unwrap()` calls with proper error handling using `assert!` or `expect()` to provide meaningful failure messages. For example: `resp.json().await.expect("Failed to parse JSON response")` *Scanner: code-review/convention | * <!-- compliance-fp:0d7cc17501177dcd3ebbebd6cc55f317b7fdc2228a42b54b1df42321388f4882 -->
@@ -0,0 +14,4 @@
// Add a repository
let resp = server
.post(
"/api/v1/repositories",

**[medium] Inconsistent error handling with unwrap() in test code** The test code uses `.unwrap()` extensively on deserialization and database operations, which can cause tests to panic instead of properly reporting test failures when unexpected conditions occur. This makes test failures harder to debug and can mask underlying issues. *Scanner: code-review/convention | * <!-- compliance-fp:224bb81514bf2d5ddd2badfa0d12f2e20407cd8b8b9b99586aff6ac5ba716984 -->
@@ -0,0 +29,4 @@
// List should now return 1
let resp = server.get("/api/v1/repositories").await;
let body: serde_json::Value = resp.json().await.unwrap();
let repos = body["data"].as_array().unwrap();

**[medium] Inconsistent error handling with unwrap() in test code** The test code uses `.unwrap()` extensively on deserialization and database operations, which can cause tests to panic instead of properly reporting test failures when unexpected conditions occur. This makes test failures harder to debug and can mask underlying issues. *Scanner: code-review/convention | * <!-- compliance-fp:a2fb8026d06e5c35159e084fd434444313bad98fe9f5acd762142ba80aebda04 -->
@@ -0,0 +17,4 @@
.await;
// Insert findings directly
let mongodb_uri = std::env::var("TEST_MONGODB_URI")

**[medium] Hardcoded MongoDB Credentials in Test Code** The test code contains hardcoded MongoDB credentials in the form of a connection string with username 'root' and password 'example'. While this is only in test files, it represents a potential security risk if these tests were to be run in an environment where such credentials could be exposed or misused. Suggested fix: Use environment variables or a secure configuration management system for database credentials even in test environments. Consider using a dedicated test database with unique credentials. *Scanner: code-review/security | CWE: CWE-798* <!-- compliance-fp:b3f0c02251fb2f33c6b0e791505cfdd929da3636ff2c76593e2b1bb09653cc9e -->

**[medium] Deeply nested control flow in test setup** The test function 'stats_overview_reflects_inserted_data' has deeply nested control flow with multiple async operations and database interactions that are not clearly separated. This increases the cognitive load when trying to understand the test flow and could lead to errors if any part of the setup is modified. Suggested fix: Extract the MongoDB setup and data insertion logic into separate helper functions to flatten the control flow and improve readability. *Scanner: code-review/complexity | * <!-- compliance-fp:b62cc126460ba06026d79b5e96396c139d16c8e222d8202291a286524c2e23b7 -->
@@ -0,0 +32,4 @@
db.collection::<mongodb::bson::Document>("findings")
.insert_one(mongodb::bson::doc! {
"repo_id": "test-repo-id",
"fingerprint": format!("fp-{title}"),

**[high] Incorrect repository ID used in stats test** In the `stats_overview_reflects_inserted_data` test, the MongoDB insert operation uses hardcoded 'test-repo-id' as the repo_id instead of using the actual repository ID that was created in the test. This means the findings won't be associated with any real repository and will cause incorrect stats aggregation. Suggested fix: Store the repository ID from the successful POST request and use it in the MongoDB insert operation instead of the hardcoded value. *Scanner: code-review/logic | * <!-- compliance-fp:58e393684ec38fb142d980eb7174be895282d0e0fa27cfed101b2a7a7fff61bb -->
@@ -0,0 +47,4 @@
}
let resp = server.get("/api/v1/stats/overview").await;
assert_eq!(resp.status(), 200);

**[medium] Complex boolean expression in test assertion** The test function 'stats_overview_reflects_inserted_data' contains a complex boolean expression when checking the response status and asserting multiple fields in the response body. This makes it harder to reason about what exactly is being validated and could lead to missed assertions if one part fails. Suggested fix: Break down the complex assertions into separate test steps or extract the response validation logic into helper functions to make each assertion's purpose clearer. *Scanner: code-review/complexity | * <!-- compliance-fp:81d1a1532274fcd407b66ed10bf64791d1aa8227acdcd6ebfcd680771d7458e5 -->
@@ -1,4 +1,9 @@
// Integration tests for the compliance-agent crate.
// E2E / Integration tests for the compliance-agent API.

**[medium] Inconsistent test module naming** The test file was renamed from 'mod.rs' to 'e2e.rs' but the module declaration still uses 'mod.rs' pattern. The test module name should match the file name for consistency. Suggested fix: Rename the file to match the module declaration or update the module declaration to match the file name. Since this is a test file, consider renaming to 'e2e.rs' and updating the module declaration accordingly. *Scanner: code-review/convention | * <!-- compliance-fp:d3092dba2fa76350d44f9b590a28468b619c34ea80a7523ad71ef78d7949a777 -->
@@ -18,0 +22,4 @@
/// 1. `"src/main.rs::config"` (trim last `::` segment)
/// 2. `"src/main.rs"` (trim again)
///
/// Returns the first match found, or `None` if the node is a root.

**[high] Infinite loop in `find_parent_qname` when handling qualified names without '::'** The `find_parent_qname` function incorrectly handles qualified names that do not contain any '::' separator. When `current.rfind("::")` returns `None`, it should immediately return `None` because there's no parent. However, the current implementation enters an infinite loop because `current` is never updated, so `current.rfind("::")` returns `None` again on every subsequent iteration. Suggested fix: Fix the loop condition to properly handle the case where there are no '::' separators. The function should return `None` immediately when `rfind("::")` returns `None`. *Scanner: code-review/logic | * <!-- compliance-fp:4f0c8fed46fedb2cd694d79018ff461ca30882bddb2a67889b803bf9400395b4 -->
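A self-contained sketch of the fixed walk, with a `known` set standing in for the engine's real node index; the early `?` return is the point — when `rfind("::")` yields `None` the node is a root, so the loop terminates:

```rust
use std::collections::HashSet;

// Trim one `::` segment per iteration; return the first ancestor present
// in `known`, or None as soon as no separator remains (root case).
fn find_parent_qname<'a>(qname: &str, known: &'a HashSet<String>) -> Option<&'a str> {
    let mut current = qname;
    loop {
        let idx = current.rfind("::")?; // None → root → stop, no infinite loop
        current = &current[..idx];
        if let Some(hit) = known.get(current) {
            return Some(hit.as_str());
        }
    }
}
```

With `known = {"src/main.rs"}`, `"src/main.rs::config::load"` resolves to `"src/main.rs"`, and `"src/main.rs"` itself returns `None`.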
@@ -121,7 +148,48 @@ impl GraphEngine {
graph.add_edge(src, tgt, edge.kind.clone());
Author
Owner

[medium] Deeply nested control flow in Contains edge synthesis

The Contains edge synthesis logic in build_petgraph has deeply nested control flow with multiple nested conditions and loops. The code walks up the qualified name hierarchy while checking for existing edges, which creates a complex flow that's hard to reason about and could introduce bugs during maintenance.

Suggested fix: Flatten the nested structure by extracting the parent lookup logic into a separate function and using early returns to avoid deep nesting. Consider using a more functional approach with iterators to simplify the edge creation logic.

*Scanner: code-review/complexity | *

**[medium] Deeply nested control flow in Contains edge synthesis** The Contains edge synthesis logic in `build_petgraph` has deeply nested control flow with multiple nested conditions and loops. The code walks up the qualified name hierarchy while checking for existing edges, which creates a complex flow that's hard to reason about and could introduce bugs during maintenance. Suggested fix: Flatten the nested structure by extracting the parent lookup logic into a separate function and using early returns to avoid deep nesting. Consider using a more functional approach with iterators to simplify the edge creation logic. *Scanner: code-review/complexity | * <!-- compliance-fp:ebb388b8c9a39790be19560562289778152605b8e0efa7c4437eb42fe14eb898 -->
@@ -122,3 +149,3 @@
resolved_edges.push(edge);
}
// Skip unresolved edges (cross-file, external deps) — conservative approach
}

**[medium] Potential panic in Contains edge synthesis due to unwrapping** In the `build_petgraph` function, there are several instances of `.unwrap_or("")` and `.cloned().unwrap_or_default()` which can lead to panics or incorrect behavior if the nodes collection is empty or if expected fields are missing. Specifically, when accessing `nodes.first()` and then calling `.as_str()` on potentially missing fields, this could cause runtime issues if the parsing output structure isn't consistent. Suggested fix: Use proper error propagation instead of unwrapping. Replace `unwrap_or("")` with explicit error handling or default values that don't risk panicking. Also consider adding assertions or early returns to validate that required fields exist before proceeding. *Scanner: code-review/convention | * <!-- compliance-fp:8fef57ee461770d6e5a6f05027d6682a43b24a46e96ec186f1217b0521ac7b2c -->
@@ -132,33 +200,62 @@ impl GraphEngine {
})
**[medium] Complex boolean expression in edge resolution** The `resolve_edge_target` function contains a complex boolean expression that checks multiple conditions for edge resolution. The logic involves multiple string pattern matching operations with various prefixes and suffixes, making it difficult to follow and prone to subtle bugs when modifying the resolution strategies. Suggested fix: Break down the complex conditionals into separate helper functions or early returns to improve readability and reduce the chance of logical errors when adding new resolution strategies. *Scanner: code-review/complexity | * <!-- compliance-fp:1df9276a3941f76ed4bfb6e0944d204c3e52011243f491b1b22c00b4e0e75c90 -->
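One way to sketch the suggested decomposition, assuming a simplified name-to-index map (the real `resolve_edge_target` works on the graph's own structures): each resolution strategy becomes a small function, and the top-level resolver just chains them with `or_else`:

```rust
use std::collections::HashMap;

// Strategy 1: exact qualified-name match.
fn resolve_exact(target: &str, names: &HashMap<String, usize>) -> Option<usize> {
    names.get(target).copied()
}

// Strategy 2: suffix match, "foo" -> "path/file.rs::foo", but only if
// exactly one node ends that way, so ambiguous matches are rejected.
fn resolve_suffix(target: &str, names: &HashMap<String, usize>) -> Option<usize> {
    let suffix = format!("::{target}");
    let mut hits = names.iter().filter(|(name, _)| name.ends_with(&suffix));
    let first = hits.next()?;
    if hits.next().is_some() {
        return None; // ambiguous: more than one candidate
    }
    Some(*first.1)
}

fn resolve_edge_target(target: &str, names: &HashMap<String, usize>) -> Option<usize> {
    resolve_exact(target, names).or_else(|| resolve_suffix(target, names))
}

fn main() {
    let mut names = HashMap::new();
    names.insert("src/lib.rs::foo".to_string(), 7);
    println!("{:?}", resolve_edge_target("foo", &names));
}
```

Adding a new resolution strategy then means writing one more small function and appending one more `.or_else(...)`, rather than editing a compound boolean.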
@@ -144,3 +218,3 @@
}
// Try matching just the function/type name (intra-file resolution)
// 2. Suffix match: "foo" → "path/file.rs::foo"
**[medium] Inconsistent error handling in edge resolution** The `resolve_edge_target` function uses multiple resolution strategies but doesn't consistently handle cases where resolution fails. Specifically, the module-path matching strategy has a fallback that uses `unwrap_or(target)` which could silently ignore resolution failures. Additionally, the function returns `None` for failed resolutions but the logic around `stripped` and `segments` processing could lead to missed matches without clear indication. Suggested fix: Replace `unwrap_or(target)` with explicit error handling or logging to ensure resolution failures are properly tracked. Consider using a more robust approach for handling the module-path matching that doesn't silently fall back to the original target. *Scanner: code-review/convention | * <!-- compliance-fp:db291b9f59cd056cf731606233f9adf47c61a007c50358bf7c59a62b24d6ab03 -->
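A minimal sketch of making the fallback explicit, with `eprintln!` standing in for whatever logging the project actually uses and an illustrative `crate::` prefix strategy. The point is that the old `unwrap_or(target)` path becomes a visible, logged branch:

```rust
// Hypothetical prefix strategy; the real code handles more patterns.
fn strip_crate_prefix(target: &str) -> Option<&str> {
    target.strip_prefix("crate::")
}

fn resolve_module_path(target: &str) -> &str {
    match strip_crate_prefix(target) {
        Some(stripped) => stripped,
        None => {
            // Previously: strip_crate_prefix(target).unwrap_or(target),
            // which hid the fact that no prefix matched.
            eprintln!("edge resolution: no crate:: prefix on {target:?}, keeping as-is");
            target
        }
    }
}

fn main() {
    println!("{}", resolve_module_path("crate::config::load"));
    println!("{}", resolve_module_path("config::load"));
}
```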
@@ -356,0 +461,4 @@
output.nodes.push(make_node("src/main.rs::config::load"));
let code_graph = engine.build_petgraph(output).unwrap();
**[low] Test assertion relies on specific ordering of edges** The test `test_contains_edges_synthesised` checks for the presence of Contains edges by collecting all edges and asserting their count, but it doesn't verify that the specific parent-child relationships are correctly formed. The assertion only checks that the sources contain the expected values, but doesn't ensure that the edges actually connect the correct nodes in the graph structure. Suggested fix: Enhance the test to also verify that the actual graph connections match the expected parent-child relationships by checking both source and target of the edges, not just the sources. *Scanner: code-review/convention | * <!-- compliance-fp:dc29a0cbe654ab4e6cda772de23ffc1bc46b39fd440b495dd484d974570e2030 -->
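A sketch of the stronger assertion the finding asks for, using illustrative node names (the real test works against the petgraph-backed code graph): collect `(source, target)` pairs into a set and compare against the expected pairs, so a miswired edge fails the test even when the count and the set of sources happen to match:

```rust
use std::collections::HashSet;

// Compare actual (source, target) edge pairs against the expected set,
// order-independently.
fn assert_edges_match(actual: &[(String, String)], expected: &[(&str, &str)]) {
    let actual: HashSet<(&str, &str)> = actual
        .iter()
        .map(|(s, t)| (s.as_str(), t.as_str()))
        .collect();
    let expected: HashSet<(&str, &str)> = expected.iter().copied().collect();
    assert_eq!(actual, expected, "Contains edges differ from expected pairs");
}

fn main() {
    let edges = vec![
        ("src/main.rs".to_string(), "src/main.rs::config".to_string()),
        (
            "src/main.rs::config".to_string(),
            "src/main.rs::config::load".to_string(),
        ),
    ];
    assert_edges_match(
        &edges,
        &[
            ("src/main.rs", "src/main.rs::config"),
            ("src/main.rs::config", "src/main.rs::config::load"),
        ],
    );
    println!("edge pairs verified");
}
```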
sharang merged commit 4388e98b5b into main 2026-03-30 10:04:07 +00:00
Reference: sharang/compliance-scanner-agent#52