fix: cascade-delete DAST targets, pentests, and downstream data on repo delete #50
Deleting a repo now cascade-deletes all DAST targets, scan runs, DAST findings, pentest sessions, attack chain nodes, and pentest messages. Previously, repo deletion left these 6 collections orphaned.
Compliance scan found 12 issue(s) in this PR:
@@ -237,5 +237,92 @@ pub async fn delete_repository(
        .delete_many(doc! { "repo_id": &id })
        .await;
    // Cascade delete DAST targets linked to this repo, and all their downstream data

[medium] Complex nested control flow in delete_repository function
The delete_repository function has deeply nested control flow with multiple if-let statements and while loops, making it difficult to follow the execution path and increasing the risk of logic errors.
Suggested fix: Refactor the cascading deletion logic into separate functions with clear responsibilities. Consider using early returns or extracting the cursor processing logic into helper functions.
Scanner: code-review/complexity
[high] Insecure Direct Object Reference (IDOR)
The `delete_repository` function allows deletion of repository-related data, including DAST targets and pentest sessions. However, there is no explicit authorization check to ensure that the requesting user has permission to delete these resources. This could allow unauthorized users to delete sensitive data associated with repositories they don't own.
Suggested fix: Add authentication and authorization checks before performing deletions. Verify that the authenticated user has appropriate permissions to delete the repository and its associated data.
Scanner: code-review/security | CWE: CWE-639
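A minimal sketch of the kind of ownership check this finding asks for. The `AuthContext` struct, `Role` enum, and field names are assumptions for illustration, not types from the actual codebase; the real handler would populate them from the authenticated session before any deletion runs:

```rust
// Hypothetical authorization check; names are assumptions, not the real DB layer.
#[derive(PartialEq)]
enum Role {
    Member,
    Admin,
}

struct AuthContext {
    user_id: String,
    role: Role,
}

/// Allow repo deletion only for the repo owner or an admin.
fn can_delete_repo(auth: &AuthContext, repo_owner_id: &str) -> bool {
    auth.role == Role::Admin || auth.user_id == repo_owner_id
}
```

In the handler, a check like this would run before the first `delete_many` call, returning 403 Forbidden on failure so no cascade is even started for an unauthorized requester.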
[medium] Potential Race Condition in Deletion Logic
Multiple database operations are performed sequentially within the cascade delete functions. There's a potential race condition where concurrent requests might interfere with each other during deletion, especially when deleting related entities like attack chain nodes or messages. This could result in inconsistent state or partial deletions.
Suggested fix: Implement atomic operations or use transactions where possible to ensure consistency during cascading deletions. Consider batching related deletions to reduce the window for race conditions.
Scanner: code-review/security | CWE: CWE-362
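One way to act on the batching suggestion is to collect all session ids up front, then clear each downstream collection with a single `delete_many(doc! { "session_id": { "$in": ids } })` call instead of one delete per session, shrinking the window in which concurrent requests can interleave. A sketch of the pure id-collection step (the `Session` struct here is a stand-in for the real model):

```rust
// Stand-in for the real session model.
struct Session {
    id: Option<String>, // hex ObjectId, None if never persisted
}

/// Gather all non-empty session ids once, deduplicated,
/// ready to be used in a single `$in` delete filter per collection.
fn collect_session_ids(sessions: &[Session]) -> Vec<String> {
    let mut ids: Vec<String> = sessions
        .iter()
        .filter_map(|s| s.id.clone())
        .filter(|id| !id.is_empty())
        .collect();
    ids.sort();
    ids.dedup(); // avoid redundant ids in the $in filter
    ids
}
```

For full atomicity the driver's multi-document transaction support would still be needed (replica set required); the batching alone only narrows the race window.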
@@ -238,2 +238,4 @@
        .await;
    // Cascade delete DAST targets linked to this repo, and all their downstream data
    // (scan runs, findings, pentest sessions, attack chains, messages)

[medium] Potential panic from unwrap_or_default() on empty ObjectId
The code uses `target.id.map(|oid| oid.to_hex()).unwrap_or_default()`, which can result in an empty string when the ObjectId is None. This is then checked with `if !target_id.is_empty()`, but the empty-string case is not explicitly handled, potentially leading to unexpected behavior when deleting targets.
Suggested fix: Consider explicitly handling the case where target.id is None by returning early or logging a warning, rather than relying on the default empty-string behavior.
Scanner: code-review/convention
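A sketch of the suggested fix: keep the `Option` instead of flattening it to `""`, so the None case is an explicit branch the loop can log and skip. The `Target` struct is a stand-in for the real model type:

```rust
// Stand-in for the real target model.
struct Target {
    id: Option<String>, // hex ObjectId
}

/// Returns the id to cascade on, or None so the caller can
/// log a warning and `continue` instead of testing for "".
fn target_hex_id(target: &Target) -> Option<&str> {
    match target.id.as_deref() {
        Some(id) if !id.is_empty() => Some(id),
        _ => None, // unsaved target: nothing to cascade-delete
    }
}
```

In the cursor loop this would read `let Some(target_id) = target_hex_id(&target) else { /* warn + continue */ };`, making the skip visible rather than implicit in a default value.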
@@ -239,1 +239,4 @@
    // Cascade delete DAST targets linked to this repo, and all their downstream data
    // (scan runs, findings, pentest sessions, attack chains, messages)
    if let Ok(mut cursor) = db.dast_targets().find(doc! { "repo_id": &id }).await {

[high] Missing error handling for database operations
Multiple database operations use `let _ = ...`, which silently ignores errors. If any of these operations fail (e.g., network issues, permission problems), the cascade delete will be incomplete and an inconsistent state may result.
Suggested fix: Handle errors appropriately instead of ignoring them, possibly logging failures and considering rollback mechanisms.
Scanner: code-review/logic
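A small helper along the lines the scanner suggests: instead of `let _ = ...`, the error is logged (here with `eprintln!` as a stand-in for whatever logger the project actually uses) and the success value stays available to the caller:

```rust
use std::fmt::Display;

/// Sketch: replace `let _ = db.xxx().delete_many(...).await` with
/// `log_on_err(db.xxx().delete_many(...).await, "xxx")` so failures
/// are recorded instead of silently dropped.
fn log_on_err<T, E: Display>(result: Result<T, E>, context: &str) -> Option<T> {
    match result {
        Ok(v) => Some(v),
        Err(e) => {
            // Stand-in for the project's logging mechanism.
            eprintln!("cascade delete: {context} failed: {e}");
            None
        }
    }
}
```

Whether to merely log or to abort and propagate is a design choice; for a cascade delete, propagating the first error (returning `Result` from `delete_repository`) is the stricter option the finding also allows.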
@@ -240,0 +241,4 @@
    // (scan runs, findings, pentest sessions, attack chains, messages)
    if let Ok(mut cursor) = db.dast_targets().find(doc! { "repo_id": &id }).await {
        use futures_util::StreamExt;
        while let Some(Ok(target)) = cursor.next().await {

[medium] Inconsistent error handling in cascade deletion functions
The cascade deletion functions use `let _ = ...` for all database operations, which silently ignores errors. This pattern is inconsistent with the rest of the codebase, where errors are properly handled or logged. The cascade delete operations should either propagate errors or log them appropriately.
Suggested fix: Replace `let _ = ...` with proper error handling. Either return errors from the function or log them using the appropriate logging mechanism.
Scanner: code-review/convention
@@ -240,0 +244,4 @@
        while let Some(Ok(target)) = cursor.next().await {
            let target_id = target.id.map(|oid| oid.to_hex()).unwrap_or_default();
            if !target_id.is_empty() {
                cascade_delete_dast_target(db, &target_id).await;

[high] Potential panic from unwrapping empty ObjectId
The code uses `unwrap_or_default()` on `target.id.map(|oid| oid.to_hex())`, which can return an empty string when the ObjectId is None. Later, `mongodb::bson::oid::ObjectId::parse_str(target_id)` is called with this potentially empty string; parsing an empty string into an ObjectId fails, and unwrapping that result would panic.
Suggested fix: Check that target_id is not empty before attempting to parse it as an ObjectId, or handle the error case properly instead of using unwrap_or_default().
Scanner: code-review/logic
@@ -240,0 +254,4 @@
        use futures_util::StreamExt;
        while let Some(Ok(session)) = cursor.next().await {
            let session_id = session.id.map(|oid| oid.to_hex()).unwrap_or_default();
            if !session_id.is_empty() {

[medium] Code duplication in session cleanup logic
The same pattern of deleting attack chain nodes, pentest messages, and DAST findings by session_id appears twice: once in delete_repository and once in cascade_delete_dast_target, increasing maintenance burden and the potential for inconsistencies.
Suggested fix: Extract the session cleanup logic into a reusable async function that takes the database connection and session ID as parameters.
Scanner: code-review/complexity
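A sketch of what the extraction could look like. The `SessionStore` trait and its method names are assumptions standing in for the real async DB layer; the point is that only one function knows which collections hang off a session:

```rust
// Hypothetical abstraction over the three per-session collections.
trait SessionStore {
    fn delete_attack_chain_nodes(&mut self, session_id: &str) -> usize;
    fn delete_pentest_messages(&mut self, session_id: &str) -> usize;
    fn delete_dast_findings(&mut self, session_id: &str) -> usize;
}

/// Single place that knows what "cleaning up a session" means.
/// Returns the total number of documents removed.
fn cleanup_session(store: &mut impl SessionStore, session_id: &str) -> usize {
    store.delete_attack_chain_nodes(session_id)
        + store.delete_pentest_messages(session_id)
        + store.delete_dast_findings(session_id)
}

/// Tiny in-memory stand-in used to exercise the helper.
struct MockStore {
    nodes: usize,
    messages: usize,
    findings: usize,
}

impl SessionStore for MockStore {
    fn delete_attack_chain_nodes(&mut self, _sid: &str) -> usize {
        std::mem::take(&mut self.nodes)
    }
    fn delete_pentest_messages(&mut self, _sid: &str) -> usize {
        std::mem::take(&mut self.messages)
    }
    fn delete_dast_findings(&mut self, _sid: &str) -> usize {
        std::mem::take(&mut self.findings)
    }
}
```

In the real codebase the same shape would likely be a plain `async fn cleanup_session(db: &Db, session_id: &str)` called from both delete_repository and cascade_delete_dast_target; the trait indirection here only exists so the sketch is testable without a database.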
@@ -240,0 +259,4 @@
            .attack_chain_nodes()
            .delete_many(doc! { "session_id": &session_id })
            .await;
        let _ = db

[high] Potential Command Injection via MongoDB Query
The code constructs MongoDB queries using user-provided data without proper sanitization. Specifically, in `cascade_delete_dast_target`, the `target_id` parameter is used directly in a query filter (`doc! { "_id": oid }`) where `oid` is parsed from `target_id`. If `target_id` contains malicious input that can be interpreted as a MongoDB operator, it could lead to unauthorized data access or modification.
Suggested fix: Validate and sanitize the `target_id` before parsing it into an ObjectId. Ensure that the input conforms to the expected format (a valid 24-character hexadecimal string) and reject any inputs that may contain MongoDB operators.
Scanner: code-review/security | CWE: CWE-94
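The validation this finding asks for is a one-liner. Note that `ObjectId::parse_str` already rejects anything that is not 24 hex characters, so this is largely belt-and-braces, but an explicit pre-check makes the contract visible and gives a clean rejection path:

```rust
/// Sketch of the suggested pre-validation: accept only a 24-character
/// hexadecimal string before handing the value to ObjectId::parse_str.
/// Anything containing MongoDB operator syntax ("$ne", braces, quotes)
/// fails this check trivially.
fn is_valid_object_id_hex(s: &str) -> bool {
    s.len() == 24 && s.bytes().all(|b| b.is_ascii_hexdigit())
}
```

The caller would bail out (e.g. return 400 Bad Request, or skip with a warning) whenever the check fails, instead of feeding the string into the parse.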
@@ -240,0 +269,4 @@
            .delete_many(doc! { "session_id": &session_id })
            .await;
        }
    }

[medium] Missing error propagation in cascade_delete_dast_target
The `cascade_delete_dast_target` function uses `let _ = ...` for all database operations, completely ignoring potential errors. This could lead to partial cleanup, where some related data is deleted but the rest is not, leaving the database in an inconsistent state.
Suggested fix: Either propagate errors from the database operations or implement proper error logging to ensure that failures during cascading deletes are not silently ignored.
Scanner: code-review/convention
@@ -240,0 +270,4 @@
            .await;
        }
    }
}

[medium] Unnecessary database operations in cascade delete
In `cascade_delete_dast_target`, after deleting pentest sessions, the code deletes DAST findings again using session_id. However, these findings were already deleted when the pentest sessions were deleted, so this results in redundant database calls that could be avoided.
Suggested fix: Remove the redundant deletion of DAST findings by session_id, since they are already deleted during pentest session deletion.
Scanner: code-review/logic
@@ -240,0 +272,4 @@
        }
    }
    let _ = db
        .pentest_sessions()

[high] High cyclomatic complexity in cascade_delete_dast_target function
The cascade_delete_dast_target function contains multiple nested operations with several database calls and conditional checks, creating high complexity that makes it prone to bugs and difficult to test.
Suggested fix: Break down this function into smaller, focused functions for each deletion category (pentest sessions, findings, scan runs, target deletion). Each function should handle one responsibility and return appropriate error types.
Scanner: code-review/complexity