feat: AI-driven automated penetration testing (#12)
Some checks failed
CI / Clippy (push) Failing after 1m51s
CI / Security Audit (push) Successful in 2m1s
CI / Tests (push) Has been skipped
CI / Detect Changes (push) Has been skipped
CI / Deploy Agent (push) Has been skipped
CI / Deploy Dashboard (push) Has been skipped
CI / Deploy Docs (push) Has been skipped
CI / Format (push) Failing after 42s
CI / Deploy MCP (push) Has been skipped
This commit was merged in pull request #12.

Cargo.lock (generated, 305 changed lines)
(Cargo.lock diff collapsed — generated file, 305 changed lines. Summary:)

Added packages: adler2 2.0.1, aes 0.8.4, arbitrary 1.4.2, bzip2 0.5.2,
bzip2-sys 0.1.13+1.0.8, cipher 0.4.4, constant_time_eq 0.3.1, crc 3.4.0,
crc-catalog 2.4.0, deflate64 0.1.11, derive_arbitrary 1.4.2, flate2 1.1.9,
foreign-types 0.3.2, foreign-types-shared 0.1.1, inout 0.1.4, lzma-rs 0.3.0,
lzma-sys 0.1.20, miniz_oxide 0.8.9, native-tls 0.2.18, openssl 0.10.75,
openssl-macros 0.1.1, simd-adler32 0.3.8, tokio-native-tls 0.3.1, xz2 0.1.7,
zeroize_derive 1.4.3, zip 2.4.2, zopfli 0.8.3.

Removed packages: itertools 0.12.1 (only 0.14.0 remains, so versioned
"itertools 0.14.0" dependency entries are renamed to plain "itertools").

Dependency changes: one workspace crate gains "zip"; another gains
"native-tls" and "tokio-native-tls"; zeroize gains "zeroize_derive"; one
crate's dependency list gains "hmac"; another swaps "heck 0.4.1" for
"heck 0.5.0".
@@ -29,3 +29,4 @@ hex = "0.4"
 uuid = { version = "1", features = ["v4", "serde"] }
 secrecy = { version = "0.10", features = ["serde"] }
 regex = "1"
+zip = { version = "2", features = ["aes-crypto", "deflate"] }
@@ -36,3 +36,4 @@ base64 = "0.22"
 urlencoding = "2"
 futures-util = "0.3"
 jsonwebtoken = "9"
+zip = { workspace = true }
@@ -1,6 +1,7 @@
 pub mod chat;
 pub mod dast;
 pub mod graph;
+pub mod pentest;
 
 use std::sync::Arc;
 
@@ -1108,7 +1109,7 @@ pub async fn list_scan_runs(
|
|||||||
}))
|
}))
|
||||||
}
|
}
|
||||||
|
|
||||||
async fn collect_cursor_async<T: serde::de::DeserializeOwned + Unpin + Send>(
|
pub(crate) async fn collect_cursor_async<T: serde::de::DeserializeOwned + Unpin + Send>(
|
||||||
mut cursor: mongodb::Cursor<T>,
|
mut cursor: mongodb::Cursor<T>,
|
||||||
) -> Vec<T> {
|
) -> Vec<T> {
|
||||||
use futures_util::StreamExt;
|
use futures_util::StreamExt;
|
||||||
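The visibility bump above exists so the new pentest handlers can reuse `collect_cursor_async`: it drains a MongoDB cursor, keeping documents that deserialize and skipping ones that fail. A minimal, synchronous, pure-std sketch of that pattern (the `Cursor` type here is a hypothetical stand-in for `mongodb::Cursor`, and the real function is async):

```rust
// Stand-in for mongodb::Cursor: yields a sequence of fallible documents.
struct Cursor<T> {
    items: Vec<Result<T, String>>,
}

impl<T> Cursor<T> {
    fn next(&mut self) -> Option<Result<T, String>> {
        if self.items.is_empty() {
            None
        } else {
            Some(self.items.remove(0))
        }
    }
}

// Mirror of the collect_cursor_async behavior: keep successes, report and
// skip failures instead of aborting the whole listing.
fn collect_cursor<T>(mut cursor: Cursor<T>) -> Vec<T> {
    let mut out = Vec::new();
    while let Some(item) = cursor.next() {
        match item {
            Ok(doc) => out.push(doc),
            Err(e) => eprintln!("skipping document: {e}"),
        }
    }
    out
}

fn main() {
    let cursor = Cursor {
        items: vec![Ok(10), Err("bad bson".to_string()), Ok(30)],
    };
    // The malformed middle document is dropped, not fatal.
    assert_eq!(collect_cursor(cursor), vec![10, 30]);
    println!("collected 2 of 3 documents");
}
```

The skip-on-error choice matters for listing endpoints: one corrupt document should not turn every `GET /sessions` into a 500.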
compliance-agent/src/api/handlers/pentest.rs — new file, 718 lines
@@ -0,0 +1,718 @@
|
|||||||
|
use std::sync::Arc;
|
||||||
|
|
||||||
|
use axum::extract::{Extension, Path, Query};
|
||||||
|
use axum::http::StatusCode;
|
||||||
|
use axum::response::sse::{Event, Sse};
|
||||||
|
use axum::response::IntoResponse;
|
||||||
|
use axum::Json;
|
||||||
|
use futures_util::stream;
|
||||||
|
use mongodb::bson::doc;
|
||||||
|
use serde::Deserialize;
|
||||||
|
|
||||||
|
use compliance_core::models::dast::DastFinding;
|
||||||
|
use compliance_core::models::pentest::*;
|
||||||
|
|
||||||
|
use crate::agent::ComplianceAgent;
|
||||||
|
use crate::pentest::PentestOrchestrator;
|
||||||
|
|
||||||
|
use super::{collect_cursor_async, ApiResponse, PaginationParams};
|
||||||
|
|
||||||
|
type AgentExt = Extension<Arc<ComplianceAgent>>;
|
||||||
|
|
||||||
|
#[derive(Deserialize)]
|
||||||
|
pub struct CreateSessionRequest {
|
||||||
|
pub target_id: String,
|
||||||
|
#[serde(default = "default_strategy")]
|
||||||
|
pub strategy: String,
|
||||||
|
pub message: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
fn default_strategy() -> String {
|
||||||
|
"comprehensive".to_string()
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Deserialize)]
|
||||||
|
pub struct SendMessageRequest {
|
||||||
|
pub message: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// POST /api/v1/pentest/sessions — Create a new pentest session and start the orchestrator
|
||||||
|
#[tracing::instrument(skip_all)]
|
||||||
|
pub async fn create_session(
|
||||||
|
Extension(agent): AgentExt,
|
||||||
|
Json(req): Json<CreateSessionRequest>,
|
||||||
|
) -> Result<Json<ApiResponse<PentestSession>>, (StatusCode, String)> {
|
||||||
|
let oid = mongodb::bson::oid::ObjectId::parse_str(&req.target_id).map_err(|_| {
|
||||||
|
(
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
"Invalid target_id format".to_string(),
|
||||||
|
)
|
||||||
|
})?;
|
||||||
|
|
||||||
|
// Look up the target
|
||||||
|
let target = agent
|
||||||
|
.db
|
||||||
|
.dast_targets()
|
||||||
|
.find_one(doc! { "_id": oid })
|
||||||
|
.await
|
||||||
|
.map_err(|e| {
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
format!("Database error: {e}"),
|
||||||
|
)
|
||||||
|
})?
|
||||||
|
.ok_or_else(|| (StatusCode::NOT_FOUND, "Target not found".to_string()))?;
|
||||||
|
|
||||||
|
// Parse strategy
|
||||||
|
let strategy = match req.strategy.as_str() {
|
||||||
|
"quick" => PentestStrategy::Quick,
|
||||||
|
"targeted" => PentestStrategy::Targeted,
|
||||||
|
"aggressive" => PentestStrategy::Aggressive,
|
||||||
|
"stealth" => PentestStrategy::Stealth,
|
||||||
|
_ => PentestStrategy::Comprehensive,
|
||||||
|
};
|
||||||
|
|
||||||
|
// Create session
|
||||||
|
let mut session = PentestSession::new(req.target_id.clone(), strategy);
|
||||||
|
session.repo_id = target.repo_id.clone();
|
||||||
|
|
||||||
|
let insert_result = agent
|
||||||
|
.db
|
||||||
|
.pentest_sessions()
|
||||||
|
.insert_one(&session)
|
||||||
|
.await
|
||||||
|
.map_err(|e| {
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
format!("Failed to create session: {e}"),
|
||||||
|
)
|
||||||
|
})?;
|
||||||
|
|
||||||
|
// Set the generated ID back on the session so the orchestrator has it
|
||||||
|
session.id = insert_result.inserted_id.as_object_id();
|
||||||
|
|
||||||
|
let initial_message = req.message.unwrap_or_else(|| {
|
||||||
|
format!(
|
||||||
|
"Begin a {} penetration test against {} ({}). \
|
||||||
|
Identify vulnerabilities and provide evidence for each finding.",
|
||||||
|
session.strategy, target.name, target.base_url,
|
||||||
|
)
|
||||||
|
});
|
||||||
|
|
||||||
|
// Spawn the orchestrator on a background task
|
||||||
|
let llm = agent.llm.clone();
|
||||||
|
let db = agent.db.clone();
|
||||||
|
let session_clone = session.clone();
|
||||||
|
let target_clone = target.clone();
|
||||||
|
tokio::spawn(async move {
|
||||||
|
let orchestrator = PentestOrchestrator::new(llm, db);
|
||||||
|
orchestrator
|
||||||
|
.run_session_guarded(&session_clone, &target_clone, &initial_message)
|
||||||
|
.await;
|
||||||
|
});
|
||||||
|
|
||||||
|
Ok(Json(ApiResponse {
|
||||||
|
data: session,
|
||||||
|
total: None,
|
||||||
|
page: None,
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// GET /api/v1/pentest/sessions — List pentest sessions
|
||||||
|
#[tracing::instrument(skip_all)]
|
||||||
|
pub async fn list_sessions(
|
||||||
|
Extension(agent): AgentExt,
|
||||||
|
Query(params): Query<PaginationParams>,
|
||||||
|
) -> Result<Json<ApiResponse<Vec<PentestSession>>>, StatusCode> {
|
||||||
|
let db = &agent.db;
|
||||||
|
let skip = (params.page.saturating_sub(1)) * params.limit as u64;
|
||||||
|
let total = db
|
||||||
|
.pentest_sessions()
|
||||||
|
.count_documents(doc! {})
|
||||||
|
.await
|
||||||
|
.unwrap_or(0);
|
||||||
|
|
||||||
|
let sessions = match db
|
||||||
|
.pentest_sessions()
|
||||||
|
.find(doc! {})
|
||||||
|
.sort(doc! { "started_at": -1 })
|
||||||
|
.skip(skip)
|
||||||
|
.limit(params.limit)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(cursor) => collect_cursor_async(cursor).await,
|
||||||
|
Err(e) => {
|
||||||
|
tracing::warn!("Failed to fetch pentest sessions: {e}");
|
||||||
|
Vec::new()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(Json(ApiResponse {
|
||||||
|
data: sessions,
|
||||||
|
total: Some(total),
|
||||||
|
page: Some(params.page),
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// GET /api/v1/pentest/sessions/:id — Get a single pentest session
|
||||||
|
#[tracing::instrument(skip_all, fields(session_id = %id))]
|
||||||
|
pub async fn get_session(
|
||||||
|
Extension(agent): AgentExt,
|
||||||
|
Path(id): Path<String>,
|
||||||
|
) -> Result<Json<ApiResponse<PentestSession>>, StatusCode> {
|
||||||
|
let oid =
|
||||||
|
mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
|
||||||
|
|
||||||
|
let session = agent
|
||||||
|
.db
|
||||||
|
.pentest_sessions()
|
||||||
|
.find_one(doc! { "_id": oid })
|
||||||
|
.await
|
||||||
|
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
|
||||||
|
.ok_or(StatusCode::NOT_FOUND)?;
|
||||||
|
|
||||||
|
Ok(Json(ApiResponse {
|
||||||
|
data: session,
|
||||||
|
total: None,
|
||||||
|
page: None,
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// POST /api/v1/pentest/sessions/:id/chat — Send a user message and trigger next orchestrator iteration
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn send_message(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
    Json(req): Json<SendMessageRequest>,
) -> Result<Json<ApiResponse<PentestMessage>>, (StatusCode, String)> {
    let oid = mongodb::bson::oid::ObjectId::parse_str(&id)
        .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid session ID".to_string()))?;

    // Verify session exists and is running
    let session = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .map_err(|e| {
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                format!("Database error: {e}"),
            )
        })?
        .ok_or_else(|| (StatusCode::NOT_FOUND, "Session not found".to_string()))?;

    if session.status != PentestStatus::Running && session.status != PentestStatus::Paused {
        return Err((
            StatusCode::BAD_REQUEST,
            format!("Session is {}, cannot send messages", session.status),
        ));
    }

    // Look up the target
    let target_oid =
        mongodb::bson::oid::ObjectId::parse_str(&session.target_id).map_err(|_| {
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Invalid target_id in session".to_string(),
            )
        })?;

    let target = agent
        .db
        .dast_targets()
        .find_one(doc! { "_id": target_oid })
        .await
        .map_err(|e| {
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                format!("Database error: {e}"),
            )
        })?
        .ok_or_else(|| {
            (
                StatusCode::NOT_FOUND,
                "Target for session not found".to_string(),
            )
        })?;

    // Store user message
    let session_id = id.clone();
    let user_msg = PentestMessage::user(session_id.clone(), req.message.clone());
    let _ = agent.db.pentest_messages().insert_one(&user_msg).await;

    let response_msg = user_msg.clone();

    // Spawn orchestrator to continue the session
    let llm = agent.llm.clone();
    let db = agent.db.clone();
    let message = req.message.clone();
    tokio::spawn(async move {
        let orchestrator = PentestOrchestrator::new(llm, db);
        orchestrator
            .run_session_guarded(&session, &target, &message)
            .await;
    });

    Ok(Json(ApiResponse {
        data: response_msg,
        total: None,
        page: None,
    }))
}

/// GET /api/v1/pentest/sessions/:id/stream — SSE endpoint for real-time events
///
/// Returns recent messages as SSE events (polling approach).
/// True real-time streaming with broadcast channels will be added in a future iteration.
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn session_stream(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
) -> Result<Sse<impl futures_util::Stream<Item = Result<Event, std::convert::Infallible>>>, StatusCode>
{
    let oid =
        mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;

    // Verify session exists
    let _session = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
        .ok_or(StatusCode::NOT_FOUND)?;

    // Fetch recent messages for this session
    let messages: Vec<PentestMessage> = match agent
        .db
        .pentest_messages()
        .find(doc! { "session_id": &id })
        .sort(doc! { "created_at": 1 })
        .limit(100)
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(_) => Vec::new(),
    };

    // Fetch recent attack chain nodes
    let nodes: Vec<AttackChainNode> = match agent
        .db
        .attack_chain_nodes()
        .find(doc! { "session_id": &id })
        .sort(doc! { "started_at": 1 })
        .limit(100)
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(_) => Vec::new(),
    };

    // Build SSE events from stored data
    let mut events: Vec<Result<Event, std::convert::Infallible>> = Vec::new();

    for msg in &messages {
        let event_data = serde_json::json!({
            "type": "message",
            "role": msg.role,
            "content": msg.content,
            "created_at": msg.created_at.to_rfc3339(),
        });
        if let Ok(data) = serde_json::to_string(&event_data) {
            events.push(Ok(Event::default().event("message").data(data)));
        }
    }

    for node in &nodes {
        let event_data = serde_json::json!({
            "type": "tool_execution",
            "node_id": node.node_id,
            "tool_name": node.tool_name,
            "status": node.status,
            "findings_produced": node.findings_produced,
        });
        if let Ok(data) = serde_json::to_string(&event_data) {
            events.push(Ok(Event::default().event("tool").data(data)));
        }
    }

    // Add session status event
    let session = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .ok()
        .flatten();

    if let Some(s) = session {
        let status_data = serde_json::json!({
            "type": "status",
            "status": s.status,
            "findings_count": s.findings_count,
            "tool_invocations": s.tool_invocations,
        });
        if let Ok(data) = serde_json::to_string(&status_data) {
            events.push(Ok(Event::default().event("status").data(data)));
        }
    }

    Ok(Sse::new(stream::iter(events)))
}

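The stream handler above wraps each JSON payload in a named SSE event. On the wire, `Event::default().event(name).data(json)` serializes to a plain `event:`/`data:` frame terminated by a blank line. A std-only sketch of that framing (the `sse_frame` helper is hypothetical, not part of the diff, and assumes single-line data payloads, which the compact `serde_json::to_string` output here guarantees):

```rust
/// What `Event::default().event(name).data(json)` becomes on the wire.
fn sse_frame(event: &str, data: &str) -> String {
    // SSE: one "event:" line naming the event type, one "data:" line per
    // payload line, then a blank line to terminate the frame.
    format!("event: {event}\ndata: {data}\n\n")
}

fn main() {
    let frame = sse_frame("status", r#"{"type":"status","status":"running"}"#);
    assert!(frame.starts_with("event: status\n"));
    assert!(frame.ends_with("\n\n"));
    print!("{frame}");
}
```

On the client side, an `EventSource` listener registered with `addEventListener("tool", …)` would receive only the `tool`-typed frames, which is why the handler names each event rather than using the default `message` type for everything.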
/// POST /api/v1/pentest/sessions/:id/stop — Stop a running pentest session
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn stop_session(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
) -> Result<Json<ApiResponse<PentestSession>>, (StatusCode, String)> {
    let oid = mongodb::bson::oid::ObjectId::parse_str(&id)
        .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid session ID".to_string()))?;

    let session = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Database error: {e}")))?
        .ok_or_else(|| (StatusCode::NOT_FOUND, "Session not found".to_string()))?;

    if session.status != PentestStatus::Running {
        return Err((
            StatusCode::BAD_REQUEST,
            format!("Session is {}, not running", session.status),
        ));
    }

    agent
        .db
        .pentest_sessions()
        .update_one(
            doc! { "_id": oid },
            doc! { "$set": {
                "status": "failed",
                "completed_at": mongodb::bson::DateTime::now(),
                "error_message": "Stopped by user",
            }},
        )
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Database error: {e}")))?;

    let updated = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Database error: {e}")))?
        .ok_or_else(|| (StatusCode::NOT_FOUND, "Session not found after update".to_string()))?;

    Ok(Json(ApiResponse {
        data: updated,
        total: None,
        page: None,
    }))
}

/// GET /api/v1/pentest/sessions/:id/attack-chain — Get attack chain nodes for a session
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn get_attack_chain(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
) -> Result<Json<ApiResponse<Vec<AttackChainNode>>>, StatusCode> {
    // Verify the session ID is valid
    let _oid =
        mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;

    let nodes = match agent
        .db
        .attack_chain_nodes()
        .find(doc! { "session_id": &id })
        .sort(doc! { "started_at": 1 })
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(e) => {
            tracing::warn!("Failed to fetch attack chain nodes: {e}");
            Vec::new()
        }
    };

    let total = nodes.len() as u64;
    Ok(Json(ApiResponse {
        data: nodes,
        total: Some(total),
        page: None,
    }))
}

/// GET /api/v1/pentest/sessions/:id/messages — Get messages for a session
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn get_messages(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
    Query(params): Query<PaginationParams>,
) -> Result<Json<ApiResponse<Vec<PentestMessage>>>, StatusCode> {
    let _oid =
        mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;

    let skip = (params.page.saturating_sub(1)) * params.limit as u64;
    let total = agent
        .db
        .pentest_messages()
        .count_documents(doc! { "session_id": &id })
        .await
        .unwrap_or(0);

    let messages = match agent
        .db
        .pentest_messages()
        .find(doc! { "session_id": &id })
        .sort(doc! { "created_at": 1 })
        .skip(skip)
        .limit(params.limit)
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(e) => {
            tracing::warn!("Failed to fetch pentest messages: {e}");
            Vec::new()
        }
    };

    Ok(Json(ApiResponse {
        data: messages,
        total: Some(total),
        page: Some(params.page),
    }))
}

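The message and finding handlers both convert the 1-based `page` query parameter into a Mongo skip count, with `saturating_sub` guarding against a client sending `page=0`. The arithmetic in isolation (hypothetical `page_skip` free function mirroring the handler's expression; `limit` is `i64` to match the cursor API):

```rust
/// 1-based page number to a skip count; page 0 is clamped to page 1
/// by `saturating_sub` instead of underflowing the u64.
fn page_skip(page: u64, limit: i64) -> u64 {
    page.saturating_sub(1) * limit as u64
}

fn main() {
    assert_eq!(page_skip(1, 20), 0); // first page skips nothing
    assert_eq!(page_skip(3, 20), 40);
    assert_eq!(page_skip(0, 20), 0); // page=0 clamped rather than wrapping
}
```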
/// GET /api/v1/pentest/stats — Aggregated pentest statistics
#[tracing::instrument(skip_all)]
pub async fn pentest_stats(
    Extension(agent): AgentExt,
) -> Result<Json<ApiResponse<PentestStats>>, StatusCode> {
    let db = &agent.db;

    let running_sessions = db
        .pentest_sessions()
        .count_documents(doc! { "status": "running" })
        .await
        .unwrap_or(0) as u32;

    // Count DAST findings from pentest sessions
    let total_vulnerabilities = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null } })
        .await
        .unwrap_or(0) as u32;

    // Aggregate tool invocations from all sessions
    let sessions: Vec<PentestSession> = match db.pentest_sessions().find(doc! {}).await {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(_) => Vec::new(),
    };

    let total_tool_invocations: u32 = sessions.iter().map(|s| s.tool_invocations).sum();
    let total_successes: u32 = sessions.iter().map(|s| s.tool_successes).sum();
    let tool_success_rate = if total_tool_invocations == 0 {
        100.0
    } else {
        (total_successes as f64 / total_tool_invocations as f64) * 100.0
    };

    // Severity distribution from pentest-related DAST findings
    let critical = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null }, "severity": "critical" })
        .await
        .unwrap_or(0) as u32;
    let high = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null }, "severity": "high" })
        .await
        .unwrap_or(0) as u32;
    let medium = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null }, "severity": "medium" })
        .await
        .unwrap_or(0) as u32;
    let low = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null }, "severity": "low" })
        .await
        .unwrap_or(0) as u32;
    let info = db
        .dast_findings()
        .count_documents(doc! { "session_id": { "$exists": true, "$ne": null }, "severity": "info" })
        .await
        .unwrap_or(0) as u32;

    Ok(Json(ApiResponse {
        data: PentestStats {
            running_sessions,
            total_vulnerabilities,
            total_tool_invocations,
            tool_success_rate,
            severity_distribution: SeverityDistribution {
                critical,
                high,
                medium,
                low,
                info,
            },
        },
        total: None,
        page: None,
    }))
}

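The stats handler defines the success rate as successes over invocations, reporting 100% when nothing has run yet so a fresh deployment does not show up as failing. The expression in isolation (hypothetical `tool_success_rate` free function, lifted from the handler):

```rust
/// Mirrors the rate logic in `pentest_stats`: zero invocations reads as 100%
/// rather than dividing by zero or reporting 0%.
fn tool_success_rate(successes: u32, invocations: u32) -> f64 {
    if invocations == 0 {
        100.0
    } else {
        (successes as f64 / invocations as f64) * 100.0
    }
}

fn main() {
    assert_eq!(tool_success_rate(0, 0), 100.0); // nothing run yet: not a failure
    assert_eq!(tool_success_rate(3, 4), 75.0);
}
```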
/// GET /api/v1/pentest/sessions/:id/findings — Get DAST findings for a pentest session
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn get_session_findings(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
    Query(params): Query<PaginationParams>,
) -> Result<Json<ApiResponse<Vec<DastFinding>>>, StatusCode> {
    let _oid =
        mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;

    let skip = (params.page.saturating_sub(1)) * params.limit as u64;
    let total = agent
        .db
        .dast_findings()
        .count_documents(doc! { "session_id": &id })
        .await
        .unwrap_or(0);

    let findings = match agent
        .db
        .dast_findings()
        .find(doc! { "session_id": &id })
        .sort(doc! { "created_at": -1 })
        .skip(skip)
        .limit(params.limit)
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(e) => {
            tracing::warn!("Failed to fetch pentest session findings: {e}");
            Vec::new()
        }
    };

    Ok(Json(ApiResponse {
        data: findings,
        total: Some(total),
        page: Some(params.page),
    }))
}

#[derive(Deserialize)]
pub struct ExportBody {
    pub password: String,
    /// Requester display name (from auth)
    #[serde(default)]
    pub requester_name: String,
    /// Requester email (from auth)
    #[serde(default)]
    pub requester_email: String,
}

/// POST /api/v1/pentest/sessions/:id/export — Export an encrypted pentest report archive
#[tracing::instrument(skip_all, fields(session_id = %id))]
pub async fn export_session_report(
    Extension(agent): AgentExt,
    Path(id): Path<String>,
    Json(body): Json<ExportBody>,
) -> Result<axum::response::Response, (StatusCode, String)> {
    let oid = mongodb::bson::oid::ObjectId::parse_str(&id)
        .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid session ID".to_string()))?;

    if body.password.len() < 8 {
        return Err((
            StatusCode::BAD_REQUEST,
            "Password must be at least 8 characters".to_string(),
        ));
    }

    // Fetch session
    let session = agent
        .db
        .pentest_sessions()
        .find_one(doc! { "_id": oid })
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Database error: {e}")))?
        .ok_or_else(|| (StatusCode::NOT_FOUND, "Session not found".to_string()))?;

    // Resolve target name
    let target = if let Ok(tid) = mongodb::bson::oid::ObjectId::parse_str(&session.target_id) {
        agent
            .db
            .dast_targets()
            .find_one(doc! { "_id": tid })
            .await
            .ok()
            .flatten()
    } else {
        None
    };
    let target_name = target
        .as_ref()
        .map(|t| t.name.clone())
        .unwrap_or_else(|| "Unknown Target".to_string());
    let target_url = target
        .as_ref()
        .map(|t| t.base_url.clone())
        .unwrap_or_default();

    // Fetch attack chain nodes
    let nodes: Vec<AttackChainNode> = match agent
        .db
        .attack_chain_nodes()
        .find(doc! { "session_id": &id })
        .sort(doc! { "started_at": 1 })
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(_) => Vec::new(),
    };

    // Fetch DAST findings for this session
    let findings: Vec<DastFinding> = match agent
        .db
        .dast_findings()
        .find(doc! { "session_id": &id })
        .sort(doc! { "severity": -1, "created_at": -1 })
        .await
    {
        Ok(cursor) => collect_cursor_async(cursor).await,
        Err(_) => Vec::new(),
    };

    let ctx = crate::pentest::report::ReportContext {
        session,
        target_name,
        target_url,
        findings,
        attack_chain: nodes,
        requester_name: if body.requester_name.is_empty() {
            "Unknown".to_string()
        } else {
            body.requester_name
        },
        requester_email: body.requester_email,
    };

    let report = crate::pentest::generate_encrypted_report(&ctx, &body.password)
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e))?;

    let response = serde_json::json!({
        "archive_base64": base64::Engine::encode(&base64::engine::general_purpose::STANDARD, &report.archive),
        "sha256": report.sha256,
        "filename": format!("pentest-report-{id}.zip"),
    });

    Ok(Json(response).into_response())
}

@@ -99,6 +99,44 @@ pub fn build_router() -> Router {
            "/api/v1/chat/{repo_id}/status",
            get(handlers::chat::embedding_status),
        )
        // Pentest API endpoints
        .route(
            "/api/v1/pentest/sessions",
            get(handlers::pentest::list_sessions).post(handlers::pentest::create_session),
        )
        .route(
            "/api/v1/pentest/sessions/{id}",
            get(handlers::pentest::get_session),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/chat",
            post(handlers::pentest::send_message),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/stop",
            post(handlers::pentest::stop_session),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/stream",
            get(handlers::pentest::session_stream),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/attack-chain",
            get(handlers::pentest::get_attack_chain),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/messages",
            get(handlers::pentest::get_messages),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/findings",
            get(handlers::pentest::get_session_findings),
        )
        .route(
            "/api/v1/pentest/sessions/{id}/export",
            post(handlers::pentest::export_session_report),
        )
        .route("/api/v1/pentest/stats", get(handlers::pentest::pentest_stats))
        // Webhook endpoints (proxied through dashboard)
        .route(
            "/webhook/github/{repo_id}",
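The `{id}` segments above are axum-style path parameters: the router captures the segment and hands it to the handler's `Path(id)` extractor. A toy std-only matcher illustrating the template syntax (the `match_path` helper is hypothetical and handles a single capture; axum's router does the real matching):

```rust
/// Toy matcher for `{param}` route templates: walks both paths segment by
/// segment, capturing wherever the template has a `{…}` placeholder.
fn match_path<'a>(template: &str, path: &'a str) -> Option<&'a str> {
    let mut captured = None;
    let (mut t, mut p) = (template.split('/'), path.split('/'));
    loop {
        match (t.next(), p.next()) {
            (None, None) => return captured,
            (Some(seg), Some(part)) if seg.starts_with('{') && seg.ends_with('}') => {
                captured = Some(part);
            }
            (Some(seg), Some(part)) if seg == part => {}
            _ => return None, // length or literal mismatch
        }
    }
}

fn main() {
    let id = match_path(
        "/api/v1/pentest/sessions/{id}/chat",
        "/api/v1/pentest/sessions/abc123/chat",
    );
    assert_eq!(id, Some("abc123"));
    assert_eq!(match_path("/api/v1/pentest/stats", "/api/v1/pentest/sessions"), None);
}
```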
@@ -166,6 +166,38 @@ impl Database {
            )
            .await?;

        // pentest_sessions: compound (target_id, started_at DESC)
        self.pentest_sessions()
            .create_index(
                IndexModel::builder()
                    .keys(doc! { "target_id": 1, "started_at": -1 })
                    .build(),
            )
            .await?;

        // pentest_sessions: status index
        self.pentest_sessions()
            .create_index(IndexModel::builder().keys(doc! { "status": 1 }).build())
            .await?;

        // attack_chain_nodes: compound (session_id, node_id)
        self.attack_chain_nodes()
            .create_index(
                IndexModel::builder()
                    .keys(doc! { "session_id": 1, "node_id": 1 })
                    .build(),
            )
            .await?;

        // pentest_messages: compound (session_id, created_at)
        self.pentest_messages()
            .create_index(
                IndexModel::builder()
                    .keys(doc! { "session_id": 1, "created_at": 1 })
                    .build(),
            )
            .await?;

        tracing::info!("Database indexes ensured");
        Ok(())
    }
@@ -235,6 +267,19 @@ impl Database {
        self.inner.collection("embedding_builds")
    }

    // Pentest collections
    pub fn pentest_sessions(&self) -> Collection<PentestSession> {
        self.inner.collection("pentest_sessions")
    }

    pub fn attack_chain_nodes(&self) -> Collection<AttackChainNode> {
        self.inner.collection("attack_chain_nodes")
    }

    pub fn pentest_messages(&self) -> Collection<PentestMessage> {
        self.inner.collection("pentest_messages")
    }

    #[allow(dead_code)]
    pub fn raw_collection(&self, name: &str) -> Collection<mongodb::bson::Document> {
        self.inner.collection(name)
@@ -12,10 +12,16 @@ pub struct LlmClient {
    http: reqwest::Client,
}

// ── Request types ──────────────────────────────────────────────

#[derive(Serialize, Clone, Debug)]
pub struct ChatMessage {
    pub role: String,
    pub content: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_calls: Option<Vec<ToolCallRequest>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_call_id: Option<String>,
}

#[derive(Serialize)]
@@ -26,8 +32,25 @@ struct ChatCompletionRequest {
    temperature: Option<f64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    max_tokens: Option<u32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    tools: Option<Vec<ToolDefinitionPayload>>,
}

#[derive(Serialize)]
struct ToolDefinitionPayload {
    r#type: String,
    function: ToolFunctionPayload,
}

#[derive(Serialize)]
struct ToolFunctionPayload {
    name: String,
    description: String,
    parameters: serde_json::Value,
}

// ── Response types ─────────────────────────────────────────────

#[derive(Deserialize)]
struct ChatCompletionResponse {
    choices: Vec<ChatChoice>,
@@ -40,29 +63,85 @@ struct ChatChoice {

#[derive(Deserialize)]
struct ChatResponseMessage {
    #[serde(default)]
    content: Option<String>,
    #[serde(default)]
    tool_calls: Option<Vec<ToolCallResponse>>,
}

#[derive(Deserialize)]
struct ToolCallResponse {
    id: String,
    function: ToolCallFunction,
}

#[derive(Deserialize)]
struct ToolCallFunction {
    name: String,
    arguments: String,
}

// ── Public types for tool calling ──────────────────────────────

/// Definition of a tool that the LLM can invoke
#[derive(Debug, Clone, Serialize)]
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
    pub parameters: serde_json::Value,
}

/// A tool call request from the LLM
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LlmToolCall {
    pub id: String,
    pub name: String,
    pub arguments: serde_json::Value,
}

/// A tool call in the request message format (for sending back tool_calls in assistant messages)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCallRequest {
    pub id: String,
    pub r#type: String,
    pub function: ToolCallRequestFunction,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCallRequestFunction {
    pub name: String,
    pub arguments: String,
}

/// Response from the LLM — either content or tool calls
#[derive(Debug, Clone)]
pub enum LlmResponse {
    Content(String),
    /// Tool calls with optional reasoning text from the LLM
    ToolCalls { calls: Vec<LlmToolCall>, reasoning: String },
}

// ── Embedding types ────────────────────────────────────────────

#[derive(Serialize)]
struct EmbeddingRequest {
    model: String,
    input: Vec<String>,
}

#[derive(Deserialize)]
struct EmbeddingResponse {
    data: Vec<EmbeddingData>,
}

#[derive(Deserialize)]
struct EmbeddingData {
    embedding: Vec<f64>,
    index: usize,
}

// ── Implementation ─────────────────────────────────────────────

impl LlmClient {
    pub fn new(
        base_url: String,
@@ -83,98 +162,142 @@ impl LlmClient {
|
|||||||
&self.embed_model
|
&self.embed_model
|
||||||
}
|
}
|
||||||
|
|
||||||
|
fn chat_url(&self) -> String {
|
||||||
|
format!(
|
||||||
|
"{}/v1/chat/completions",
|
||||||
|
self.base_url.trim_end_matches('/')
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn auth_header(&self) -> Option<String> {
|
||||||
|
let key = self.api_key.expose_secret();
|
||||||
|
if key.is_empty() {
|
||||||
|
None
|
||||||
|
} else {
|
||||||
|
Some(format!("Bearer {key}"))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Simple chat: system + user prompt → text response
|
||||||
pub async fn chat(
|
pub async fn chat(
|
||||||
&self,
|
&self,
|
||||||
system_prompt: &str,
|
system_prompt: &str,
|
||||||
user_prompt: &str,
|
user_prompt: &str,
|
||||||
temperature: Option<f64>,
|
temperature: Option<f64>,
|
||||||
) -> Result<String, AgentError> {
|
) -> Result<String, AgentError> {
|
||||||
let url = format!(
|
let messages = vec![
|
||||||
"{}/v1/chat/completions",
|
ChatMessage {
|
||||||
self.base_url.trim_end_matches('/')
|
role: "system".to_string(),
|
||||||
);
|
content: Some(system_prompt.to_string()),
|
||||||
|
tool_calls: None,
|
||||||
|
tool_call_id: None,
|
||||||
|
},
|
||||||
|
ChatMessage {
|
||||||
|
role: "user".to_string(),
|
||||||
|
content: Some(user_prompt.to_string()),
|
||||||
|
tool_calls: None,
|
||||||
|
tool_call_id: None,
|
||||||
|
},
|
||||||
|
];
|
||||||
|
|
||||||
let request_body = ChatCompletionRequest {
|
let request_body = ChatCompletionRequest {
|
||||||
model: self.model.clone(),
|
model: self.model.clone(),
|
||||||
messages: vec![
|
messages,
|
||||||
ChatMessage {
|
|
||||||
role: "system".to_string(),
|
|
||||||
content: system_prompt.to_string(),
|
|
||||||
},
|
|
||||||
ChatMessage {
|
|
||||||
role: "user".to_string(),
|
|
||||||
content: user_prompt.to_string(),
|
|
||||||
},
|
|
||||||
],
|
|
||||||
temperature,
|
temperature,
|
||||||
max_tokens: Some(4096),
|
max_tokens: Some(4096),
|
||||||
|
tools: None,
|
||||||
};
|
};
|
||||||
|
|
||||||
let mut req = self
|
self.send_chat_request(&request_body).await.map(|resp| {
|
||||||
.http
|
match resp {
|
||||||
.post(&url)
|
LlmResponse::Content(c) => c,
|
||||||
.header("content-type", "application/json")
|
LlmResponse::ToolCalls { .. } => String::new(), // shouldn't happen without tools
|
||||||
.json(&request_body);
|
}
|
||||||
|
})
|
||||||
let key = self.api_key.expose_secret();
|
|
||||||
if !key.is_empty() {
|
|
||||||
req = req.header("Authorization", format!("Bearer {key}"));
|
|
||||||
}
|
|
||||||
|
|
||||||
let resp = req
|
|
||||||
.send()
|
|
||||||
.await
|
|
||||||
.map_err(|e| AgentError::Other(format!("LiteLLM request failed: {e}")))?;
|
|
||||||
|
|
||||||
if !resp.status().is_success() {
|
|
||||||
let status = resp.status();
|
|
||||||
let body = resp.text().await.unwrap_or_default();
|
|
||||||
return Err(AgentError::Other(format!(
|
|
||||||
"LiteLLM returned {status}: {body}"
|
|
||||||
)));
|
|
||||||
}
|
|
||||||
|
|
||||||
let body: ChatCompletionResponse = resp
|
|
||||||
.json()
|
|
||||||
.await
|
|
||||||
.map_err(|e| AgentError::Other(format!("Failed to parse LiteLLM response: {e}")))?;
|
|
||||||
|
|
||||||
body.choices
|
|
||||||
.first()
|
|
||||||
.map(|c| c.message.content.clone())
|
|
||||||
.ok_or_else(|| AgentError::Other("Empty response from LiteLLM".to_string()))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Chat with a list of (role, content) messages → text response
|
||||||
#[allow(dead_code)]
|
#[allow(dead_code)]
|
||||||
pub async fn chat_with_messages(
|
pub async fn chat_with_messages(
|
||||||
&self,
|
&self,
|
||||||
messages: Vec<(String, String)>,
|
messages: Vec<(String, String)>,
|
||||||
temperature: Option<f64>,
|
temperature: Option<f64>,
|
||||||
) -> Result<String, AgentError> {
|
) -> Result<String, AgentError> {
|
||||||
let url = format!(
|
let messages = messages
|
||||||
"{}/v1/chat/completions",
|
.into_iter()
|
||||||
self.base_url.trim_end_matches('/')
|
.map(|(role, content)| ChatMessage {
|
||||||
);
|
role,
|
||||||
|
content: Some(content),
|
||||||
|
tool_calls: None,
|
||||||
|
tool_call_id: None,
|
||||||
|
})
|
||||||
|
.collect();
|
||||||
|
|
||||||
let request_body = ChatCompletionRequest {
|
let request_body = ChatCompletionRequest {
|
||||||
model: self.model.clone(),
|
model: self.model.clone(),
|
||||||
messages: messages
|
messages,
|
||||||
.into_iter()
|
|
||||||
.map(|(role, content)| ChatMessage { role, content })
|
|
||||||
.collect(),
|
|
||||||
temperature,
|
temperature,
|
||||||
max_tokens: Some(4096),
|
max_tokens: Some(4096),
|
||||||
|
tools: None,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
self.send_chat_request(&request_body).await.map(|resp| {
|
||||||
|
match resp {
|
||||||
|
LlmResponse::Content(c) => c,
|
||||||
|
LlmResponse::ToolCalls { .. } => String::new(),
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
    /// Chat with tool definitions — returns either content or tool calls.
    /// Use this for the AI pentest orchestrator loop.
    pub async fn chat_with_tools(
        &self,
        messages: Vec<ChatMessage>,
        tools: &[ToolDefinition],
        temperature: Option<f64>,
        max_tokens: Option<u32>,
    ) -> Result<LlmResponse, AgentError> {
        let tool_payloads: Vec<ToolDefinitionPayload> = tools
            .iter()
            .map(|t| ToolDefinitionPayload {
                r#type: "function".to_string(),
                function: ToolFunctionPayload {
                    name: t.name.clone(),
                    description: t.description.clone(),
                    parameters: t.parameters.clone(),
                },
            })
            .collect();

        let request_body = ChatCompletionRequest {
            model: self.model.clone(),
            messages,
            temperature,
            max_tokens: Some(max_tokens.unwrap_or(8192)),
            tools: if tool_payloads.is_empty() {
                None
            } else {
                Some(tool_payloads)
            },
        };

        self.send_chat_request(&request_body).await
    }
    /// Internal method to send a chat completion request and parse the response
    async fn send_chat_request(
        &self,
        request_body: &ChatCompletionRequest,
    ) -> Result<LlmResponse, AgentError> {
        let mut req = self
            .http
            .post(&self.chat_url())
            .header("content-type", "application/json")
            .json(request_body);

        if let Some(auth) = self.auth_header() {
            req = req.header("Authorization", auth);
        }

        let resp = req
@@ -195,10 +318,39 @@ impl LlmClient {
            .await
            .map_err(|e| AgentError::Other(format!("Failed to parse LiteLLM response: {e}")))?;

        let choice = body
            .choices
            .first()
            .ok_or_else(|| AgentError::Other("Empty response from LiteLLM".to_string()))?;

        // Check for tool calls first
        if let Some(tool_calls) = &choice.message.tool_calls {
            if !tool_calls.is_empty() {
                let calls: Vec<LlmToolCall> = tool_calls
                    .iter()
                    .map(|tc| {
                        let arguments = serde_json::from_str(&tc.function.arguments)
                            .unwrap_or(serde_json::Value::Object(serde_json::Map::new()));
                        LlmToolCall {
                            id: tc.id.clone(),
                            name: tc.function.name.clone(),
                            arguments,
                        }
                    })
                    .collect();
                // Capture any reasoning text the LLM included alongside tool calls
                let reasoning = choice.message.content.clone().unwrap_or_default();
                return Ok(LlmResponse::ToolCalls { calls, reasoning });
            }
        }

        // Otherwise return content
        let content = choice
            .message
            .content
            .clone()
            .unwrap_or_default();
        Ok(LlmResponse::Content(content))
    }
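The two response shapes that `send_chat_request` distinguishes can be exercised in isolation. `Resp` below is a hypothetical std-only mirror of `LlmResponse`, and `response_text` collapses it the way `chat_with_messages` does (tool-call responses carry no final text, so they map to an empty string):

```rust
// Hypothetical std-only mirror of the two response shapes.
#[derive(Debug)]
enum Resp {
    Content(String),
    ToolCalls { names: Vec<String>, reasoning: String },
}

// Collapse a response to displayable text; a tool-call turn has no
// final answer yet, so it yields an empty string.
fn response_text(resp: Resp) -> String {
    match resp {
        Resp::Content(c) => c,
        Resp::ToolCalls { .. } => String::new(),
    }
}
```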
    /// Generate embeddings for a batch of texts
@@ -216,9 +368,8 @@ impl LlmClient {
            .header("content-type", "application/json")
            .json(&request_body);

        if let Some(auth) = self.auth_header() {
            req = req.header("Authorization", auth);
        }

        let resp = req
@@ -239,7 +390,6 @@ impl LlmClient {
            .await
            .map_err(|e| AgentError::Other(format!("Failed to parse embedding response: {e}")))?;

        let mut data = body.data;
        data.sort_by_key(|d| d.index);
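The reordering step above can be shown std-only. `EmbeddingRow` is a hypothetical stand-in for the provider's response rows, which may arrive out of input order:

```rust
// Hypothetical minimal shape of one embedding response row.
struct EmbeddingRow {
    index: usize,
    embedding: Vec<f32>,
}

// Re-sort rows by their `index` field so the output vectors line up
// with the input batch, as the client does above.
fn restore_input_order(mut data: Vec<EmbeddingRow>) -> Vec<Vec<f32>> {
    data.sort_by_key(|d| d.index);
    data.into_iter().map(|d| d.embedding).collect()
}
```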
@@ -4,6 +4,7 @@ mod config;
mod database;
mod error;
mod llm;
mod pentest;
mod pipeline;
mod rag;
mod scheduler;
compliance-agent/src/pentest/mod.rs (new file, 5 lines)
@@ -0,0 +1,5 @@
pub mod orchestrator;
pub mod report;

pub use orchestrator::PentestOrchestrator;
pub use report::generate_encrypted_report;

compliance-agent/src/pentest/orchestrator.rs (new file, 761 lines)
@@ -0,0 +1,761 @@
use std::sync::Arc;
use std::time::Duration;

use futures_util::StreamExt;
use mongodb::bson::doc;
use tokio::sync::broadcast;

use compliance_core::models::dast::DastTarget;
use compliance_core::models::finding::{Finding, FindingStatus, Severity};
use compliance_core::models::pentest::*;
use compliance_core::models::sbom::SbomEntry;
use compliance_core::traits::pentest_tool::PentestToolContext;
use compliance_dast::ToolRegistry;

use crate::database::Database;
use crate::llm::client::{
    ChatMessage, LlmResponse, ToolCallRequest, ToolCallRequestFunction, ToolDefinition,
};
use crate::llm::LlmClient;

/// Maximum duration for a single pentest session before timeout
const SESSION_TIMEOUT: Duration = Duration::from_secs(30 * 60); // 30 minutes

pub struct PentestOrchestrator {
    tool_registry: ToolRegistry,
    llm: Arc<LlmClient>,
    db: Database,
    event_tx: broadcast::Sender<PentestEvent>,
}

impl PentestOrchestrator {
    pub fn new(llm: Arc<LlmClient>, db: Database) -> Self {
        let (event_tx, _) = broadcast::channel(256);
        Self {
            tool_registry: ToolRegistry::new(),
            llm,
            db,
            event_tx,
        }
    }

    pub fn subscribe(&self) -> broadcast::Receiver<PentestEvent> {
        self.event_tx.subscribe()
    }

    pub fn event_sender(&self) -> broadcast::Sender<PentestEvent> {
        self.event_tx.clone()
    }

    /// Run a pentest session with timeout and automatic failure marking on errors.
    pub async fn run_session_guarded(
        &self,
        session: &PentestSession,
        target: &DastTarget,
        initial_message: &str,
    ) {
        let session_id = session.id;

        match tokio::time::timeout(
            SESSION_TIMEOUT,
            self.run_session(session, target, initial_message),
        )
        .await
        {
            Ok(Ok(())) => {
                tracing::info!(?session_id, "Pentest session completed successfully");
            }
            Ok(Err(e)) => {
                tracing::error!(?session_id, error = %e, "Pentest session failed");
                self.mark_session_failed(session_id, &format!("Error: {e}"))
                    .await;
                let _ = self.event_tx.send(PentestEvent::Error {
                    message: format!("Session failed: {e}"),
                });
            }
            Err(_) => {
                tracing::warn!(?session_id, "Pentest session timed out after 30 minutes");
                self.mark_session_failed(session_id, "Session timed out after 30 minutes")
                    .await;
                let _ = self.event_tx.send(PentestEvent::Error {
                    message: "Session timed out after 30 minutes".to_string(),
                });
            }
        }
    }

    async fn mark_session_failed(
        &self,
        session_id: Option<mongodb::bson::oid::ObjectId>,
        reason: &str,
    ) {
        if let Some(sid) = session_id {
            let _ = self
                .db
                .pentest_sessions()
                .update_one(
                    doc! { "_id": sid },
                    doc! { "$set": {
                        "status": "failed",
                        "completed_at": mongodb::bson::DateTime::now(),
                        "error_message": reason,
                    }},
                )
                .await;
        }
    }

    async fn run_session(
        &self,
        session: &PentestSession,
        target: &DastTarget,
        initial_message: &str,
    ) -> Result<(), crate::error::AgentError> {
        let session_id = session
            .id
            .map(|oid| oid.to_hex())
            .unwrap_or_default();

        // Gather code-awareness context from linked repo
        let (sast_findings, sbom_entries, code_context) =
            self.gather_repo_context(target).await;

        // Build system prompt with code context
        let system_prompt = self
            .build_system_prompt(session, target, &sast_findings, &sbom_entries, &code_context)
            .await;

        // Build tool definitions for LLM
        let tool_defs: Vec<ToolDefinition> = self
            .tool_registry
            .all_definitions()
            .into_iter()
            .map(|td| ToolDefinition {
                name: td.name,
                description: td.description,
                parameters: td.input_schema,
            })
            .collect();

        // Initialize messages
        let mut messages = vec![
            ChatMessage {
                role: "system".to_string(),
                content: Some(system_prompt),
                tool_calls: None,
                tool_call_id: None,
            },
            ChatMessage {
                role: "user".to_string(),
                content: Some(initial_message.to_string()),
                tool_calls: None,
                tool_call_id: None,
            },
        ];

        // Store user message
        let user_msg = PentestMessage::user(session_id.clone(), initial_message.to_string());
        let _ = self.db.pentest_messages().insert_one(&user_msg).await;

        // Build tool context with real data
        let tool_context = PentestToolContext {
            target: target.clone(),
            session_id: session_id.clone(),
            sast_findings,
            sbom_entries,
            code_context,
            rate_limit: target.rate_limit,
            allow_destructive: target.allow_destructive,
        };

        let max_iterations = 50;
        let mut total_findings = 0u32;
        let mut total_tool_calls = 0u32;
        let mut total_successes = 0u32;
        let mut prev_node_ids: Vec<String> = Vec::new();

        for _iteration in 0..max_iterations {
            let response = self
                .llm
                .chat_with_tools(messages.clone(), &tool_defs, Some(0.2), Some(8192))
                .await?;

            match response {
                LlmResponse::Content(content) => {
                    let msg =
                        PentestMessage::assistant(session_id.clone(), content.clone());
                    let _ = self.db.pentest_messages().insert_one(&msg).await;
                    let _ = self.event_tx.send(PentestEvent::Message {
                        content: content.clone(),
                    });

                    messages.push(ChatMessage {
                        role: "assistant".to_string(),
                        content: Some(content.clone()),
                        tool_calls: None,
                        tool_call_id: None,
                    });

                    let done_indicators = [
                        "pentest complete",
                        "testing complete",
                        "scan complete",
                        "analysis complete",
                        "finished",
                        "that concludes",
                    ];
                    let content_lower = content.to_lowercase();
                    if done_indicators
                        .iter()
                        .any(|ind| content_lower.contains(ind))
                    {
                        break;
                    }
                    break;
                }
                LlmResponse::ToolCalls { calls: tool_calls, reasoning } => {
                    let tc_requests: Vec<ToolCallRequest> = tool_calls
                        .iter()
                        .map(|tc| ToolCallRequest {
                            id: tc.id.clone(),
                            r#type: "function".to_string(),
                            function: ToolCallRequestFunction {
                                name: tc.name.clone(),
                                arguments: serde_json::to_string(&tc.arguments)
                                    .unwrap_or_default(),
                            },
                        })
                        .collect();

                    messages.push(ChatMessage {
                        role: "assistant".to_string(),
                        content: if reasoning.is_empty() { None } else { Some(reasoning.clone()) },
                        tool_calls: Some(tc_requests),
                        tool_call_id: None,
                    });

                    let mut current_batch_node_ids: Vec<String> = Vec::new();

                    for tc in &tool_calls {
                        total_tool_calls += 1;
                        let node_id = uuid::Uuid::new_v4().to_string();

                        let mut node = AttackChainNode::new(
                            session_id.clone(),
                            node_id.clone(),
                            tc.name.clone(),
                            tc.arguments.clone(),
                            reasoning.clone(),
                        );
                        // Link to previous iteration's nodes
                        node.parent_node_ids = prev_node_ids.clone();
                        node.status = AttackNodeStatus::Running;
                        node.started_at = Some(chrono::Utc::now());
                        let _ = self.db.attack_chain_nodes().insert_one(&node).await;
                        current_batch_node_ids.push(node_id.clone());

                        let _ = self.event_tx.send(PentestEvent::ToolStart {
                            node_id: node_id.clone(),
                            tool_name: tc.name.clone(),
                            input: tc.arguments.clone(),
                        });

                        let result = if let Some(tool) = self.tool_registry.get(&tc.name) {
                            match tool.execute(tc.arguments.clone(), &tool_context).await {
                                Ok(result) => {
                                    total_successes += 1;
                                    let findings_count = result.findings.len() as u32;
                                    total_findings += findings_count;

                                    let mut finding_ids: Vec<String> = Vec::new();
                                    for mut finding in result.findings {
                                        finding.scan_run_id = session_id.clone();
                                        finding.session_id = Some(session_id.clone());
                                        let insert_result =
                                            self.db.dast_findings().insert_one(&finding).await;
                                        if let Ok(res) = &insert_result {
                                            finding_ids.push(res.inserted_id.as_object_id().map(|oid| oid.to_hex()).unwrap_or_default());
                                        }
                                        let _ =
                                            self.event_tx.send(PentestEvent::Finding {
                                                finding_id: finding
                                                    .id
                                                    .map(|oid| oid.to_hex())
                                                    .unwrap_or_default(),
                                                title: finding.title.clone(),
                                                severity: finding.severity.to_string(),
                                            });
                                    }

                                    // Compute risk score based on findings severity
                                    let risk_score: Option<u8> = if findings_count > 0 {
                                        Some(std::cmp::min(
                                            100,
                                            (findings_count as u8).saturating_mul(15).saturating_add(20),
                                        ))
                                    } else {
                                        None
                                    };

                                    let _ = self.event_tx.send(PentestEvent::ToolComplete {
                                        node_id: node_id.clone(),
                                        summary: result.summary.clone(),
                                        findings_count,
                                    });

                                    let finding_ids_bson: Vec<mongodb::bson::Bson> = finding_ids
                                        .iter()
                                        .map(|id| mongodb::bson::Bson::String(id.clone()))
                                        .collect();

                                    let mut update_doc = doc! {
                                        "status": "completed",
                                        "tool_output": mongodb::bson::to_bson(&result.data)
                                            .unwrap_or(mongodb::bson::Bson::Null),
                                        "completed_at": mongodb::bson::DateTime::now(),
                                        "findings_produced": finding_ids_bson,
                                    };
                                    if let Some(rs) = risk_score {
                                        update_doc.insert("risk_score", rs as i32);
                                    }

                                    let _ = self
                                        .db
                                        .attack_chain_nodes()
                                        .update_one(
                                            doc! {
                                                "session_id": &session_id,
                                                "node_id": &node_id,
                                            },
                                            doc! { "$set": update_doc },
                                        )
                                        .await;

                                    serde_json::json!({
                                        "summary": result.summary,
                                        "findings_count": findings_count,
                                        "data": result.data,
                                    })
                                    .to_string()
                                }
                                Err(e) => {
                                    let _ = self
                                        .db
                                        .attack_chain_nodes()
                                        .update_one(
                                            doc! {
                                                "session_id": &session_id,
                                                "node_id": &node_id,
                                            },
                                            doc! { "$set": {
                                                "status": "failed",
                                                "completed_at": mongodb::bson::DateTime::now(),
                                            }},
                                        )
                                        .await;
                                    format!("Tool execution failed: {e}")
                                }
                            }
                        } else {
                            format!("Unknown tool: {}", tc.name)
                        };

                        messages.push(ChatMessage {
                            role: "tool".to_string(),
                            content: Some(result),
                            tool_calls: None,
                            tool_call_id: Some(tc.id.clone()),
                        });
                    }

                    // Advance parent links so next iteration's nodes connect to this batch
                    prev_node_ids = current_batch_node_ids;

                    if let Some(sid) = session.id {
                        let _ = self
                            .db
                            .pentest_sessions()
                            .update_one(
                                doc! { "_id": sid },
                                doc! { "$set": {
                                    "tool_invocations": total_tool_calls as i64,
                                    "tool_successes": total_successes as i64,
                                    "findings_count": total_findings as i64,
                                }},
                            )
                            .await;
                    }
                }
            }
        }

        if let Some(sid) = session.id {
            let _ = self
                .db
                .pentest_sessions()
                .update_one(
                    doc! { "_id": sid },
                    doc! { "$set": {
                        "status": "completed",
                        "completed_at": mongodb::bson::DateTime::now(),
                        "tool_invocations": total_tool_calls as i64,
                        "tool_successes": total_successes as i64,
                        "findings_count": total_findings as i64,
                    }},
                )
                .await;
        }

        let _ = self.event_tx.send(PentestEvent::Complete {
            summary: format!(
                "Pentest complete. {} findings from {} tool invocations.",
                total_findings, total_tool_calls
            ),
        });

        Ok(())
    }
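The loop-termination heuristic used for plain assistant messages in `run_session` can be extracted as a std-only sketch: a case-insensitive substring match against a fixed list of phrases.

```rust
// Same completion heuristic as the orchestrator loop: any of these phrases
// appearing (case-insensitively) in the assistant's message marks it as done.
fn looks_done(content: &str) -> bool {
    const DONE_INDICATORS: [&str; 6] = [
        "pentest complete",
        "testing complete",
        "scan complete",
        "analysis complete",
        "finished",
        "that concludes",
    ];
    let content_lower = content.to_lowercase();
    DONE_INDICATORS.iter().any(|ind| content_lower.contains(ind))
}
```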
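The per-node risk heuristic in `run_session` (20 base points plus 15 per finding, capped at 100) as a standalone sketch. Note that the `as u8` cast wraps counts above 255 before the saturating math, as in the original:

```rust
// Heuristic risk score for one attack-chain node: nodes with no findings
// get no score; otherwise 20 base + 15 per finding, saturating in u8 and
// capped at 100.
fn node_risk_score(findings_count: u32) -> Option<u8> {
    if findings_count > 0 {
        Some(std::cmp::min(
            100,
            (findings_count as u8).saturating_mul(15).saturating_add(20),
        ))
    } else {
        None
    }
}
```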
    // ── Code-Awareness: Gather context from linked repo ─────────

    /// Fetch SAST findings, SBOM entries (with CVEs), and code graph entry points
    /// for the repo linked to this DAST target.
    async fn gather_repo_context(
        &self,
        target: &DastTarget,
    ) -> (Vec<Finding>, Vec<SbomEntry>, Vec<CodeContextHint>) {
        let Some(repo_id) = &target.repo_id else {
            return (Vec::new(), Vec::new(), Vec::new());
        };

        let sast_findings = self.fetch_sast_findings(repo_id).await;
        let sbom_entries = self.fetch_vulnerable_sbom(repo_id).await;
        let code_context = self.fetch_code_context(repo_id, &sast_findings).await;

        tracing::info!(
            repo_id,
            sast_findings = sast_findings.len(),
            vulnerable_deps = sbom_entries.len(),
            code_hints = code_context.len(),
            "Gathered code-awareness context for pentest"
        );

        (sast_findings, sbom_entries, code_context)
    }

    /// Fetch open/triaged SAST findings for the repo (not false positives or resolved)
    async fn fetch_sast_findings(&self, repo_id: &str) -> Vec<Finding> {
        let cursor = self
            .db
            .findings()
            .find(doc! {
                "repo_id": repo_id,
                "status": { "$in": ["open", "triaged"] },
            })
            .sort(doc! { "severity": -1 })
            .limit(100)
            .await;

        match cursor {
            Ok(mut c) => {
                let mut results = Vec::new();
                while let Some(Ok(f)) = c.next().await {
                    results.push(f);
                }
                results
            }
            Err(e) => {
                tracing::warn!("Failed to fetch SAST findings for pentest: {e}");
                Vec::new()
            }
        }
    }

    /// Fetch SBOM entries that have known vulnerabilities
    async fn fetch_vulnerable_sbom(&self, repo_id: &str) -> Vec<SbomEntry> {
        let cursor = self
            .db
            .sbom_entries()
            .find(doc! {
                "repo_id": repo_id,
                "known_vulnerabilities": { "$exists": true, "$ne": [] },
            })
            .limit(50)
            .await;

        match cursor {
            Ok(mut c) => {
                let mut results = Vec::new();
                while let Some(Ok(e)) = c.next().await {
                    results.push(e);
                }
                results
            }
            Err(e) => {
                tracing::warn!("Failed to fetch vulnerable SBOM entries: {e}");
                Vec::new()
            }
        }
    }

    /// Build CodeContextHint objects from the code knowledge graph.
    /// Maps entry points to their source files and links SAST findings.
    async fn fetch_code_context(
        &self,
        repo_id: &str,
        sast_findings: &[Finding],
    ) -> Vec<CodeContextHint> {
        // Get entry point nodes from the code graph
        let cursor = self
            .db
            .graph_nodes()
            .find(doc! {
                "repo_id": repo_id,
                "is_entry_point": true,
            })
            .limit(50)
            .await;

        let nodes = match cursor {
            Ok(mut c) => {
                let mut results = Vec::new();
                while let Some(Ok(n)) = c.next().await {
                    results.push(n);
                }
                results
            }
            Err(_) => return Vec::new(),
        };

        // Build hints by matching graph nodes to SAST findings by file path
        nodes
            .into_iter()
            .map(|node| {
                // Find SAST findings in the same file
                let linked_vulns: Vec<String> = sast_findings
                    .iter()
                    .filter(|f| {
                        f.file_path.as_deref() == Some(&node.file_path)
                    })
                    .map(|f| {
                        format!(
                            "[{}] {}: {} (line {})",
                            f.severity,
                            f.scanner,
                            f.title,
                            f.line_number.unwrap_or(0)
                        )
                    })
                    .collect();

                CodeContextHint {
                    endpoint_pattern: node.qualified_name.clone(),
                    handler_function: node.name.clone(),
                    file_path: node.file_path.clone(),
                    code_snippet: String::new(), // Could fetch from embeddings
                    known_vulnerabilities: linked_vulns,
                }
            })
            .collect()
    }
    // ── System Prompt Builder ───────────────────────────────────

    async fn build_system_prompt(
        &self,
        session: &PentestSession,
        target: &DastTarget,
        sast_findings: &[Finding],
        sbom_entries: &[SbomEntry],
        code_context: &[CodeContextHint],
    ) -> String {
        let tool_names = self.tool_registry.list_names().join(", ");
        let strategy_guidance = match session.strategy {
            PentestStrategy::Quick => {
                "Focus on the most common and impactful vulnerabilities. Run a quick recon, then target the highest-risk areas."
            }
            PentestStrategy::Comprehensive => {
                "Perform a thorough assessment covering all vulnerability types. Start with recon, then systematically test each attack surface."
            }
            PentestStrategy::Targeted => {
                "Focus specifically on areas highlighted by SAST findings and known CVEs. Prioritize exploiting known weaknesses."
            }
            PentestStrategy::Aggressive => {
                "Use all available tools aggressively. Test with maximum payloads and attempt full exploitation."
            }
            PentestStrategy::Stealth => {
                "Minimize noise. Use fewer requests, avoid aggressive payloads. Focus on passive analysis and targeted probes."
            }
        };

        // Build SAST findings section
        let sast_section = if sast_findings.is_empty() {
            String::from("No SAST findings available for this target.")
        } else {
            let critical = sast_findings
                .iter()
                .filter(|f| f.severity == Severity::Critical)
                .count();
            let high = sast_findings
                .iter()
                .filter(|f| f.severity == Severity::High)
                .count();

            let mut section = format!(
                "{} open findings ({} critical, {} high):\n",
                sast_findings.len(),
                critical,
                high
            );

            // List the most important findings (critical/high first, up to 20)
            for f in sast_findings.iter().take(20) {
                let file_info = f
                    .file_path
                    .as_ref()
                    .map(|p| {
                        format!(
                            " in {}:{}",
                            p,
                            f.line_number.unwrap_or(0)
                        )
                    })
                    .unwrap_or_default();
                let status_note = match f.status {
                    FindingStatus::Triaged => " [TRIAGED]",
                    _ => "",
                };
                section.push_str(&format!(
                    "- [{sev}] {title}{file}{status}\n",
                    sev = f.severity,
                    title = f.title,
                    file = file_info,
                    status = status_note,
                ));
                if let Some(cwe) = &f.cwe {
                    section.push_str(&format!(" CWE: {cwe}\n"));
                }
            }
            if sast_findings.len() > 20 {
                section.push_str(&format!(
                    "... and {} more findings\n",
                    sast_findings.len() - 20
                ));
            }
            section
        };

        // Build SBOM/CVE section
        let sbom_section = if sbom_entries.is_empty() {
            String::from("No vulnerable dependencies identified.")
        } else {
            let mut section = format!(
                "{} dependencies with known vulnerabilities:\n",
                sbom_entries.len()
            );
            for entry in sbom_entries.iter().take(15) {
                let cve_ids: Vec<&str> = entry
                    .known_vulnerabilities
                    .iter()
                    .map(|v| v.id.as_str())
                    .collect();
                section.push_str(&format!(
                    "- {} {} ({}): {}\n",
                    entry.name,
                    entry.version,
                    entry.package_manager,
                    cve_ids.join(", ")
                ));
            }
            if sbom_entries.len() > 15 {
                section.push_str(&format!(
                    "... and {} more vulnerable dependencies\n",
                    sbom_entries.len() - 15
                ));
            }
            section
        };

        // Build code context section
        let code_section = if code_context.is_empty() {
            String::from("No code knowledge graph available for this target.")
        } else {
            let with_vulns = code_context
                .iter()
                .filter(|c| !c.known_vulnerabilities.is_empty())
                .count();

            let mut section = format!(
                "{} entry points identified ({} with linked SAST findings):\n",
                code_context.len(),
                with_vulns
            );

            for hint in code_context.iter().take(20) {
                section.push_str(&format!(
                    "- {} ({})\n",
                    hint.endpoint_pattern, hint.file_path
                ));
                for vuln in &hint.known_vulnerabilities {
                    section.push_str(&format!(" SAST: {vuln}\n"));
                }
            }
            section
        };

        format!(
            r#"You are an expert penetration tester conducting an authorized security assessment.

## Target
- **Name**: {target_name}
- **URL**: {base_url}
- **Type**: {target_type}
- **Rate Limit**: {rate_limit} req/s
- **Destructive Tests Allowed**: {allow_destructive}
- **Linked Repository**: {repo_linked}

## Strategy
{strategy_guidance}

## SAST Findings (Static Analysis)
{sast_section}

## Vulnerable Dependencies (SBOM)
{sbom_section}

## Code Entry Points (Knowledge Graph)
{code_section}

## Available Tools
{tool_names}

## Instructions
1. Start by running reconnaissance (recon tool) to fingerprint the target and discover technologies.
2. Run the OpenAPI parser to discover API endpoints from specs.
3. Check infrastructure: DNS, DMARC, TLS, security headers, cookies, CSP, CORS.
4. Based on SAST findings, prioritize testing endpoints where vulnerabilities were found in code.
5. For each vulnerability type found in SAST, use the corresponding DAST tool to verify exploitability.
6. If vulnerable dependencies are listed, try to trigger known CVE conditions against the running application.
7. Test rate limiting on critical endpoints (login, API).
8. Check for console.log leakage in frontend JavaScript.
9. Analyze tool results and chain findings — if one vulnerability enables others, explore the chain.
10. When testing is complete, provide a structured summary with severity and remediation.
11. Always explain your reasoning before invoking each tool.
12. When done, say "Testing complete" followed by a final summary.

## Important
- This is an authorized penetration test. All testing is permitted within the target scope.
- Respect the rate limit of {rate_limit} requests per second.
- Only use destructive tests if explicitly allowed ({allow_destructive}).
- Use SAST findings to guide your testing — they tell you WHERE in the code vulnerabilities exist.
- Use SBOM data to understand what technologies and versions the target runs.
"#,
            target_name = target.name,
            base_url = target.base_url,
            target_type = target.target_type,
            rate_limit = target.rate_limit,
            allow_destructive = target.allow_destructive,
            repo_linked = target.repo_id.as_deref().unwrap_or("None"),
        )
    }
}
compliance-agent/src/pentest/report.rs (new file, 1601 lines)
File diff suppressed because it is too large
@@ -176,6 +176,16 @@ pub enum DastVulnType {
    InformationDisclosure,
    SecurityMisconfiguration,
    BrokenAuth,
    DnsMisconfiguration,
    EmailSecurity,
    TlsMisconfiguration,
    CookieSecurity,
    CspIssue,
    CorsMisconfiguration,
    RateLimitAbsent,
    ConsoleLogLeakage,
    SecurityHeaderMissing,
    KnownCveExploit,
    Other,
}

@@ -192,6 +202,16 @@ impl std::fmt::Display for DastVulnType {
            Self::InformationDisclosure => write!(f, "information_disclosure"),
            Self::SecurityMisconfiguration => write!(f, "security_misconfiguration"),
            Self::BrokenAuth => write!(f, "broken_auth"),
            Self::DnsMisconfiguration => write!(f, "dns_misconfiguration"),
            Self::EmailSecurity => write!(f, "email_security"),
            Self::TlsMisconfiguration => write!(f, "tls_misconfiguration"),
            Self::CookieSecurity => write!(f, "cookie_security"),
            Self::CspIssue => write!(f, "csp_issue"),
            Self::CorsMisconfiguration => write!(f, "cors_misconfiguration"),
            Self::RateLimitAbsent => write!(f, "rate_limit_absent"),
            Self::ConsoleLogLeakage => write!(f, "console_log_leakage"),
            Self::SecurityHeaderMissing => write!(f, "security_header_missing"),
            Self::KnownCveExploit => write!(f, "known_cve_exploit"),
            Self::Other => write!(f, "other"),
        }
    }
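The snake_case strings above are what serializers and the dashboard consume. A minimal sketch of the pattern with two of the new variants (reduced enum, not the full report.rs type):

```rust
// Reduced two-variant sketch of the DastVulnType Display mapping above.
enum DastVulnType {
    DnsMisconfiguration,
    KnownCveExploit,
}

impl std::fmt::Display for DastVulnType {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::DnsMisconfiguration => write!(f, "dns_misconfiguration"),
            Self::KnownCveExploit => write!(f, "known_cve_exploit"),
        }
    }
}

fn main() {
    // Implementing Display gives to_string() for free via the blanket ToString impl.
    assert_eq!(DastVulnType::DnsMisconfiguration.to_string(), "dns_misconfiguration");
    assert_eq!(DastVulnType::KnownCveExploit.to_string(), "known_cve_exploit");
}
```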
@@ -244,6 +264,8 @@ pub struct DastFinding {
    pub remediation: Option<String>,
    /// Linked SAST finding ID (if correlated)
    pub linked_sast_finding_id: Option<String>,
    /// Pentest session that produced this finding (if AI-driven)
    pub session_id: Option<String>,
    #[serde(with = "super::serde_helpers::bson_datetime")]
    pub created_at: DateTime<Utc>,
}

@@ -276,6 +298,7 @@ impl DastFinding {
            evidence: Vec::new(),
            remediation: None,
            linked_sast_finding_id: None,
            session_id: None,
            created_at: Utc::now(),
        }
    }
@@ -7,6 +7,7 @@ pub mod finding;
pub mod graph;
pub mod issue;
pub mod mcp;
pub mod pentest;
pub mod repository;
pub mod sbom;
pub mod scan;

@@ -26,6 +27,11 @@ pub use graph::{
};
pub use issue::{IssueStatus, TrackerIssue, TrackerType};
pub use mcp::{McpServerConfig, McpServerStatus, McpTransport};
pub use pentest::{
    AttackChainNode, AttackNodeStatus, CodeContextHint, PentestEvent, PentestMessage,
    PentestSession, PentestStats, PentestStatus, PentestStrategy, SeverityDistribution,
    ToolCallRecord,
};
pub use repository::{ScanTrigger, TrackedRepository};
pub use sbom::{SbomEntry, VulnRef};
pub use scan::{ScanPhase, ScanRun, ScanRunStatus, ScanType};
294 compliance-core/src/models/pentest.rs Normal file
@@ -0,0 +1,294 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

/// Status of a pentest session
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum PentestStatus {
    Running,
    Paused,
    Completed,
    Failed,
}

impl std::fmt::Display for PentestStatus {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Running => write!(f, "running"),
            Self::Paused => write!(f, "paused"),
            Self::Completed => write!(f, "completed"),
            Self::Failed => write!(f, "failed"),
        }
    }
}

/// Strategy for the AI pentest orchestrator
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum PentestStrategy {
    /// Quick scan focusing on common vulnerabilities
    Quick,
    /// Standard comprehensive scan
    Comprehensive,
    /// Focus on specific vulnerability types guided by SAST/SBOM
    Targeted,
    /// Aggressive testing with more payloads and deeper exploitation
    Aggressive,
    /// Stealth mode with slower rate and fewer noisy payloads
    Stealth,
}

impl std::fmt::Display for PentestStrategy {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Quick => write!(f, "quick"),
            Self::Comprehensive => write!(f, "comprehensive"),
            Self::Targeted => write!(f, "targeted"),
            Self::Aggressive => write!(f, "aggressive"),
            Self::Stealth => write!(f, "stealth"),
        }
    }
}

/// A pentest session initiated via the chat interface
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PentestSession {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    pub id: Option<bson::oid::ObjectId>,
    pub target_id: String,
    /// Linked repository for code-aware testing
    pub repo_id: Option<String>,
    pub status: PentestStatus,
    pub strategy: PentestStrategy,
    pub created_by: Option<String>,
    /// Total number of tool invocations in this session
    pub tool_invocations: u32,
    /// Total successful tool invocations
    pub tool_successes: u32,
    /// Number of findings discovered
    pub findings_count: u32,
    /// Number of confirmed exploitable findings
    pub exploitable_count: u32,
    #[serde(with = "super::serde_helpers::bson_datetime")]
    pub started_at: DateTime<Utc>,
    #[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
    pub completed_at: Option<DateTime<Utc>>,
}

impl PentestSession {
    pub fn new(target_id: String, strategy: PentestStrategy) -> Self {
        Self {
            id: None,
            target_id,
            repo_id: None,
            status: PentestStatus::Running,
            strategy,
            created_by: None,
            tool_invocations: 0,
            tool_successes: 0,
            findings_count: 0,
            exploitable_count: 0,
            started_at: Utc::now(),
            completed_at: None,
        }
    }

    pub fn success_rate(&self) -> f64 {
        if self.tool_invocations == 0 {
            return 100.0;
        }
        (self.tool_successes as f64 / self.tool_invocations as f64) * 100.0
    }
}
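`success_rate` treats an empty session as 100% rather than dividing by zero. The same arithmetic, extracted as a free function for a runnable sketch (the standalone form is hypothetical):

```rust
// Same arithmetic as PentestSession::success_rate, as a free function.
fn success_rate(tool_successes: u32, tool_invocations: u32) -> f64 {
    if tool_invocations == 0 {
        return 100.0; // no invocations yet: report a perfect rate, not NaN
    }
    (tool_successes as f64 / tool_invocations as f64) * 100.0
}

fn main() {
    assert_eq!(success_rate(0, 0), 100.0);
    assert_eq!(success_rate(3, 4), 75.0);
    assert_eq!(success_rate(1, 3), 100.0 / 3.0);
}
```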

/// Status of a node in the attack chain
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum AttackNodeStatus {
    Pending,
    Running,
    Completed,
    Failed,
    Skipped,
}

/// A single step in the LLM-driven attack chain DAG
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AttackChainNode {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    pub id: Option<bson::oid::ObjectId>,
    pub session_id: String,
    /// Unique ID for DAG references
    pub node_id: String,
    /// Parent node IDs (multiple for merge nodes)
    pub parent_node_ids: Vec<String>,
    /// Tool that was invoked
    pub tool_name: String,
    /// Input parameters passed to the tool
    pub tool_input: serde_json::Value,
    /// Output from the tool
    pub tool_output: Option<serde_json::Value>,
    pub status: AttackNodeStatus,
    /// LLM's reasoning for choosing this action
    pub llm_reasoning: String,
    /// IDs of DastFindings produced by this step
    pub findings_produced: Vec<String>,
    /// Risk score (0-100) assigned by the LLM
    pub risk_score: Option<u8>,
    #[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
    pub started_at: Option<DateTime<Utc>>,
    #[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
    pub completed_at: Option<DateTime<Utc>>,
}

impl AttackChainNode {
    pub fn new(
        session_id: String,
        node_id: String,
        tool_name: String,
        tool_input: serde_json::Value,
        llm_reasoning: String,
    ) -> Self {
        Self {
            id: None,
            session_id,
            node_id,
            parent_node_ids: Vec::new(),
            tool_name,
            tool_input,
            tool_output: None,
            status: AttackNodeStatus::Pending,
            llm_reasoning,
            findings_produced: Vec::new(),
            risk_score: None,
            started_at: None,
            completed_at: None,
        }
    }
}
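Because each node lists `parent_node_ids`, the chain is a DAG rather than a linear log: a node becomes runnable once all of its parents have completed. A std-only sketch of recovering an execution order from those edges (simplified node type, Kahn's algorithm; not code from this PR, and it assumes every listed parent exists in the input):

```rust
use std::collections::{HashMap, VecDeque};

// Simplified stand-in for AttackChainNode: just an ID plus parent IDs.
struct Node {
    node_id: &'static str,
    parent_node_ids: Vec<&'static str>,
}

/// Kahn's algorithm: nodes become ready once all their parents have run.
/// Assumes every parent ID also appears as a node in `nodes`.
fn execution_order(nodes: &[Node]) -> Vec<&'static str> {
    let mut indegree: HashMap<&'static str, usize> = nodes
        .iter()
        .map(|n| (n.node_id, n.parent_node_ids.len()))
        .collect();
    let mut children: HashMap<&'static str, Vec<&'static str>> = HashMap::new();
    for n in nodes {
        for &p in &n.parent_node_ids {
            children.entry(p).or_default().push(n.node_id);
        }
    }
    let mut ready: VecDeque<&'static str> = nodes
        .iter()
        .filter(|n| n.parent_node_ids.is_empty())
        .map(|n| n.node_id)
        .collect();
    let mut order = Vec::new();
    while let Some(id) = ready.pop_front() {
        order.push(id);
        for &c in children.get(id).into_iter().flatten() {
            let d = indegree.get_mut(c).unwrap();
            *d -= 1;
            if *d == 0 {
                ready.push_back(c);
            }
        }
    }
    order
}

fn main() {
    let nodes = vec![
        Node { node_id: "verify_sqli", parent_node_ids: vec!["recon", "openapi"] },
        Node { node_id: "recon", parent_node_ids: vec![] },
        Node { node_id: "openapi", parent_node_ids: vec!["recon"] },
    ];
    assert_eq!(execution_order(&nodes), vec!["recon", "openapi", "verify_sqli"]);
}
```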

/// Chat message within a pentest session
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PentestMessage {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    pub id: Option<bson::oid::ObjectId>,
    pub session_id: String,
    /// "user", "assistant", "tool_result", "system"
    pub role: String,
    pub content: String,
    /// Tool calls made by the assistant in this message
    pub tool_calls: Option<Vec<ToolCallRecord>>,
    /// Link to the attack chain node (for tool_result messages)
    pub attack_node_id: Option<String>,
    #[serde(with = "super::serde_helpers::bson_datetime")]
    pub created_at: DateTime<Utc>,
}

impl PentestMessage {
    pub fn user(session_id: String, content: String) -> Self {
        Self {
            id: None,
            session_id,
            role: "user".to_string(),
            content,
            tool_calls: None,
            attack_node_id: None,
            created_at: Utc::now(),
        }
    }

    pub fn assistant(session_id: String, content: String) -> Self {
        Self {
            id: None,
            session_id,
            role: "assistant".to_string(),
            content,
            tool_calls: None,
            attack_node_id: None,
            created_at: Utc::now(),
        }
    }

    pub fn tool_result(session_id: String, content: String, node_id: String) -> Self {
        Self {
            id: None,
            session_id,
            role: "tool_result".to_string(),
            content,
            tool_calls: None,
            attack_node_id: Some(node_id),
            created_at: Utc::now(),
        }
    }
}

/// Record of a tool call made by the LLM
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCallRecord {
    pub call_id: String,
    pub tool_name: String,
    pub arguments: serde_json::Value,
    pub result: Option<serde_json::Value>,
}

/// SSE event types for real-time pentest streaming
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum PentestEvent {
    /// LLM is thinking/reasoning
    Thinking { reasoning: String },
    /// A tool execution has started
    ToolStart {
        node_id: String,
        tool_name: String,
        input: serde_json::Value,
    },
    /// A tool execution completed
    ToolComplete {
        node_id: String,
        summary: String,
        findings_count: u32,
    },
    /// A new finding was discovered
    Finding { finding_id: String, title: String, severity: String },
    /// Assistant message (streaming text)
    Message { content: String },
    /// Session completed
    Complete { summary: String },
    /// Error occurred
    Error { message: String },
}
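On the wire, `#[serde(tag = "type", rename_all = "snake_case")]` serializes each variant as one JSON object carrying a `type` discriminator. A std-only sketch of the resulting SSE frames, with the JSON hand-assembled for two variants (the real code derives `Serialize` and needs no manual formatting; this sketch also skips string escaping):

```rust
// Hand-rolled approximation of the internally-tagged JSON that
// #[serde(tag = "type")] produces for two PentestEvent variants.
enum PentestEvent {
    Thinking { reasoning: String },
    Complete { summary: String },
}

fn to_sse_frame(ev: &PentestEvent) -> String {
    let json = match ev {
        PentestEvent::Thinking { reasoning } => {
            format!(r#"{{"type":"thinking","reasoning":"{reasoning}"}}"#)
        }
        PentestEvent::Complete { summary } => {
            format!(r#"{{"type":"complete","summary":"{summary}"}}"#)
        }
    };
    format!("data: {json}\n\n") // SSE framing: a "data:" line plus a blank line
}

fn main() {
    let ev = PentestEvent::Thinking { reasoning: "probing login".into() };
    assert_eq!(
        to_sse_frame(&ev),
        "data: {\"type\":\"thinking\",\"reasoning\":\"probing login\"}\n\n"
    );
}
```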

/// Aggregated stats for the pentest dashboard
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PentestStats {
    pub running_sessions: u32,
    pub total_vulnerabilities: u32,
    pub total_tool_invocations: u32,
    pub tool_success_rate: f64,
    pub severity_distribution: SeverityDistribution,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SeverityDistribution {
    pub critical: u32,
    pub high: u32,
    pub medium: u32,
    pub low: u32,
    pub info: u32,
}

/// Code context hint linking a discovered endpoint to source code
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CodeContextHint {
    /// HTTP route pattern (e.g., "GET /api/users/:id")
    pub endpoint_pattern: String,
    /// Handler function name
    pub handler_function: String,
    /// Source file path
    pub file_path: String,
    /// Relevant code snippet
    pub code_snippet: String,
    /// SAST findings associated with this code
    pub known_vulnerabilities: Vec<String>,
}
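Populating `SeverityDistribution` is a simple fold over finding severities. A sketch (severity labels assumed lowercase, and unknown labels counted as `info` — an assumption, not necessarily the dashboard's rule):

```rust
#[derive(Debug, Default, PartialEq)]
struct SeverityDistribution {
    critical: u32,
    high: u32,
    medium: u32,
    low: u32,
    info: u32,
}

// Fold an iterator of severity labels into the distribution.
// Assumption: unrecognized labels are bucketed as `info`.
fn distribute<'a>(severities: impl IntoIterator<Item = &'a str>) -> SeverityDistribution {
    let mut d = SeverityDistribution::default();
    for s in severities {
        match s {
            "critical" => d.critical += 1,
            "high" => d.high += 1,
            "medium" => d.medium += 1,
            "low" => d.low += 1,
            _ => d.info += 1,
        }
    }
    d
}

fn main() {
    let d = distribute(["high", "high", "low", "unknown"]);
    assert_eq!(d.high, 2);
    assert_eq!(d.low, 1);
    assert_eq!(d.info, 1);
    assert_eq!(d.critical, 0);
}
```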
@@ -1,9 +1,11 @@
pub mod dast_agent;
pub mod graph_builder;
pub mod issue_tracker;
pub mod pentest_tool;
pub mod scanner;

pub use dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
pub use graph_builder::{LanguageParser, ParseOutput};
pub use issue_tracker::IssueTracker;
pub use pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
pub use scanner::{ScanOutput, Scanner};
63 compliance-core/src/traits/pentest_tool.rs Normal file
@@ -0,0 +1,63 @@
use std::future::Future;
use std::pin::Pin;

use crate::error::CoreError;
use crate::models::dast::{DastFinding, DastTarget};
use crate::models::finding::Finding;
use crate::models::pentest::CodeContextHint;
use crate::models::sbom::SbomEntry;

/// Context passed to pentest tools during execution.
///
/// The HTTP client is not included here because `compliance-core` does not
/// depend on `reqwest`. Tools that need HTTP should hold their own client
/// or receive one via the `compliance-dast` orchestrator.
pub struct PentestToolContext {
    /// The DAST target being tested
    pub target: DastTarget,
    /// Session ID for this pentest run
    pub session_id: String,
    /// SAST findings for the linked repo (if any)
    pub sast_findings: Vec<Finding>,
    /// SBOM entries with known CVEs (if any)
    pub sbom_entries: Vec<SbomEntry>,
    /// Code knowledge graph hints mapping endpoints to source code
    pub code_context: Vec<CodeContextHint>,
    /// Rate limit (requests per second)
    pub rate_limit: u32,
    /// Whether destructive operations are allowed
    pub allow_destructive: bool,
}

/// Result from a pentest tool execution
pub struct PentestToolResult {
    /// Human-readable summary of what the tool found
    pub summary: String,
    /// DAST findings produced by this tool
    pub findings: Vec<DastFinding>,
    /// Tool-specific structured output data
    pub data: serde_json::Value,
}

/// A tool that the LLM pentest orchestrator can invoke.
///
/// Each tool represents a specific security testing capability
/// (e.g., SQL injection scanner, DNS checker, TLS analyzer).
/// Uses boxed futures for dyn-compatibility.
pub trait PentestTool: Send + Sync {
    /// Tool name for LLM tool_use (e.g., "sql_injection_scanner")
    fn name(&self) -> &str;

    /// Human-readable description for the LLM system prompt
    fn description(&self) -> &str;

    /// JSON Schema for the tool's input parameters
    fn input_schema(&self) -> serde_json::Value;

    /// Execute the tool with the given input
    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> Pin<Box<dyn Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>>;
}
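The boxed-future signature is what keeps `PentestTool` dyn-compatible, so the orchestrator can hold tools in a `Vec<Box<dyn PentestTool>>`. A self-contained sketch of the same pattern with the domain types stripped down to strings, plus a throwaway `block_on` so it runs without an async runtime (none of this is the crate's code; the `Tool`/`EchoTool` names are invented for the example):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Same shape as PentestTool, with domain types reduced to strings.
trait Tool: Send + Sync {
    fn name(&self) -> &str;
    fn execute<'a>(
        &'a self,
        input: &'a str,
    ) -> Pin<Box<dyn Future<Output = Result<String, String>> + Send + 'a>>;
}

struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &str {
        "echo"
    }
    // Box::pin(async move { .. }) erases the future's concrete type behind
    // Pin<Box<dyn ..>>, which is what makes the trait object-safe.
    fn execute<'a>(
        &'a self,
        input: &'a str,
    ) -> Pin<Box<dyn Future<Output = Result<String, String>> + Send + 'a>> {
        Box::pin(async move { Ok(format!("echo: {input}")) })
    }
}

fn noop_waker() -> Waker {
    fn raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw_waker() }
        let vtable = &RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null::<()>(), vtable)
    }
    unsafe { Waker::from_raw(raw_waker()) }
}

// Minimal executor: just enough to poll the already-ready future above.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` lives on this stack frame and is never moved after pinning.
    let mut pinned = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = pinned.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(EchoTool)];
    assert_eq!(tools[0].name(), "echo");
    let out = block_on(tools[0].execute("ping")).unwrap();
    assert_eq!(out, "echo: ping");
}
```

The alternative (an `async fn` in the trait) would not be object-safe on stable Rust without boxing, which is why the PR spells out the `Pin<Box<dyn Future ..>>` return type.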
@@ -2767,3 +2767,467 @@ tbody tr:last-child td {
.sbom-diff-row-changed {
    border-left: 3px solid var(--warning);
}

/* ═══════════════════════════════════
   ATTACK CHAIN VISUALIZATION
   ═══════════════════════════════════ */

/* KPI bar */
.ac-kpi-bar {
    display: flex;
    gap: 2px;
    margin-bottom: 16px;
}
.ac-kpi-card {
    flex: 1;
    background: var(--bg-secondary);
    border: 1px solid var(--border-color);
    padding: 12px 14px;
    position: relative;
    overflow: hidden;
}
.ac-kpi-card:first-child { border-radius: 10px 0 0 10px; }
.ac-kpi-card:last-child { border-radius: 0 10px 10px 0; }
.ac-kpi-card::before {
    content: '';
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    height: 2px;
}
.ac-kpi-card:nth-child(1)::before { background: var(--accent, #3b82f6); opacity: 0.4; }
.ac-kpi-card:nth-child(2)::before { background: var(--danger, #dc2626); opacity: 0.5; }
.ac-kpi-card:nth-child(3)::before { background: var(--success, #16a34a); opacity: 0.4; }
.ac-kpi-card:nth-child(4)::before { background: var(--warning, #d97706); opacity: 0.4; }

.ac-kpi-value {
    font-family: var(--font-display);
    font-size: 24px;
    font-weight: 800;
    line-height: 1;
    letter-spacing: -0.03em;
}
.ac-kpi-label {
    font-family: var(--font-mono, monospace);
    font-size: 9px;
    color: var(--text-tertiary, #6b7280);
    text-transform: uppercase;
    letter-spacing: 0.08em;
    margin-top: 4px;
}

/* Phase progress rail */
.ac-phase-rail {
    display: flex;
    align-items: flex-start;
    margin-bottom: 14px;
    position: relative;
    padding: 0 8px;
}
.ac-phase-rail::before {
    content: '';
    position: absolute;
    top: 7px;
    left: 8px;
    right: 8px;
    height: 2px;
    background: var(--border-color);
    z-index: 0;
}

.ac-rail-node {
    display: flex;
    flex-direction: column;
    align-items: center;
    position: relative;
    z-index: 1;
    cursor: pointer;
    min-width: 56px;
    flex: 1;
    transition: all 0.15s;
}
.ac-rail-node:hover .ac-rail-dot { transform: scale(1.25); }
.ac-rail-node.active .ac-rail-label { color: var(--accent, #3b82f6); }
.ac-rail-node.active .ac-rail-dot { box-shadow: 0 0 0 3px rgba(59,130,246,0.2), 0 0 12px rgba(59,130,246,0.15); }

.ac-rail-dot {
    width: 14px;
    height: 14px;
    border-radius: 50%;
    border: 2.5px solid var(--bg-primary, #0f172a);
    transition: transform 0.2s cubic-bezier(0.16,1,0.3,1);
    flex-shrink: 0;
}
.ac-rail-dot.done { background: var(--success, #16a34a); box-shadow: 0 0 8px rgba(22,163,74,0.25); }
.ac-rail-dot.running { background: var(--warning, #d97706); box-shadow: 0 0 10px rgba(217,119,6,0.35); animation: ac-dot-pulse 2s ease-in-out infinite; }
.ac-rail-dot.pending { background: var(--text-tertiary, #6b7280); opacity: 0.5; }
.ac-rail-dot.mixed { background: conic-gradient(var(--success, #16a34a) 0deg 270deg, var(--danger, #dc2626) 270deg 360deg); box-shadow: 0 0 8px rgba(22,163,74,0.2); }

@keyframes ac-dot-pulse {
    0%, 100% { box-shadow: 0 0 8px rgba(217,119,6,0.35); }
    50% { box-shadow: 0 0 18px rgba(217,119,6,0.55); }
}

.ac-rail-label {
    font-family: var(--font-mono, monospace);
    font-size: 9px;
    color: var(--text-tertiary, #6b7280);
    margin-top: 5px;
    letter-spacing: 0.04em;
    text-transform: uppercase;
    white-space: nowrap;
    transition: color 0.15s;
}
.ac-rail-findings {
    font-family: var(--font-mono, monospace);
    font-size: 9px;
    font-weight: 600;
    margin-top: 1px;
}
.ac-rail-findings.has { color: var(--danger, #dc2626); }
.ac-rail-findings.none { color: var(--text-tertiary, #6b7280); opacity: 0.4; }

.ac-rail-heatmap {
    display: flex;
    gap: 2px;
    margin-top: 3px;
}
.ac-hm-cell {
    width: 7px;
    height: 7px;
    border-radius: 1.5px;
}
.ac-hm-cell.ok { background: var(--success, #16a34a); opacity: 0.5; }
.ac-hm-cell.fail { background: var(--danger, #dc2626); opacity: 0.65; }
.ac-hm-cell.run { background: var(--warning, #d97706); opacity: 0.5; animation: ac-pulse 1.5s ease-in-out infinite; }
.ac-hm-cell.wait { background: var(--text-tertiary, #6b7280); opacity: 0.15; }

.ac-rail-bar {
    flex: 1;
    height: 2px;
    margin-top: 7px;
    position: relative;
    z-index: 1;
}
.ac-rail-bar-inner {
    height: 100%;
    border-radius: 1px;
}
.ac-rail-bar-inner.done { background: var(--success, #16a34a); opacity: 0.35; }
.ac-rail-bar-inner.running { background: linear-gradient(to right, var(--success, #16a34a), var(--warning, #d97706)); opacity: 0.35; }

/* Progress track */
.ac-progress-track {
    height: 3px;
    background: var(--border-color);
    border-radius: 2px;
    overflow: hidden;
    margin-bottom: 10px;
}
.ac-progress-fill {
    height: 100%;
    border-radius: 2px;
    background: linear-gradient(90deg, var(--success, #16a34a) 0%, var(--accent, #3b82f6) 100%);
    transition: width 0.6s cubic-bezier(0.16,1,0.3,1);
}

/* Expand all controls */
.ac-controls {
    display: flex;
    justify-content: flex-end;
    margin-bottom: 6px;
}
.ac-btn-toggle {
    font-family: var(--font-body);
    font-size: 11px;
    color: var(--accent, #3b82f6);
    background: none;
    border: 1px solid transparent;
    cursor: pointer;
    padding: 3px 10px;
    border-radius: 4px;
    transition: all 0.15s;
}
.ac-btn-toggle:hover {
    background: rgba(59,130,246,0.08);
    border-color: rgba(59,130,246,0.12);
}

/* Phase accordion */
.ac-phases {
    display: flex;
    flex-direction: column;
    gap: 2px;
}

.ac-phase {
    animation: ac-phase-in 0.35s cubic-bezier(0.16,1,0.3,1) both;
}
@keyframes ac-phase-in {
    from { opacity: 0; transform: translateY(6px); }
    to { opacity: 1; transform: translateY(0); }
}

.ac-phase-header {
    display: flex;
    align-items: center;
    gap: 10px;
    padding: 9px 14px;
    background: var(--bg-secondary);
    border: 1px solid var(--border-color);
    border-radius: 10px;
    cursor: pointer;
    user-select: none;
    transition: background 0.15s;
}
.ac-phase.open .ac-phase-header {
    border-radius: 10px 10px 0 0;
}
.ac-phase-header:hover {
    background: var(--bg-tertiary);
}

.ac-phase-num {
    font-family: var(--font-mono, monospace);
    font-size: 10px;
    font-weight: 600;
    color: var(--accent, #3b82f6);
    background: rgba(59,130,246,0.08);
    padding: 2px 8px;
    border-radius: 4px;
    letter-spacing: 0.04em;
    white-space: nowrap;
    border: 1px solid rgba(59,130,246,0.1);
}

.ac-phase-title {
    font-family: var(--font-display);
    font-size: 13px;
    font-weight: 600;
    color: var(--text-primary);
    flex: 1;
}

.ac-phase-dots {
    display: flex;
    gap: 3px;
    align-items: center;
}
.ac-phase-dot {
    width: 6px;
    height: 6px;
    border-radius: 50%;
    flex-shrink: 0;
}
.ac-phase-dot.completed { background: var(--success, #16a34a); }
.ac-phase-dot.failed { background: var(--danger, #dc2626); }
.ac-phase-dot.running { background: var(--warning, #d97706); animation: ac-pulse 1.5s ease-in-out infinite; }
.ac-phase-dot.pending { background: var(--text-tertiary, #6b7280); opacity: 0.4; }

@keyframes ac-pulse {
    0%, 100% { opacity: 1; }
    50% { opacity: 0.35; }
}

.ac-phase-meta {
    display: flex;
    align-items: center;
    gap: 12px;
    font-family: var(--font-mono, monospace);
    font-size: 11px;
    color: var(--text-secondary);
}
.ac-phase-meta .findings-ct { color: var(--danger, #dc2626); font-weight: 600; }
.ac-phase-meta .running-ct { color: var(--warning, #d97706); font-weight: 500; }

.ac-phase-chevron {
    color: var(--text-tertiary, #6b7280);
    font-size: 11px;
    transition: transform 0.25s cubic-bezier(0.16,1,0.3,1);
    width: 14px;
    text-align: center;
}
.ac-phase.open .ac-phase-chevron {
    transform: rotate(90deg);
}

.ac-phase-body {
    max-height: 0;
    overflow: hidden;
    transition: max-height 0.35s cubic-bezier(0.16,1,0.3,1);
    background: var(--bg-secondary);
    border-left: 1px solid var(--border-color);
    border-right: 1px solid var(--border-color);
    border-bottom: 1px solid var(--border-color);
    border-radius: 0 0 10px 10px;
}
.ac-phase.open .ac-phase-body {
    max-height: 2000px;
}
.ac-phase-body-inner {
    padding: 4px 6px;
    display: flex;
    flex-direction: column;
    gap: 1px;
}

/* Tool rows */
.ac-tool-row {
    display: grid;
    grid-template-columns: 5px 26px 1fr auto auto auto;
    align-items: center;
    gap: 8px;
    padding: 7px 10px;
    border-radius: 6px;
    cursor: pointer;
    transition: background 0.12s;
}
.ac-tool-row:hover {
    background: rgba(255,255,255,0.02);
}
.ac-tool-row.expanded {
    background: rgba(59,130,246,0.03);
}
.ac-tool-row.is-pending {
    opacity: 0.45;
    cursor: default;
}

.ac-status-bar {
    width: 4px;
    height: 26px;
    border-radius: 2px;
    flex-shrink: 0;
}
.ac-status-bar.completed { background: var(--success, #16a34a); }
.ac-status-bar.failed { background: var(--danger, #dc2626); }
.ac-status-bar.running { background: var(--warning, #d97706); animation: ac-pulse 1.5s ease-in-out infinite; }
.ac-status-bar.pending { background: var(--text-tertiary, #6b7280); opacity: 0.25; }

.ac-tool-icon {
    font-size: 17px;
    text-align: center;
    line-height: 1;
}
.ac-tool-info { min-width: 0; }
.ac-tool-name {
    font-size: 12.5px;
    font-weight: 600;
    color: var(--text-primary);
    white-space: nowrap;
    overflow: hidden;
    text-overflow: ellipsis;
}

/* Category chips */
.ac-cat-chip {
    font-family: var(--font-mono, monospace);
    font-size: 9px;
    font-weight: 500;
    padding: 1px 6px;
    border-radius: 3px;
    display: inline-block;
    letter-spacing: 0.02em;
}
.ac-cat-chip.recon { color: #38bdf8; background: rgba(56,189,248,0.1); }
.ac-cat-chip.api { color: #818cf8; background: rgba(129,140,248,0.1); }
.ac-cat-chip.headers { color: #06b6d4; background: rgba(6,182,212,0.1); }
.ac-cat-chip.csp { color: #d946ef; background: rgba(217,70,239,0.1); }
.ac-cat-chip.cookies { color: #f59e0b; background: rgba(245,158,11,0.1); }
.ac-cat-chip.logs { color: #78716c; background: rgba(120,113,108,0.1); }
.ac-cat-chip.ratelimit { color: #64748b; background: rgba(100,116,139,0.1); }
.ac-cat-chip.cors { color: #8b5cf6; background: rgba(139,92,246,0.1); }
.ac-cat-chip.tls { color: #14b8a6; background: rgba(20,184,166,0.1); }
.ac-cat-chip.redirect { color: #fb923c; background: rgba(251,146,60,0.1); }
.ac-cat-chip.email { color: #0ea5e9; background: rgba(14,165,233,0.1); }
.ac-cat-chip.auth { color: #f43f5e; background: rgba(244,63,94,0.1); }
.ac-cat-chip.xss { color: #f97316; background: rgba(249,115,22,0.1); }
.ac-cat-chip.sqli { color: #ef4444; background: rgba(239,68,68,0.1); }
.ac-cat-chip.ssrf { color: #a855f7; background: rgba(168,85,247,0.1); }
.ac-cat-chip.idor { color: #ec4899; background: rgba(236,72,153,0.1); }
.ac-cat-chip.fuzzer { color: #a78bfa; background: rgba(167,139,250,0.1); }
.ac-cat-chip.cve { color: #dc2626; background: rgba(220,38,38,0.1); }
.ac-cat-chip.default { color: #94a3b8; background: rgba(148,163,184,0.1); }

.ac-tool-duration {
    font-family: var(--font-mono, monospace);
    font-size: 10px;
    color: var(--text-tertiary, #6b7280);
    white-space: nowrap;
    min-width: 48px;
    text-align: right;
}
.ac-tool-duration.running-text {
    color: var(--warning, #d97706);
    font-weight: 500;
}

.ac-findings-pill {
    font-family: var(--font-mono, monospace);
    font-size: 10px;
    font-weight: 700;
    display: inline-flex;
    align-items: center;
    justify-content: center;
|
||||||
|
min-width: 22px;
|
||||||
|
padding: 1px 7px;
|
||||||
|
border-radius: 9px;
|
||||||
|
line-height: 1.4;
|
||||||
|
text-align: center;
|
||||||
|
}
|
||||||
|
.ac-findings-pill.has { background: rgba(220,38,38,0.12); color: var(--danger, #dc2626); }
|
||||||
|
.ac-findings-pill.zero { background: transparent; color: var(--text-tertiary, #6b7280); font-weight: 400; opacity: 0.5; }
|
||||||
|
|
||||||
|
.ac-risk-val {
|
||||||
|
font-family: var(--font-mono, monospace);
|
||||||
|
font-size: 10px;
|
||||||
|
font-weight: 700;
|
||||||
|
min-width: 32px;
|
||||||
|
text-align: right;
|
||||||
|
}
|
||||||
|
.ac-risk-val.high { color: var(--danger, #dc2626); }
|
||||||
|
.ac-risk-val.medium { color: var(--warning, #d97706); }
|
||||||
|
.ac-risk-val.low { color: var(--text-secondary); }
|
||||||
|
.ac-risk-val.none { color: transparent; }
|
||||||
|
|
||||||
|
/* Tool detail (expanded) */
|
||||||
|
.ac-tool-detail {
|
||||||
|
max-height: 0;
|
||||||
|
overflow: hidden;
|
||||||
|
transition: max-height 0.28s cubic-bezier(0.16,1,0.3,1);
|
||||||
|
}
|
||||||
|
.ac-tool-detail.open {
|
||||||
|
max-height: 300px;
|
||||||
|
}
|
||||||
|
.ac-tool-detail-inner {
|
||||||
|
padding: 6px 10px 10px 49px;
|
||||||
|
font-size: 12px;
|
||||||
|
line-height: 1.55;
|
||||||
|
color: var(--text-secondary);
|
||||||
|
}
|
||||||
|
.ac-reasoning-block {
|
||||||
|
background: rgba(59,130,246,0.03);
|
||||||
|
border-left: 2px solid var(--accent, #3b82f6);
|
||||||
|
padding: 7px 12px;
|
||||||
|
border-radius: 0 6px 6px 0;
|
||||||
|
font-style: italic;
|
||||||
|
margin-bottom: 8px;
|
||||||
|
color: var(--text-secondary);
|
||||||
|
}
|
||||||
|
.ac-detail-grid {
|
||||||
|
display: grid;
|
||||||
|
grid-template-columns: auto 1fr;
|
||||||
|
gap: 3px 14px;
|
||||||
|
font-family: var(--font-mono, monospace);
|
||||||
|
font-size: 10px;
|
||||||
|
}
|
||||||
|
.ac-detail-label {
|
||||||
|
color: var(--text-tertiary, #6b7280);
|
||||||
|
text-transform: uppercase;
|
||||||
|
font-size: 9px;
|
||||||
|
letter-spacing: 0.04em;
|
||||||
|
}
|
||||||
|
.ac-detail-value {
|
||||||
|
color: var(--text-secondary);
|
||||||
|
}
|
||||||
|
|||||||
@@ -38,6 +38,10 @@ pub enum Route {
    DastFindingsPage {},
    #[route("/dast/findings/:id")]
    DastFindingDetailPage { id: String },
    #[route("/pentest")]
    PentestDashboardPage {},
    #[route("/pentest/:session_id")]
    PentestSessionPage { session_id: String },
    #[route("/mcp-servers")]
    McpServersPage {},
    #[route("/settings")]
@@ -49,7 +53,6 @@ const MAIN_CSS: Asset = asset!("/assets/main.css");
const TAILWIND_CSS: Asset = asset!("/assets/tailwind.css");
const VIS_NETWORK_JS: Asset = asset!("/assets/vis-network.min.js");
const GRAPH_VIZ_JS: Asset = asset!("/assets/graph-viz.js");

#[component]
pub fn App() -> Element {
    rsx! {
@@ -47,6 +47,11 @@ pub fn Sidebar() -> Element {
            route: Route::DastOverviewPage {},
            icon: rsx! { Icon { icon: BsBug, width: 18, height: 18 } },
        },
        NavItem {
            label: "Pentest",
            route: Route::PentestDashboardPage {},
            icon: rsx! { Icon { icon: BsLightningCharge, width: 18, height: 18 } },
        },
        NavItem {
            label: "Settings",
            route: Route::SettingsPage {},
@@ -78,6 +83,7 @@ pub fn Sidebar() -> Element {
        (Route::DastTargetsPage {}, Route::DastOverviewPage {}) => true,
        (Route::DastFindingsPage {}, Route::DastOverviewPage {}) => true,
        (Route::DastFindingDetailPage { .. }, Route::DastOverviewPage {}) => true,
        (Route::PentestSessionPage { .. }, Route::PentestDashboardPage {}) => true,
        (a, b) => a == b,
    };
    let class = if is_active { "nav-item active" } else { "nav-item" };
@@ -7,6 +7,7 @@ pub mod findings;
pub mod graph;
pub mod issues;
pub mod mcp;
pub mod pentest;
#[allow(clippy::too_many_arguments)]
pub mod repositories;
pub mod sbom;
308 compliance-dashboard/src/infrastructure/pentest.rs (new file)
@@ -0,0 +1,308 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};

use super::dast::DastFindingsResponse;

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct PentestSessionsResponse {
    pub data: Vec<serde_json::Value>,
    pub total: Option<u64>,
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct PentestSessionResponse {
    pub data: serde_json::Value,
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct PentestMessagesResponse {
    pub data: Vec<serde_json::Value>,
    pub total: Option<u64>,
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct PentestStatsResponse {
    pub data: serde_json::Value,
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct AttackChainResponse {
    pub data: Vec<serde_json::Value>,
}

#[server]
pub async fn fetch_pentest_sessions() -> Result<PentestSessionsResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;

    // Fetch sessions
    let url = format!("{}/api/v1/pentest/sessions", state.agent_api_url);
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let mut body: PentestSessionsResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;

    // Fetch DAST targets to resolve target names
    let targets_url = format!("{}/api/v1/dast/targets", state.agent_api_url);
    if let Ok(tresp) = reqwest::get(&targets_url).await {
        if let Ok(tbody) = tresp.json::<serde_json::Value>().await {
            if let Some(targets) = tbody.get("data").and_then(|v| v.as_array()) {
                // Build target_id -> name lookup
                let target_map: std::collections::HashMap<String, String> = targets
                    .iter()
                    .filter_map(|t| {
                        let id = t.get("_id")?.get("$oid")?.as_str()?.to_string();
                        let name = t.get("name")?.as_str()?.to_string();
                        Some((id, name))
                    })
                    .collect();

                // Enrich sessions with target_name; clone the resolved name first
                // so the immutable borrow of `session` ends before we mutate it.
                for session in body.data.iter_mut() {
                    let name = session
                        .get("target_id")
                        .and_then(|v| v.as_str())
                        .and_then(|tid| target_map.get(tid))
                        .cloned();
                    if let (Some(name), Some(obj)) = (name, session.as_object_mut()) {
                        obj.insert("target_name".to_string(), serde_json::Value::String(name));
                    }
                }
            }
        }
    }

    Ok(body)
}

#[server]
pub async fn fetch_pentest_session(id: String) -> Result<PentestSessionResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!("{}/api/v1/pentest/sessions/{id}", state.agent_api_url);
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let mut body: PentestSessionResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;

    // Resolve target name from the targets list; clone the id so the borrow of
    // `body.data` ends before the mutable insert below.
    let tid = body
        .data
        .get("target_id")
        .and_then(|v| v.as_str())
        .map(str::to_string);
    if let Some(tid) = tid {
        let targets_url = format!("{}/api/v1/dast/targets", state.agent_api_url);
        if let Ok(tresp) = reqwest::get(&targets_url).await {
            if let Ok(tbody) = tresp.json::<serde_json::Value>().await {
                if let Some(targets) = tbody.get("data").and_then(|v| v.as_array()) {
                    for t in targets {
                        let t_id = t
                            .get("_id")
                            .and_then(|v| v.get("$oid"))
                            .and_then(|v| v.as_str())
                            .unwrap_or("");
                        if t_id == tid {
                            if let Some(name) = t.get("name").and_then(|v| v.as_str()) {
                                if let Some(obj) = body.data.as_object_mut() {
                                    obj.insert(
                                        "target_name".to_string(),
                                        serde_json::Value::String(name.to_string()),
                                    );
                                }
                            }
                            break;
                        }
                    }
                }
            }
        }
    }

    Ok(body)
}

#[server]
pub async fn fetch_pentest_messages(
    session_id: String,
) -> Result<PentestMessagesResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/messages",
        state.agent_api_url
    );
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: PentestMessagesResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[server]
pub async fn fetch_pentest_stats() -> Result<PentestStatsResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!("{}/api/v1/pentest/stats", state.agent_api_url);
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: PentestStatsResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[server]
pub async fn fetch_attack_chain(
    session_id: String,
) -> Result<AttackChainResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/attack-chain",
        state.agent_api_url
    );
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: AttackChainResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[server]
pub async fn create_pentest_session(
    target_id: String,
    strategy: String,
    message: String,
) -> Result<PentestSessionResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!("{}/api/v1/pentest/sessions", state.agent_api_url);
    let client = reqwest::Client::new();
    let resp = client
        .post(&url)
        .json(&serde_json::json!({
            "target_id": target_id,
            "strategy": strategy,
            "message": message,
        }))
        .send()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: PentestSessionResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[server]
pub async fn send_pentest_message(
    session_id: String,
    message: String,
) -> Result<PentestMessagesResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/chat",
        state.agent_api_url
    );
    let client = reqwest::Client::new();
    let resp = client
        .post(&url)
        .json(&serde_json::json!({
            "message": message,
        }))
        .send()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: PentestMessagesResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[server]
pub async fn stop_pentest_session(session_id: String) -> Result<(), ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/stop",
        state.agent_api_url
    );
    let client = reqwest::Client::new();
    client
        .post(&url)
        .send()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(())
}

#[server]
pub async fn fetch_pentest_findings(
    session_id: String,
) -> Result<DastFindingsResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/findings",
        state.agent_api_url
    );
    let resp = reqwest::get(&url)
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    let body: DastFindingsResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ExportReportResponse {
    pub archive_base64: String,
    pub sha256: String,
    pub filename: String,
}

#[server]
pub async fn export_pentest_report(
    session_id: String,
    password: String,
    requester_name: String,
    requester_email: String,
) -> Result<ExportReportResponse, ServerFnError> {
    let state: super::server_state::ServerState =
        dioxus_fullstack::FullstackContext::extract().await?;
    let url = format!(
        "{}/api/v1/pentest/sessions/{session_id}/export",
        state.agent_api_url
    );
    let client = reqwest::Client::new();
    let resp = client
        .post(&url)
        .json(&serde_json::json!({
            "password": password,
            "requester_name": requester_name,
            "requester_email": requester_email,
        }))
        .send()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    if !resp.status().is_success() {
        let text = resp.text().await.unwrap_or_default();
        return Err(ServerFnError::new(format!("Export failed: {text}")));
    }
    let body: ExportReportResponse = resp
        .json()
        .await
        .map_err(|e| ServerFnError::new(e.to_string()))?;
    Ok(body)
}
@@ -11,6 +11,11 @@ use crate::infrastructure::dast::fetch_dast_findings;
pub fn DastFindingsPage() -> Element {
    let findings = use_resource(|| async { fetch_dast_findings().await.ok() });

    let mut filter_severity = use_signal(|| "all".to_string());
    let mut filter_vuln_type = use_signal(|| "all".to_string());
    let mut filter_exploitable = use_signal(|| "all".to_string());
    let mut search_text = use_signal(String::new);

    rsx! {
        div { class: "back-nav",
            button {
@@ -26,14 +31,105 @@ pub fn DastFindingsPage() -> Element {
            description: "Vulnerabilities discovered through dynamic application security testing",
        }

        // Filter bar
        div { style: "display: flex; gap: 10px; margin-bottom: 12px; flex-wrap: wrap; align-items: center;",
            // Search
            div { style: "flex: 1; min-width: 180px;",
                input {
                    class: "chat-input",
                    style: "width: 100%; padding: 6px 10px; font-size: 0.85rem;",
                    placeholder: "Search title or endpoint...",
                    value: "{search_text}",
                    oninput: move |e| search_text.set(e.value()),
                }
            }
            // Severity
            select {
                style: "padding: 6px 10px; border-radius: 6px; border: 1px solid var(--border-color); background: var(--bg-secondary); color: var(--text-primary); font-size: 0.85rem;",
                value: "{filter_severity}",
                onchange: move |e| filter_severity.set(e.value()),
                option { value: "all", "All Severities" }
                option { value: "critical", "Critical" }
                option { value: "high", "High" }
                option { value: "medium", "Medium" }
                option { value: "low", "Low" }
                option { value: "info", "Info" }
            }
            // Vuln type
            select {
                style: "padding: 6px 10px; border-radius: 6px; border: 1px solid var(--border-color); background: var(--bg-secondary); color: var(--text-primary); font-size: 0.85rem;",
                value: "{filter_vuln_type}",
                onchange: move |e| filter_vuln_type.set(e.value()),
                option { value: "all", "All Types" }
                option { value: "sql_injection", "SQL Injection" }
                option { value: "xss", "XSS" }
                option { value: "auth_bypass", "Auth Bypass" }
                option { value: "ssrf", "SSRF" }
                option { value: "api_misconfiguration", "API Misconfiguration" }
                option { value: "open_redirect", "Open Redirect" }
                option { value: "idor", "IDOR" }
                option { value: "information_disclosure", "Information Disclosure" }
                option { value: "security_misconfiguration", "Security Misconfiguration" }
                option { value: "broken_auth", "Broken Auth" }
                option { value: "dns_misconfiguration", "DNS Misconfiguration" }
                option { value: "email_security", "Email Security" }
                option { value: "tls_misconfiguration", "TLS Misconfiguration" }
                option { value: "cookie_security", "Cookie Security" }
                option { value: "csp_issue", "CSP Issue" }
                option { value: "cors_misconfiguration", "CORS Misconfiguration" }
                option { value: "rate_limit_absent", "Rate Limit Absent" }
                option { value: "console_log_leakage", "Console Log Leakage" }
                option { value: "security_header_missing", "Security Header Missing" }
                option { value: "known_cve_exploit", "Known CVE Exploit" }
                option { value: "other", "Other" }
            }
            // Exploitable
            select {
                style: "padding: 6px 10px; border-radius: 6px; border: 1px solid var(--border-color); background: var(--bg-secondary); color: var(--text-primary); font-size: 0.85rem;",
                value: "{filter_exploitable}",
                onchange: move |e| filter_exploitable.set(e.value()),
                option { value: "all", "All" }
                option { value: "yes", "Exploitable" }
                option { value: "no", "Unconfirmed" }
            }
        }

        div { class: "card",
            match &*findings.read() {
                Some(Some(data)) => {
-                   let finding_list = &data.data;
-                   if finding_list.is_empty() {
-                       rsx! { p { "No DAST findings yet. Run a scan to discover vulnerabilities." } }
+                   let sev_filter = filter_severity.read().clone();
+                   let vt_filter = filter_vuln_type.read().clone();
+                   let exp_filter = filter_exploitable.read().clone();
+                   let search = search_text.read().to_lowercase();
+
+                   let filtered: Vec<_> = data.data.iter().filter(|f| {
+                       let severity = f.get("severity").and_then(|v| v.as_str()).unwrap_or("info");
+                       let vuln_type = f.get("vuln_type").and_then(|v| v.as_str()).unwrap_or("");
+                       let exploitable = f.get("exploitable").and_then(|v| v.as_bool()).unwrap_or(false);
+                       let title = f.get("title").and_then(|v| v.as_str()).unwrap_or("").to_lowercase();
+                       let endpoint = f.get("endpoint").and_then(|v| v.as_str()).unwrap_or("").to_lowercase();
+
+                       (sev_filter == "all" || severity == sev_filter)
+                           && (vt_filter == "all" || vuln_type == vt_filter)
+                           && match exp_filter.as_str() {
+                               "yes" => exploitable,
+                               "no" => !exploitable,
+                               _ => true,
+                           }
+                           && (search.is_empty() || title.contains(&search) || endpoint.contains(&search))
+                   }).collect();
+
+                   if filtered.is_empty() {
+                       if data.data.is_empty() {
+                           rsx! { p { style: "padding: 16px;", "No DAST findings yet. Run a scan to discover vulnerabilities." } }
+                       } else {
+                           rsx! { p { style: "padding: 16px; color: var(--text-secondary);", "No findings match the current filters." } }
+                       }
                    } else {
                        rsx! {
+                           div { style: "padding: 8px 16px; font-size: 0.8rem; color: var(--text-secondary);",
+                               "Showing {filtered.len()} of {data.data.len()} findings"
+                           }
                            table { class: "table",
                                thead {
                                    tr {
@@ -46,7 +142,7 @@ pub fn DastFindingsPage() -> Element {
                                    }
                                }
                                tbody {
-                                   for finding in finding_list {
+                                   for finding in filtered {
                                        {
                                            let id = finding.get("_id").and_then(|v| v.get("$oid")).and_then(|v| v.as_str()).unwrap_or("").to_string();
                                            let severity = finding.get("severity").and_then(|v| v.as_str()).unwrap_or("info").to_string();
@@ -12,6 +12,8 @@ pub mod impact_analysis;
pub mod issues;
pub mod mcp_servers;
pub mod overview;
pub mod pentest_dashboard;
pub mod pentest_session;
pub mod repositories;
pub mod sbom;
pub mod settings;
@@ -30,6 +32,8 @@ pub use impact_analysis::ImpactAnalysisPage;
pub use issues::IssuesPage;
pub use mcp_servers::McpServersPage;
pub use overview::OverviewPage;
pub use pentest_dashboard::PentestDashboardPage;
pub use pentest_session::PentestSessionPage;
pub use repositories::RepositoriesPage;
pub use sbom::SbomPage;
pub use settings::SettingsPage;
398 compliance-dashboard/src/pages/pentest_dashboard.rs (new file)
@@ -0,0 +1,398 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;

use crate::app::Route;
use crate::components::page_header::PageHeader;
use crate::infrastructure::dast::fetch_dast_targets;
use crate::infrastructure::pentest::{
    create_pentest_session, fetch_pentest_sessions, fetch_pentest_stats, stop_pentest_session,
};

#[component]
pub fn PentestDashboardPage() -> Element {
    let mut sessions = use_resource(|| async { fetch_pentest_sessions().await.ok() });
    let stats = use_resource(|| async { fetch_pentest_stats().await.ok() });
    let targets = use_resource(|| async { fetch_dast_targets().await.ok() });

    let mut show_modal = use_signal(|| false);
    let mut new_target_id = use_signal(String::new);
    let mut new_strategy = use_signal(|| "comprehensive".to_string());
    let mut new_message = use_signal(String::new);
    let mut creating = use_signal(|| false);

    let on_create = move |_| {
        let tid = new_target_id.read().clone();
        let strat = new_strategy.read().clone();
        let msg = new_message.read().clone();
        if tid.is_empty() || msg.is_empty() {
            return;
        }
        creating.set(true);
        spawn(async move {
            match create_pentest_session(tid, strat, msg).await {
                Ok(resp) => {
                    let session_id = resp
                        .data
                        .get("_id")
                        .and_then(|v| v.get("$oid"))
                        .and_then(|v| v.as_str())
                        .unwrap_or("")
                        .to_string();
                    creating.set(false);
                    show_modal.set(false);
                    new_target_id.set(String::new());
                    new_message.set(String::new());
                    if !session_id.is_empty() {
                        navigator().push(Route::PentestSessionPage { session_id });
                    } else {
                        sessions.restart();
                    }
                }
                Err(_) => {
                    creating.set(false);
                }
            }
        });
    };

    // Extract stats values
    let running_sessions = {
        let s = stats.read();
        match &*s {
            Some(Some(data)) => data
                .data
                .get("running_sessions")
                .and_then(|v| v.as_u64())
                .unwrap_or(0),
            _ => 0,
        }
    };
    let total_vulns = {
        let s = stats.read();
        match &*s {
            Some(Some(data)) => data
                .data
                .get("total_vulnerabilities")
                .and_then(|v| v.as_u64())
                .unwrap_or(0),
            _ => 0,
        }
    };
    let tool_invocations = {
        let s = stats.read();
        match &*s {
            Some(Some(data)) => data
                .data
                .get("total_tool_invocations")
                .and_then(|v| v.as_u64())
                .unwrap_or(0),
            _ => 0,
        }
    };
    let success_rate = {
        let s = stats.read();
        match &*s {
            Some(Some(data)) => data
                .data
                .get("tool_success_rate")
                .and_then(|v| v.as_f64())
                .unwrap_or(0.0),
            _ => 0.0,
        }
    };

    // Severity counts from stats (nested under severity_distribution)
    let sev_dist = {
        let s = stats.read();
        match &*s {
            Some(Some(data)) => data
                .data
                .get("severity_distribution")
                .cloned()
                .unwrap_or(serde_json::Value::Null),
            _ => serde_json::Value::Null,
        }
    };
    let severity_critical = sev_dist.get("critical").and_then(|v| v.as_u64()).unwrap_or(0);
    let severity_high = sev_dist.get("high").and_then(|v| v.as_u64()).unwrap_or(0);
    let severity_medium = sev_dist.get("medium").and_then(|v| v.as_u64()).unwrap_or(0);
    let severity_low = sev_dist.get("low").and_then(|v| v.as_u64()).unwrap_or(0);

    rsx! {
        PageHeader {
            title: "Pentest Dashboard",
            description: "AI-powered penetration testing sessions — autonomous security assessment",
        }

        // Stat cards
        div { class: "stat-cards", style: "margin-bottom: 24px;",
            div { class: "stat-card-item",
                div { class: "stat-card-value", "{running_sessions}" }
                div { class: "stat-card-label",
                    Icon { icon: BsPlayCircle, width: 14, height: 14 }
                    " Running Sessions"
                }
            }
            div { class: "stat-card-item",
                div { class: "stat-card-value", "{total_vulns}" }
                div { class: "stat-card-label",
                    Icon { icon: BsShieldExclamation, width: 14, height: 14 }
                    " Total Vulnerabilities"
                }
            }
            div { class: "stat-card-item",
                div { class: "stat-card-value", "{tool_invocations}" }
                div { class: "stat-card-label",
                    Icon { icon: BsWrench, width: 14, height: 14 }
                    " Tool Invocations"
                }
            }
            div { class: "stat-card-item",
                div { class: "stat-card-value", "{success_rate:.0}%" }
                div { class: "stat-card-label",
                    Icon { icon: BsCheckCircle, width: 14, height: 14 }
                    " Success Rate"
                }
            }
        }

        // Severity distribution
        div { class: "card", style: "margin-bottom: 24px; padding: 16px;",
            div { style: "display: flex; align-items: center; gap: 16px; flex-wrap: wrap;",
                span { style: "font-weight: 600; color: var(--text-secondary); font-size: 0.85rem;", "Severity Distribution" }
                span {
                    class: "badge",
                    style: "background: #dc2626; color: #fff;",
                    "Critical: {severity_critical}"
                }
                span {
                    class: "badge",
                    style: "background: #ea580c; color: #fff;",
                    "High: {severity_high}"
                }
                span {
                    class: "badge",
                    style: "background: #d97706; color: #fff;",
                    "Medium: {severity_medium}"
                }
                span {
                    class: "badge",
                    style: "background: #2563eb; color: #fff;",
                    "Low: {severity_low}"
                }
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Actions row
|
||||||
|
div { style: "display: flex; gap: 12px; margin-bottom: 24px;",
|
||||||
|
button {
|
||||||
|
class: "btn btn-primary",
|
||||||
|
onclick: move |_| show_modal.set(true),
|
||||||
|
Icon { icon: BsPlusCircle, width: 14, height: 14 }
|
||||||
|
" New Pentest"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Sessions list
|
||||||
|
div { class: "card",
|
||||||
|
div { class: "card-header", "Recent Pentest Sessions" }
|
||||||
|
match &*sessions.read() {
|
||||||
|
Some(Some(data)) => {
|
||||||
|
let sess_list = &data.data;
|
||||||
|
if sess_list.is_empty() {
|
||||||
|
rsx! {
|
||||||
|
div { style: "padding: 32px; text-align: center; color: var(--text-secondary);",
|
||||||
|
p { "No pentest sessions yet. Start one to begin autonomous security testing." }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
rsx! {
|
||||||
|
div { style: "display: grid; gap: 12px; padding: 16px;",
|
||||||
|
for session in sess_list {
|
||||||
|
{
|
||||||
|
let id = session.get("_id")
|
||||||
|
.and_then(|v| v.get("$oid"))
|
||||||
|
.and_then(|v| v.as_str())
|
||||||
|
.unwrap_or("-").to_string();
|
||||||
|
let target_name = session.get("target_name").and_then(|v| v.as_str()).unwrap_or("Unknown Target").to_string();
|
||||||
|
let status = session.get("status").and_then(|v| v.as_str()).unwrap_or("unknown").to_string();
|
||||||
|
let strategy = session.get("strategy").and_then(|v| v.as_str()).unwrap_or("-").to_string();
|
||||||
|
let findings_count = session.get("findings_count").and_then(|v| v.as_u64()).unwrap_or(0);
|
||||||
|
let tool_count = session.get("tool_invocations").and_then(|v| v.as_u64()).unwrap_or(0);
|
||||||
|
let created_at = session.get("created_at").and_then(|v| v.as_str()).unwrap_or("-").to_string();
|
||||||
|
let status_style = match status.as_str() {
|
||||||
|
"running" => "background: #16a34a; color: #fff;",
|
||||||
|
"completed" => "background: #2563eb; color: #fff;",
|
||||||
|
"failed" => "background: #dc2626; color: #fff;",
|
||||||
|
"paused" => "background: #d97706; color: #fff;",
|
||||||
|
_ => "background: var(--bg-tertiary); color: var(--text-secondary);",
|
||||||
|
};
|
||||||
|
{
|
||||||
|
let is_session_running = status == "running";
|
||||||
|
let stop_id = id.clone();
|
||||||
|
rsx! {
|
||||||
|
div { class: "card", style: "padding: 16px; transition: border-color 0.15s;",
|
||||||
|
Link {
|
||||||
|
to: Route::PentestSessionPage { session_id: id.clone() },
|
||||||
|
style: "text-decoration: none; cursor: pointer; display: block;",
|
||||||
|
div { style: "display: flex; justify-content: space-between; align-items: flex-start;",
|
||||||
|
div {
|
||||||
|
div { style: "font-weight: 600; font-size: 1rem; margin-bottom: 4px; color: var(--text-primary);",
|
||||||
|
"{target_name}"
|
||||||
|
}
|
||||||
|
div { style: "display: flex; gap: 8px; align-items: center; flex-wrap: wrap;",
|
||||||
|
span {
|
||||||
|
class: "badge",
|
||||||
|
style: "{status_style}",
|
||||||
|
"{status}"
|
||||||
|
}
|
||||||
|
span {
|
||||||
|
class: "badge",
|
||||||
|
style: "background: var(--bg-tertiary); color: var(--text-secondary);",
|
||||||
|
"{strategy}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
div { style: "text-align: right; font-size: 0.85rem; color: var(--text-secondary);",
|
||||||
|
div { style: "margin-bottom: 4px;",
|
||||||
|
Icon { icon: BsShieldExclamation, width: 12, height: 12 }
|
||||||
|
" {findings_count} findings"
|
||||||
|
}
|
||||||
|
div { style: "margin-bottom: 4px;",
|
||||||
|
Icon { icon: BsWrench, width: 12, height: 12 }
|
||||||
|
" {tool_count} tools"
|
||||||
|
}
|
||||||
|
div { "{created_at}" }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if is_session_running {
|
||||||
|
div { style: "margin-top: 8px; display: flex; justify-content: flex-end;",
|
||||||
|
button {
|
||||||
|
class: "btn btn-ghost",
|
||||||
|
style: "font-size: 0.8rem; padding: 4px 12px; color: #dc2626; border-color: #dc2626;",
|
||||||
|
onclick: move |e| {
|
||||||
|
e.stop_propagation();
|
||||||
|
e.prevent_default();
|
||||||
|
let sid = stop_id.clone();
|
||||||
|
spawn(async move {
|
||||||
|
let _ = stop_pentest_session(sid).await;
|
||||||
|
sessions.restart();
|
||||||
|
});
|
||||||
|
},
|
||||||
|
Icon { icon: BsStopCircle, width: 12, height: 12 }
|
||||||
|
" Stop"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
Some(None) => rsx! { p { style: "padding: 16px;", "Failed to load sessions." } },
|
||||||
|
None => rsx! { p { style: "padding: 16px;", "Loading..." } },
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// New Pentest Modal
|
||||||
|
if *show_modal.read() {
|
||||||
|
div {
|
||||||
|
style: "position: fixed; inset: 0; background: rgba(0,0,0,0.6); display: flex; align-items: center; justify-content: center; z-index: 1000;",
|
||||||
|
onclick: move |_| show_modal.set(false),
|
||||||
|
div {
|
||||||
|
style: "background: var(--bg-secondary); border: 1px solid var(--border-color); border-radius: 12px; padding: 24px; width: 480px; max-width: 90vw;",
|
||||||
|
onclick: move |e| e.stop_propagation(),
|
||||||
|
h3 { style: "margin: 0 0 16px 0;", "New Pentest Session" }
|
||||||
|
|
||||||
|
// Target selection
|
||||||
|
div { style: "margin-bottom: 12px;",
|
||||||
|
label { style: "display: block; font-size: 0.85rem; color: var(--text-secondary); margin-bottom: 4px;",
|
||||||
|
"Target"
|
||||||
|
}
|
||||||
|
select {
|
||||||
|
class: "chat-input",
|
||||||
|
style: "width: 100%; padding: 8px; resize: none; height: auto;",
|
||||||
|
value: "{new_target_id}",
|
||||||
|
onchange: move |e| new_target_id.set(e.value()),
|
||||||
|
option { value: "", "Select a target..." }
|
||||||
|
match &*targets.read() {
|
||||||
|
Some(Some(data)) => {
|
||||||
|
rsx! {
|
||||||
|
for target in &data.data {
|
||||||
|
{
|
||||||
|
let tid = target.get("_id")
|
||||||
|
.and_then(|v| v.get("$oid"))
|
||||||
|
.and_then(|v| v.as_str())
|
||||||
|
.unwrap_or("").to_string();
|
||||||
|
let tname = target.get("name").and_then(|v| v.as_str()).unwrap_or("Unknown").to_string();
|
||||||
|
let turl = target.get("base_url").and_then(|v| v.as_str()).unwrap_or("").to_string();
|
||||||
|
rsx! {
|
||||||
|
option { value: "{tid}", "{tname} ({turl})" }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
_ => rsx! {},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy selection
|
||||||
|
div { style: "margin-bottom: 12px;",
|
||||||
|
label { style: "display: block; font-size: 0.85rem; color: var(--text-secondary); margin-bottom: 4px;",
|
||||||
|
"Strategy"
|
||||||
|
}
|
||||||
|
select {
|
||||||
|
class: "chat-input",
|
||||||
|
style: "width: 100%; padding: 8px; resize: none; height: auto;",
|
||||||
|
value: "{new_strategy}",
|
||||||
|
onchange: move |e| new_strategy.set(e.value()),
|
||||||
|
option { value: "comprehensive", "Comprehensive" }
|
||||||
|
option { value: "quick", "Quick Scan" }
|
||||||
|
option { value: "owasp_top_10", "OWASP Top 10" }
|
||||||
|
option { value: "api_focused", "API Focused" }
|
||||||
|
option { value: "authentication", "Authentication" }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Initial message
|
||||||
|
div { style: "margin-bottom: 16px;",
|
||||||
|
label { style: "display: block; font-size: 0.85rem; color: var(--text-secondary); margin-bottom: 4px;",
|
||||||
|
"Initial Instructions"
|
||||||
|
}
|
||||||
|
textarea {
|
||||||
|
class: "chat-input",
|
||||||
|
style: "width: 100%; min-height: 80px;",
|
||||||
|
placeholder: "Describe the scope and goals of this pentest...",
|
||||||
|
value: "{new_message}",
|
||||||
|
oninput: move |e| new_message.set(e.value()),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
div { style: "display: flex; justify-content: flex-end; gap: 8px;",
|
||||||
|
button {
|
||||||
|
class: "btn btn-ghost",
|
||||||
|
onclick: move |_| show_modal.set(false),
|
||||||
|
"Cancel"
|
||||||
|
}
|
||||||
|
button {
|
||||||
|
class: "btn btn-primary",
|
||||||
|
disabled: *creating.read() || new_target_id.read().is_empty() || new_message.read().is_empty(),
|
||||||
|
onclick: on_create,
|
||||||
|
if *creating.read() { "Creating..." } else { "Start Pentest" }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
compliance-dashboard/src/pages/pentest_session.rs (new file, 1141 lines)
File diff suppressed because it is too large.
@@ -27,6 +27,10 @@ chromiumoxide = { version = "0.7", features = ["tokio-runtime"], default-feature
 # Docker sandboxing
 bollard = "0.18"
 
+# TLS analysis
+native-tls = "0.2"
+tokio-native-tls = "0.3"
+
 # Serialization
 bson = { version = "2", features = ["chrono-0_4"] }
 url = "2"
@@ -2,5 +2,7 @@ pub mod agents;
 pub mod crawler;
 pub mod orchestrator;
 pub mod recon;
+pub mod tools;
 
 pub use orchestrator::DastOrchestrator;
+pub use tools::ToolRegistry;
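The `pub use tools::ToolRegistry;` export above suggests the tool files that follow are held behind a shared registry. A minimal, self-contained sketch of the pattern those files use (hypothetical `Tool` trait and `EchoTool`; the real trait is `PentestTool` in `compliance_core`): returning `Pin<Box<dyn Future>>` from a trait method keeps the trait object-safe, so a registry can store heterogeneous `Box<dyn Tool>` values even though each tool's work is async.

```rust
use std::future::Future;
use std::pin::Pin;

// Object-safe async-style trait: each method returns a boxed future instead of
// being an `async fn`, so `Box<dyn Tool>` is allowed.
trait Tool {
    fn name(&self) -> &str;
    fn execute<'a>(
        &'a self,
        input: String,
    ) -> Pin<Box<dyn Future<Output = Result<String, String>> + Send + 'a>>;
}

struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &str {
        "echo"
    }
    fn execute<'a>(
        &'a self,
        input: String,
    ) -> Pin<Box<dyn Future<Output = Result<String, String>> + Send + 'a>> {
        Box::pin(async move { Ok(format!("echo: {input}")) })
    }
}

// Tiny executor sufficient for futures that complete without waiting.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
    fn noop_raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            noop_raw()
        }
        fn noop(_: *const ()) {}
        RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
    }
    let waker = unsafe { Waker::from_raw(noop_raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // A registry can hold many tool implementations behind one trait object type.
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(EchoTool)];
    for t in &tools {
        let out = block_on(t.execute("hi".to_string()));
        assert_eq!(out, Ok("echo: hi".to_string()));
        println!("{} -> {:?}", t.name(), out);
    }
}
```

In production this boxing is what crates like `async-trait` generate; writing it by hand, as the tool files below do, avoids the extra dependency.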
compliance-dast/src/tools/api_fuzzer.rs (new file, 146 lines)
@@ -0,0 +1,146 @@
use compliance_core::error::CoreError;
use compliance_core::traits::dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;

use crate::agents::api_fuzzer::ApiFuzzerAgent;

/// PentestTool wrapper around the existing ApiFuzzerAgent.
pub struct ApiFuzzerTool {
    http: reqwest::Client,
    agent: ApiFuzzerAgent,
}

impl ApiFuzzerTool {
    pub fn new(http: reqwest::Client) -> Self {
        let agent = ApiFuzzerAgent::new(http.clone());
        Self { http, agent }
    }

    fn parse_endpoints(input: &serde_json::Value) -> Vec<DiscoveredEndpoint> {
        let mut endpoints = Vec::new();
        if let Some(arr) = input.get("endpoints").and_then(|v| v.as_array()) {
            for ep in arr {
                let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default().to_string();
                let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET").to_string();
                let mut parameters = Vec::new();
                if let Some(params) = ep.get("parameters").and_then(|v| v.as_array()) {
                    for p in params {
                        parameters.push(EndpointParameter {
                            name: p.get("name").and_then(|v| v.as_str()).unwrap_or_default().to_string(),
                            location: p.get("location").and_then(|v| v.as_str()).unwrap_or("query").to_string(),
                            param_type: p.get("param_type").and_then(|v| v.as_str()).map(String::from),
                            example_value: p.get("example_value").and_then(|v| v.as_str()).map(String::from),
                        });
                    }
                }
                endpoints.push(DiscoveredEndpoint {
                    url,
                    method,
                    parameters,
                    content_type: ep.get("content_type").and_then(|v| v.as_str()).map(String::from),
                    requires_auth: ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false),
                });
            }
        }
        endpoints
    }
}

impl PentestTool for ApiFuzzerTool {
    fn name(&self) -> &str {
        "api_fuzzer"
    }

    fn description(&self) -> &str {
        "Fuzzes API endpoints to discover misconfigurations, information disclosure, and hidden \
         endpoints. Probes common sensitive paths and tests for verbose error messages."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "endpoints": {
                    "type": "array",
                    "description": "Known endpoints to fuzz",
                    "items": {
                        "type": "object",
                        "properties": {
                            "url": { "type": "string" },
                            "method": { "type": "string", "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"] },
                            "parameters": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "name": { "type": "string" },
                                        "location": { "type": "string" },
                                        "param_type": { "type": "string" },
                                        "example_value": { "type": "string" }
                                    },
                                    "required": ["name"]
                                }
                            }
                        },
                        "required": ["url"]
                    }
                },
                "base_url": {
                    "type": "string",
                    "description": "Base URL to probe for common sensitive paths (used if no endpoints provided)"
                }
            }
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let mut endpoints = Self::parse_endpoints(&input);

            // If a base_url is provided but no endpoints, create a default endpoint
            if endpoints.is_empty() {
                if let Some(base) = input.get("base_url").and_then(|v| v.as_str()) {
                    endpoints.push(DiscoveredEndpoint {
                        url: base.to_string(),
                        method: "GET".to_string(),
                        parameters: Vec::new(),
                        content_type: None,
                        requires_auth: false,
                    });
                }
            }

            if endpoints.is_empty() {
                return Ok(PentestToolResult {
                    summary: "No endpoints or base_url provided to fuzz.".to_string(),
                    findings: Vec::new(),
                    data: json!({}),
                });
            }

            let dast_context = DastContext {
                endpoints,
                technologies: Vec::new(),
                sast_hints: Vec::new(),
            };

            let findings = self.agent.run(&context.target, &dast_context).await?;
            let count = findings.len();

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} API misconfigurations or information disclosures.")
                } else {
                    "No API misconfigurations detected.".to_string()
                },
                findings,
                data: json!({ "endpoints_tested": dast_context.endpoints.len() }),
            })
        })
    }
}
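`parse_endpoints` above never panics on malformed input because every field access goes through a `get(..).and_then(..).unwrap_or(..)` chain. A self-contained illustration of why that works, using a toy `Value` enum standing in for `serde_json::Value` (the names `Value`, `EchoTool`-style helpers, and the field set here are illustrative, not from the source):

```rust
use std::collections::HashMap;

// Toy stand-in for serde_json::Value: just enough variants to show the pattern.
enum Value {
    Str(String),
    Bool(bool),
    Obj(HashMap<String, Value>),
}

impl Value {
    // Each accessor returns Option, so lookups compose with and_then.
    fn get(&self, key: &str) -> Option<&Value> {
        match self {
            Value::Obj(m) => m.get(key),
            _ => None,
        }
    }
    fn as_str(&self) -> Option<&str> {
        match self {
            Value::Str(s) => Some(s),
            _ => None,
        }
    }
    fn as_bool(&self) -> Option<bool> {
        match self {
            Value::Bool(b) => Some(*b),
            _ => None,
        }
    }
}

fn main() {
    let mut ep = HashMap::new();
    ep.insert("url".to_string(), Value::Str("https://example.com/api".into()));
    // "method" is absent; "requires_auth" deliberately has the wrong type.
    ep.insert("requires_auth".to_string(), Value::Str("yes".into()));
    let _ = Value::Bool(true); // unused variant kept for the as_bool path
    let ep = Value::Obj(ep);

    // A None at any step falls through to the default instead of panicking.
    let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default();
    let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET");
    let requires_auth = ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false);

    assert_eq!(url, "https://example.com/api");
    assert_eq!(method, "GET"); // missing field -> default
    assert!(!requires_auth); // wrong type -> default
    println!("{url} {method} {requires_auth}");
}
```

The trade-off is silent defaulting: a misspelled field name degrades to the fallback rather than surfacing an error, which is acceptable for tool inputs chosen by an LLM but worth remembering when debugging.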
compliance-dast/src/tools/auth_bypass.rs (new file, 130 lines)
@@ -0,0 +1,130 @@
use compliance_core::error::CoreError;
use compliance_core::traits::dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;

use crate::agents::auth_bypass::AuthBypassAgent;

/// PentestTool wrapper around the existing AuthBypassAgent.
pub struct AuthBypassTool {
    http: reqwest::Client,
    agent: AuthBypassAgent,
}

impl AuthBypassTool {
    pub fn new(http: reqwest::Client) -> Self {
        let agent = AuthBypassAgent::new(http.clone());
        Self { http, agent }
    }

    fn parse_endpoints(input: &serde_json::Value) -> Vec<DiscoveredEndpoint> {
        let mut endpoints = Vec::new();
        if let Some(arr) = input.get("endpoints").and_then(|v| v.as_array()) {
            for ep in arr {
                let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default().to_string();
                let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET").to_string();
                let mut parameters = Vec::new();
                if let Some(params) = ep.get("parameters").and_then(|v| v.as_array()) {
                    for p in params {
                        parameters.push(EndpointParameter {
                            name: p.get("name").and_then(|v| v.as_str()).unwrap_or_default().to_string(),
                            location: p.get("location").and_then(|v| v.as_str()).unwrap_or("query").to_string(),
                            param_type: p.get("param_type").and_then(|v| v.as_str()).map(String::from),
                            example_value: p.get("example_value").and_then(|v| v.as_str()).map(String::from),
                        });
                    }
                }
                endpoints.push(DiscoveredEndpoint {
                    url,
                    method,
                    parameters,
                    content_type: ep.get("content_type").and_then(|v| v.as_str()).map(String::from),
                    requires_auth: ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false),
                });
            }
        }
        endpoints
    }
}

impl PentestTool for AuthBypassTool {
    fn name(&self) -> &str {
        "auth_bypass_scanner"
    }

    fn description(&self) -> &str {
        "Tests endpoints for authentication bypass vulnerabilities. Tries accessing protected \
         endpoints without credentials, with manipulated tokens, and with common default credentials."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "endpoints": {
                    "type": "array",
                    "description": "Endpoints to test for authentication bypass",
                    "items": {
                        "type": "object",
                        "properties": {
                            "url": { "type": "string" },
                            "method": { "type": "string", "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"] },
                            "parameters": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "name": { "type": "string" },
                                        "location": { "type": "string" },
                                        "param_type": { "type": "string" },
                                        "example_value": { "type": "string" }
                                    },
                                    "required": ["name"]
                                }
                            },
                            "requires_auth": { "type": "boolean", "description": "Whether this endpoint requires authentication" }
                        },
                        "required": ["url", "method"]
                    }
                }
            },
            "required": ["endpoints"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let endpoints = Self::parse_endpoints(&input);
            if endpoints.is_empty() {
                return Ok(PentestToolResult {
                    summary: "No endpoints provided to test.".to_string(),
                    findings: Vec::new(),
                    data: json!({}),
                });
            }

            let dast_context = DastContext {
                endpoints,
                technologies: Vec::new(),
                sast_hints: Vec::new(),
            };

            let findings = self.agent.run(&context.target, &dast_context).await?;
            let count = findings.len();

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} authentication bypass vulnerabilities.")
                } else {
                    "No authentication bypass vulnerabilities detected.".to_string()
                },
                findings,
                data: json!({ "endpoints_tested": dast_context.endpoints.len() }),
            })
        })
    }
}
compliance-dast/src/tools/console_log_detector.rs (new file, 326 lines)
@@ -0,0 +1,326 @@
use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

/// Tool that detects console.log and similar debug statements in frontend JavaScript.
pub struct ConsoleLogDetectorTool {
    http: reqwest::Client,
}

/// A detected console statement with its context.
#[derive(Debug)]
struct ConsoleMatch {
    pattern: String,
    file_url: String,
    line_snippet: String,
    line_number: Option<usize>,
}

impl ConsoleLogDetectorTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Patterns that indicate debug/logging statements left in production code.
    fn patterns() -> Vec<&'static str> {
        vec![
            "console.log(",
            "console.debug(",
            "console.error(",
            "console.warn(",
            "console.info(",
            "console.trace(",
            "console.dir(",
            "console.table(",
            "debugger;",
            "alert(",
        ]
    }

    /// Extract JavaScript file URLs from an HTML page body.
    fn extract_js_urls(html: &str, base_url: &str) -> Vec<String> {
        let mut urls = Vec::new();
        let base = url::Url::parse(base_url).ok();

        // Simple regex-free extraction of <script src="...">
        let mut search_from = 0;
        while let Some(start) = html[search_from..].find("src=") {
            let abs_start = search_from + start + 4;
            if abs_start >= html.len() {
                break;
            }

            let close = match html.as_bytes().get(abs_start).copied() {
                Some(b'"') => '"',
                Some(b'\'') => '\'',
                _ => {
                    search_from = abs_start + 1;
                    continue;
                }
            };

            let val_start = abs_start + 1;
            if let Some(end) = html[val_start..].find(close) {
                let src = &html[val_start..val_start + end];
                if src.ends_with(".js") || src.contains(".js?") || src.contains("/js/") {
                    let full_url = if src.starts_with("http://") || src.starts_with("https://") {
                        src.to_string()
                    } else if src.starts_with("//") {
                        format!("https:{src}")
                    } else if let Some(ref base) = base {
                        base.join(src).map(|u| u.to_string()).unwrap_or_default()
                    } else {
                        format!("{base_url}/{}", src.trim_start_matches('/'))
                    };
                    if !full_url.is_empty() {
                        urls.push(full_url);
                    }
                }
                search_from = val_start + end + 1;
            } else {
                break;
            }
        }

        urls
    }

    /// Search a JS file's contents for console/debug patterns.
    fn scan_js_content(content: &str, file_url: &str) -> Vec<ConsoleMatch> {
        let mut matches = Vec::new();

        for (line_num, line) in content.lines().enumerate() {
            let trimmed = line.trim();
            // Skip comments (basic heuristic)
            if trimmed.starts_with("//") || trimmed.starts_with('*') || trimmed.starts_with("/*") {
                continue;
            }

            for pattern in Self::patterns() {
                if line.contains(pattern) {
                    // Truncate on a character boundary; byte slicing (`&line[..200]`)
                    // can panic mid-way through a multi-byte UTF-8 character.
                    let snippet = if line.chars().count() > 200 {
                        let cut: String = line.chars().take(200).collect();
                        format!("{cut}...")
                    } else {
                        line.to_string()
                    };
                    matches.push(ConsoleMatch {
                        pattern: pattern.trim_end_matches('(').to_string(),
                        file_url: file_url.to_string(),
                        line_snippet: snippet.trim().to_string(),
                        line_number: Some(line_num + 1),
                    });
                    break; // One match per line is enough
                }
            }
        }

        matches
    }
}

impl PentestTool for ConsoleLogDetectorTool {
    fn name(&self) -> &str {
        "console_log_detector"
    }

    fn description(&self) -> &str {
        "Detects console.log, console.debug, console.error, debugger, and similar debug \
         statements left in production JavaScript. Fetches the HTML page and referenced JS files."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL of the page to check for console.log leakage"
                },
                "additional_js_urls": {
                    "type": "array",
                    "description": "Optional additional JavaScript file URLs to scan",
                    "items": { "type": "string" }
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let additional_js: Vec<String> = input
                .get("additional_js_urls")
                .and_then(|v| v.as_array())
                .map(|arr| {
                    arr.iter()
                        .filter_map(|v| v.as_str().map(String::from))
                        .collect()
                })
                .unwrap_or_default();

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            // Fetch the main page
            let response = self
                .http
                .get(url)
                .send()
                .await
                .map_err(|e| CoreError::Dast(format!("Failed to fetch {url}: {e}")))?;

            let html = response.text().await.unwrap_or_default();

            // Scan inline scripts in the HTML
            let mut all_matches = Vec::new();
            let inline_matches = Self::scan_js_content(&html, url);
            all_matches.extend(inline_matches);

            // Extract JS file URLs from the HTML
            let mut js_urls = Self::extract_js_urls(&html, url);
            js_urls.extend(additional_js);
            // `dedup` only removes consecutive duplicates, so sort first.
            js_urls.sort();
            js_urls.dedup();

            // Fetch and scan each JS file
            for js_url in &js_urls {
                match self.http.get(js_url).send().await {
                    Ok(resp) => {
                        if resp.status().is_success() {
                            let js_content = resp.text().await.unwrap_or_default();
                            // Scan even minified-looking files: minifiers typically strip
                            // console calls, but not always.
                            let file_matches = Self::scan_js_content(&js_content, js_url);
                            all_matches.extend(file_matches);
                        }
                    }
                    Err(_) => continue,
                }
            }

            let mut findings = Vec::new();
            let match_data: Vec<serde_json::Value> = all_matches
                .iter()
                .map(|m| {
                    json!({
                        "pattern": m.pattern,
                        "file": m.file_url,
                        "line": m.line_number,
                        "snippet": m.line_snippet,
                    })
                })
                .collect();

            if !all_matches.is_empty() {
                // Group by file for the finding
                let mut by_file: std::collections::HashMap<&str, Vec<&ConsoleMatch>> =
                    std::collections::HashMap::new();
                for m in &all_matches {
                    by_file.entry(&m.file_url).or_default().push(m);
                }

                for (file_url, matches) in &by_file {
                    let pattern_summary: Vec<String> = matches
                        .iter()
                        .take(5)
                        .map(|m| {
                            format!(
                                "  Line {}: {} - {}",
                                m.line_number.unwrap_or(0),
                                m.pattern,
                                if m.line_snippet.chars().count() > 80 {
                                    let cut: String = m.line_snippet.chars().take(80).collect();
                                    format!("{cut}...")
                                } else {
                                    m.line_snippet.clone()
                                }
                            )
                        })
                        .collect();

                    let evidence = DastEvidence {
                        request_method: "GET".to_string(),
                        request_url: file_url.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: 200,
                        response_headers: None,
                        response_snippet: Some(pattern_summary.join("\n")),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let total = matches.len();
                    let extra = if total > 5 {
                        format!(" (and {} more)", total - 5)
                    } else {
                        String::new()
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::ConsoleLogLeakage,
                        format!("Console/debug statements in {}", file_url),
                        format!(
                            "Found {total} console/debug statements in {file_url}{extra}. \
                             These can leak sensitive information such as API responses, user data, \
                             or internal state to anyone with browser developer tools open."
                        ),
                        Severity::Low,
                        file_url.to_string(),
                        "GET".to_string(),
                    );
                    finding.cwe = Some("CWE-532".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Remove console.log/debug/error statements from production code. \
                         Use a build step (e.g., babel plugin, terser) to strip console calls \
                         during the production build."
                            .to_string(),
                    );
                    findings.push(finding);
                }
            }

            let total_matches = all_matches.len();
            let count = findings.len();
            info!(url, js_files = js_urls.len(), total_matches, "Console log detection complete");

            Ok(PentestToolResult {
                summary: if total_matches > 0 {
                    format!("Found {total_matches} console/debug statements across {count} files.")
                } else {
                    format!(
                        "No console/debug statements found in HTML or {} JS files.",
                        js_urls.len()
                    )
                },
                findings,
                data: json!({
                    "total_matches": total_matches,
                    "js_files_scanned": js_urls.len(),
                    "matches": match_data,
                }),
            })
        })
    }
}
401 compliance-dast/src/tools/cookie_analyzer.rs Normal file
@@ -0,0 +1,401 @@
use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

/// Tool that inspects cookies set by a target for security issues.
pub struct CookieAnalyzerTool {
    http: reqwest::Client,
}

/// Parsed attributes from a Set-Cookie header.
#[derive(Debug)]
struct ParsedCookie {
    name: String,
    value: String,
    secure: bool,
    http_only: bool,
    same_site: Option<String>,
    domain: Option<String>,
    path: Option<String>,
    raw: String,
}

impl CookieAnalyzerTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Parse a Set-Cookie header value into a structured representation.
    fn parse_set_cookie(header: &str) -> ParsedCookie {
        let raw = header.to_string();
        let parts: Vec<&str> = header.split(';').collect();

        let (name, value) = if let Some(kv) = parts.first() {
            let mut kv_split = kv.splitn(2, '=');
            let k = kv_split.next().unwrap_or("").trim().to_string();
            let v = kv_split.next().unwrap_or("").trim().to_string();
            (k, v)
        } else {
            (String::new(), String::new())
        };

        let mut secure = false;
        let mut http_only = false;
        let mut same_site = None;
        let mut domain = None;
        let mut path = None;

        for part in parts.iter().skip(1) {
            let trimmed = part.trim().to_lowercase();
            if trimmed == "secure" {
                secure = true;
            } else if trimmed == "httponly" {
                http_only = true;
            } else if let Some(ss) = trimmed.strip_prefix("samesite=") {
                same_site = Some(ss.trim().to_string());
            } else if let Some(d) = trimmed.strip_prefix("domain=") {
                domain = Some(d.trim().to_string());
            } else if let Some(p) = trimmed.strip_prefix("path=") {
                path = Some(p.trim().to_string());
            }
        }

        ParsedCookie {
            name,
            value,
            secure,
            http_only,
            same_site,
            domain,
            path,
            raw,
        }
    }

    /// Heuristic: does this cookie name suggest it's a session / auth cookie?
    fn is_sensitive_cookie(name: &str) -> bool {
        let lower = name.to_lowercase();
        lower.contains("session")
            || lower.contains("sess")
            || lower.contains("token")
            || lower.contains("auth")
            || lower.contains("jwt")
            || lower.contains("csrf")
            || lower.contains("sid")
            || lower == "connect.sid"
            || lower == "phpsessid"
            || lower == "jsessionid"
            || lower == "asp.net_sessionid"
    }
}

impl PentestTool for CookieAnalyzerTool {
    fn name(&self) -> &str {
        "cookie_analyzer"
    }

    fn description(&self) -> &str {
        "Analyzes cookies set by a target URL. Checks for Secure, HttpOnly, SameSite attributes \
         and overly broad Domain/Path settings. Focuses on session and authentication cookies."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL to fetch and analyze cookies from"
                },
                "login_url": {
                    "type": "string",
                    "description": "Optional login URL to also check (may set auth cookies)"
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let login_url = input.get("login_url").and_then(|v| v.as_str());

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let mut findings = Vec::new();
            let mut cookie_data = Vec::new();

            // Collect Set-Cookie headers from the main URL and optional login URL
            let urls_to_check: Vec<&str> = std::iter::once(url)
                .chain(login_url.into_iter())
                .collect();

            for check_url in &urls_to_check {
                // Use a client that does NOT follow redirects so we catch cookies on redirect responses
                let no_redirect_client = reqwest::Client::builder()
                    .danger_accept_invalid_certs(true)
                    .redirect(reqwest::redirect::Policy::none())
                    .timeout(std::time::Duration::from_secs(15))
                    .build()
                    .map_err(|e| CoreError::Dast(format!("Client build error: {e}")))?;

                let response = match no_redirect_client.get(*check_url).send().await {
                    Ok(r) => r,
                    Err(_) => {
                        // Try with the main client that follows redirects
                        match self.http.get(*check_url).send().await {
                            Ok(r) => r,
                            Err(_) => continue,
                        }
                    }
                };

                let status = response.status().as_u16();
                let set_cookie_headers: Vec<String> = response
                    .headers()
                    .get_all("set-cookie")
                    .iter()
                    .filter_map(|v| v.to_str().ok().map(String::from))
                    .collect();

                for raw_cookie in &set_cookie_headers {
                    let cookie = Self::parse_set_cookie(raw_cookie);
                    let is_sensitive = Self::is_sensitive_cookie(&cookie.name);
                    let is_https = check_url.starts_with("https://");

                    let cookie_info = json!({
                        "name": cookie.name,
                        "secure": cookie.secure,
                        "http_only": cookie.http_only,
                        "same_site": cookie.same_site,
                        "domain": cookie.domain,
                        "path": cookie.path,
                        "is_sensitive": is_sensitive,
                        "url": check_url,
                    });
                    cookie_data.push(cookie_info);

                    // Check: missing Secure flag
                    if !cookie.secure && (is_https || is_sensitive) {
                        let severity = if is_sensitive {
                            Severity::High
                        } else {
                            Severity::Medium
                        };

                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: check_url.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: status,
                            response_headers: None,
                            response_snippet: Some(cookie.raw.clone()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::CookieSecurity,
                            format!("Cookie '{}' missing Secure flag", cookie.name),
                            format!(
                                "The cookie '{}' does not have the Secure attribute set. \
                                 Without this flag, the cookie can be transmitted over unencrypted HTTP connections.",
                                cookie.name
                            ),
                            severity,
                            check_url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-614".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Add the 'Secure' attribute to the Set-Cookie header to ensure the \
                             cookie is only sent over HTTPS connections."
                                .to_string(),
                        );
                        findings.push(finding);
                    }

                    // Check: missing HttpOnly flag on sensitive cookies
                    if !cookie.http_only && is_sensitive {
                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: check_url.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: status,
                            response_headers: None,
                            response_snippet: Some(cookie.raw.clone()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::CookieSecurity,
                            format!("Cookie '{}' missing HttpOnly flag", cookie.name),
                            format!(
                                "The session/auth cookie '{}' does not have the HttpOnly attribute. \
                                 This makes it accessible to JavaScript, increasing the impact of XSS attacks.",
                                cookie.name
                            ),
                            Severity::High,
                            check_url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-1004".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Add the 'HttpOnly' attribute to the Set-Cookie header to prevent \
                             JavaScript access to the cookie."
                                .to_string(),
                        );
                        findings.push(finding);
                    }

                    // Check: missing or weak SameSite
                    if is_sensitive {
                        let weak_same_site = match &cookie.same_site {
                            None => true,
                            Some(ss) => ss == "none",
                        };

                        if weak_same_site {
                            let evidence = DastEvidence {
                                request_method: "GET".to_string(),
                                request_url: check_url.to_string(),
                                request_headers: None,
                                request_body: None,
                                response_status: status,
                                response_headers: None,
                                response_snippet: Some(cookie.raw.clone()),
                                screenshot_path: None,
                                payload: None,
                                response_time_ms: None,
                            };

                            let desc = if cookie.same_site.is_none() {
                                format!(
                                    "The session/auth cookie '{}' does not have a SameSite attribute. \
                                     This may allow cross-site request forgery (CSRF) attacks.",
                                    cookie.name
                                )
                            } else {
                                format!(
                                    "The session/auth cookie '{}' has SameSite=None, which allows it \
                                     to be sent in cross-site requests, enabling CSRF attacks.",
                                    cookie.name
                                )
                            };

                            let mut finding = DastFinding::new(
                                String::new(),
                                target_id.clone(),
                                DastVulnType::CookieSecurity,
                                format!("Cookie '{}' missing or weak SameSite", cookie.name),
                                desc,
                                Severity::Medium,
                                check_url.to_string(),
                                "GET".to_string(),
                            );
                            finding.cwe = Some("CWE-1275".to_string());
                            finding.evidence = vec![evidence];
                            finding.remediation = Some(
                                "Set 'SameSite=Strict' or 'SameSite=Lax' on session/auth cookies \
                                 to prevent cross-site request inclusion."
                                    .to_string(),
                            );
                            findings.push(finding);
                        }
                    }

                    // Check: overly broad domain
                    if let Some(ref domain) = cookie.domain {
                        // A domain starting with a dot applies to all subdomains
                        let dot_domain = domain.starts_with('.');
                        // Count domain parts - if only 2 parts (e.g., .example.com), it's broad
                        let parts: Vec<&str> = domain.trim_start_matches('.').split('.').collect();
                        if dot_domain && parts.len() <= 2 && is_sensitive {
                            let evidence = DastEvidence {
                                request_method: "GET".to_string(),
                                request_url: check_url.to_string(),
                                request_headers: None,
                                request_body: None,
                                response_status: status,
                                response_headers: None,
                                response_snippet: Some(cookie.raw.clone()),
                                screenshot_path: None,
                                payload: None,
                                response_time_ms: None,
                            };

                            let mut finding = DastFinding::new(
                                String::new(),
                                target_id.clone(),
                                DastVulnType::CookieSecurity,
                                format!("Cookie '{}' has overly broad domain", cookie.name),
                                format!(
                                    "The cookie '{}' is scoped to domain '{}' which includes all \
                                     subdomains. If any subdomain is compromised, the attacker can \
                                     access this cookie.",
                                    cookie.name, domain
                                ),
                                Severity::Low,
                                check_url.to_string(),
                                "GET".to_string(),
                            );
                            finding.cwe = Some("CWE-1004".to_string());
                            finding.evidence = vec![evidence];
                            finding.remediation = Some(
                                "Restrict the cookie domain to the specific subdomain that needs it \
                                 rather than the entire parent domain."
                                    .to_string(),
                            );
                            findings.push(finding);
                        }
                    }
                }
            }

            let count = findings.len();
            info!(url, findings = count, "Cookie analysis complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} cookie security issues.")
                } else if cookie_data.is_empty() {
                    "No cookies were set by the target.".to_string()
                } else {
                    "All cookies have proper security attributes.".to_string()
                },
                findings,
                data: json!({
                    "cookies": cookie_data,
                    "total_cookies": cookie_data.len(),
                }),
            })
        })
    }
}
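The Set-Cookie attribute parsing above is easy to sanity-check in isolation. Below is a minimal standalone sketch of the same parsing logic, reimplemented outside the crate (the `CookieFlags`/`parse_flags` names are illustrative, not part of this PR); note that, like the PR's parser, it lowercases the whole attribute segment, so `SameSite` values come back lowercased:

```rust
// Standalone sketch mirroring CookieAnalyzerTool::parse_set_cookie
// for the attributes the analyzer's checks actually use.
#[derive(Debug, Default)]
struct CookieFlags {
    name: String,
    secure: bool,
    http_only: bool,
    same_site: Option<String>,
}

fn parse_flags(header: &str) -> CookieFlags {
    let mut parts = header.split(';');
    // First segment is "name=value"; only the name matters here.
    let name = parts
        .next()
        .and_then(|kv| kv.splitn(2, '=').next())
        .unwrap_or("")
        .trim()
        .to_string();
    let mut flags = CookieFlags { name, ..Default::default() };
    for part in parts {
        let t = part.trim().to_lowercase();
        if t == "secure" {
            flags.secure = true;
        } else if t == "httponly" {
            flags.http_only = true;
        } else if let Some(ss) = t.strip_prefix("samesite=") {
            flags.same_site = Some(ss.trim().to_string());
        }
    }
    flags
}

fn main() {
    let c = parse_flags("sessionid=abc123; Path=/; Secure; HttpOnly; SameSite=Lax");
    assert_eq!(c.name, "sessionid");
    assert!(c.secure && c.http_only);
    assert_eq!(c.same_site.as_deref(), Some("lax")); // value lowercased with the attribute

    let weak = parse_flags("tracking=1");
    assert!(!weak.secure && !weak.http_only && weak.same_site.is_none());
    println!("ok");
}
```

A cookie like `tracking=1` with no attributes would trip the analyzer's missing-Secure check on an HTTPS URL but none of the sensitive-cookie checks, since the name heuristic does not match.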
410 compliance-dast/src/tools/cors_checker.rs Normal file
@@ -0,0 +1,410 @@
use std::collections::HashMap;

use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::{info, warn};

/// Tool that checks CORS configuration for security issues.
pub struct CorsCheckerTool {
    http: reqwest::Client,
}

impl CorsCheckerTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Origins to test against the target.
    fn test_origins(target_host: &str) -> Vec<(&'static str, String)> {
        vec![
            ("null_origin", "null".to_string()),
            ("evil_domain", "https://evil.com".to_string()),
            (
                "subdomain_spoof",
                format!("https://{target_host}.evil.com"),
            ),
            (
                "prefix_spoof",
                format!("https://evil-{target_host}"),
            ),
            ("http_downgrade", format!("http://{target_host}")),
        ]
    }
}

impl PentestTool for CorsCheckerTool {
    fn name(&self) -> &str {
        "cors_checker"
    }

    fn description(&self) -> &str {
        "Checks CORS configuration by sending requests with various Origin headers. Tests for \
         wildcard origins, reflected origins, null origin acceptance, and dangerous \
         Access-Control-Allow-Credentials combinations."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL to test CORS configuration on"
                },
                "additional_origins": {
                    "type": "array",
                    "description": "Optional additional origin values to test",
                    "items": { "type": "string" }
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let additional_origins: Vec<String> = input
                .get("additional_origins")
                .and_then(|v| v.as_array())
                .map(|arr| {
                    arr.iter()
                        .filter_map(|v| v.as_str().map(String::from))
                        .collect()
                })
                .unwrap_or_default();

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let target_host = url::Url::parse(url)
                .ok()
                .and_then(|u| u.host_str().map(String::from))
                .unwrap_or_else(|| url.to_string());

            let mut findings = Vec::new();
            let mut cors_data: Vec<serde_json::Value> = Vec::new();

            // First, send a request without Origin to get baseline
            let baseline = self
                .http
                .get(url)
                .send()
                .await
                .map_err(|e| CoreError::Dast(format!("Failed to fetch {url}: {e}")))?;

            let baseline_acao = baseline
                .headers()
                .get("access-control-allow-origin")
                .and_then(|v| v.to_str().ok())
                .map(String::from);

            cors_data.push(json!({
                "origin": null,
                "acao": baseline_acao,
            }));

            // Check for wildcard + credentials (dangerous combo)
            if let Some(ref acao) = baseline_acao {
                if acao == "*" {
                    let acac = baseline
                        .headers()
                        .get("access-control-allow-credentials")
                        .and_then(|v| v.to_str().ok())
                        .unwrap_or("");

                    if acac.to_lowercase() == "true" {
                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: url.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: baseline.status().as_u16(),
                            response_headers: None,
                            response_snippet: Some(
                                "Access-Control-Allow-Origin: *\nAccess-Control-Allow-Credentials: true"
                                    .to_string(),
                            ),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::CorsMisconfiguration,
                            "CORS wildcard with credentials".to_string(),
                            format!(
                                "The endpoint {url} returns Access-Control-Allow-Origin: * with \
                                 Access-Control-Allow-Credentials: true. While browsers should block this \
                                 combination, it indicates a serious CORS misconfiguration."
                            ),
                            Severity::High,
                            url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-942".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Never combine Access-Control-Allow-Origin: * with \
                             Access-Control-Allow-Credentials: true. Specify explicit allowed origins."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
            }

            // Test with various Origin headers
            let mut test_origins = Self::test_origins(&target_host);
            for origin in &additional_origins {
                test_origins.push(("custom", origin.clone()));
            }

            for (test_name, origin) in &test_origins {
                let resp = match self
                    .http
                    .get(url)
                    .header("Origin", origin.as_str())
                    .send()
                    .await
                {
                    Ok(r) => r,
                    Err(_) => continue,
                };

                let status = resp.status().as_u16();
                let acao = resp
                    .headers()
                    .get("access-control-allow-origin")
                    .and_then(|v| v.to_str().ok())
                    .map(String::from);

                let acac = resp
                    .headers()
                    .get("access-control-allow-credentials")
                    .and_then(|v| v.to_str().ok())
                    .map(String::from);

                let acam = resp
                    .headers()
                    .get("access-control-allow-methods")
                    .and_then(|v| v.to_str().ok())
                    .map(String::from);

                cors_data.push(json!({
                    "test": test_name,
                    "origin": origin,
                    "acao": acao,
                    "acac": acac,
                    "acam": acam,
                    "status": status,
                }));

                // Check if the origin was reflected back
                if let Some(ref acao_val) = acao {
                    let origin_reflected = acao_val == origin;
                    let credentials_allowed = acac
                        .as_ref()
                        .map(|v| v.to_lowercase() == "true")
                        .unwrap_or(false);

                    if origin_reflected && *test_name != "http_downgrade" {
                        let severity = if credentials_allowed {
                            Severity::Critical
                        } else {
                            Severity::High
                        };

                        let resp_headers: HashMap<String, String> = resp
                            .headers()
                            .iter()
                            .map(|(k, v)| (k.to_string(), v.to_str().unwrap_or("").to_string()))
                            .collect();

                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: url.to_string(),
                            request_headers: Some(
                                [("Origin".to_string(), origin.clone())]
                                    .into_iter()
                                    .collect(),
                            ),
                            request_body: None,
                            response_status: status,
                            response_headers: Some(resp_headers),
                            response_snippet: Some(format!(
                                "Origin: {origin}\nAccess-Control-Allow-Origin: {acao_val}\n\
                                 Access-Control-Allow-Credentials: {}",
                                acac.as_deref().unwrap_or("not set")
                            )),
                            screenshot_path: None,
                            payload: Some(origin.clone()),
                            response_time_ms: None,
                        };

                        let title = match *test_name {
                            "null_origin" => "CORS accepts null origin".to_string(),
                            "evil_domain" => "CORS reflects arbitrary origin".to_string(),
                            "subdomain_spoof" => "CORS vulnerable to subdomain spoofing".to_string(),
                            "prefix_spoof" => "CORS vulnerable to prefix spoofing".to_string(),
                            _ => format!("CORS reflects untrusted origin ({test_name})"),
                        };

                        let cred_note = if credentials_allowed {
                            " Combined with Access-Control-Allow-Credentials: true, this allows \
                             the attacker to steal authenticated data."
                        } else {
                            ""
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::CorsMisconfiguration,
                            title,
                            format!(
                                "The endpoint {url} reflects the Origin header '{origin}' back in \
                                 Access-Control-Allow-Origin, allowing cross-origin requests from \
                                 untrusted domains.{cred_note}"
                            ),
                            severity,
                            url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-942".to_string());
                        finding.exploitable = credentials_allowed;
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Validate the Origin header against a whitelist of trusted origins. \
                             Do not reflect the Origin header value directly. Use specific allowed \
                             origins instead of wildcards."
                                .to_string(),
                        );
                        findings.push(finding);

                        warn!(
                            url,
                            test_name,
                            origin,
                            credentials = credentials_allowed,
                            "CORS misconfiguration detected"
                        );
                    }

                    // Special case: HTTP downgrade
                    if *test_name == "http_downgrade" && origin_reflected && credentials_allowed {
                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: url.to_string(),
                            request_headers: Some(
                                [("Origin".to_string(), origin.clone())]
                                    .into_iter()
                                    .collect(),
                            ),
                            request_body: None,
                            response_status: status,
                            response_headers: None,
                            response_snippet: Some(format!(
                                "HTTP origin accepted: {origin} -> ACAO: {acao_val}"
                            )),
                            screenshot_path: None,
                            payload: Some(origin.clone()),
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::CorsMisconfiguration,
                            "CORS allows HTTP origin with credentials".to_string(),
                            format!(
                                "The HTTPS endpoint {url} accepts the HTTP origin {origin} with \
                                 credentials. An attacker performing a man-in-the-middle attack on \
                                 the HTTP version could steal authenticated data."
                            ),
                            Severity::High,
                            url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-942".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Do not accept HTTP origins for HTTPS endpoints. Ensure CORS \
                             origin validation enforces the https:// scheme."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
            }

            // Also send a preflight OPTIONS request
            if let Ok(resp) = self
                .http
                .request(reqwest::Method::OPTIONS, url)
                .header("Origin", "https://evil.com")
                .header("Access-Control-Request-Method", "POST")
                .header("Access-Control-Request-Headers", "Authorization, Content-Type")
                .send()
                .await
            {
                let acam = resp
                    .headers()
                    .get("access-control-allow-methods")
                    .and_then(|v| v.to_str().ok())
                    .map(String::from);

                let acah = resp
                    .headers()
                    .get("access-control-allow-headers")
                    .and_then(|v| v.to_str().ok())
                    .map(String::from);

                cors_data.push(json!({
                    "test": "preflight",
                    "status": resp.status().as_u16(),
                    "allow_methods": acam,
                    "allow_headers": acah,
                }));
            }

            let count = findings.len();
            info!(url, findings = count, "CORS check complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} CORS misconfiguration issues for {url}.")
                } else {
                    format!("CORS configuration appears secure for {url}.")
                },
                findings,
                data: json!({
                    "tests": cors_data,
                }),
            })
        })
    }
}
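The remediation text in the CORS checker recommends validating the Origin header against a whitelist instead of reflecting it. A minimal sketch of what that server-side validation looks like (the `is_allowed_origin` name is illustrative, not part of this PR) shows why exact-match comparison defeats all three spoofing tests the tool sends:

```rust
// Sketch of strict Origin validation, per the cors_checker remediation:
// compare the full origin string (scheme included) against an explicit
// allowlist rather than echoing the header back.
fn is_allowed_origin(origin: &str, allowlist: &[&str]) -> bool {
    // Exact match only: rejects suffix/prefix spoofs such as
    // "https://app.example.com.evil.com" or "https://evil-app.example.com",
    // and rejects http:// downgrades because the scheme is part of the match.
    allowlist.iter().any(|allowed| *allowed == origin)
}

fn main() {
    let allow = ["https://app.example.com", "https://admin.example.com"];
    assert!(is_allowed_origin("https://app.example.com", &allow));
    assert!(!is_allowed_origin("https://app.example.com.evil.com", &allow)); // subdomain spoof
    assert!(!is_allowed_origin("https://evil-app.example.com", &allow)); // prefix spoof
    assert!(!is_allowed_origin("http://app.example.com", &allow)); // scheme downgrade
    assert!(!is_allowed_origin("null", &allow)); // null origin
    println!("ok");
}
```

Substring or regex checks against the host are exactly what the `subdomain_spoof` and `prefix_spoof` probes are designed to catch.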
447 compliance-dast/src/tools/csp_analyzer.rs Normal file
@@ -0,0 +1,447 @@
use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

/// Tool that analyzes Content-Security-Policy headers.
pub struct CspAnalyzerTool {
    http: reqwest::Client,
}

/// A parsed CSP directive.
#[derive(Debug)]
struct CspDirective {
    name: String,
    values: Vec<String>,
}

impl CspAnalyzerTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Parse a CSP header string into directives.
    fn parse_csp(csp: &str) -> Vec<CspDirective> {
        let mut directives = Vec::new();
        for part in csp.split(';') {
            let trimmed = part.trim();
            if trimmed.is_empty() {
                continue;
            }
            let tokens: Vec<&str> = trimmed.split_whitespace().collect();
            if let Some((name, values)) = tokens.split_first() {
                directives.push(CspDirective {
                    name: name.to_lowercase(),
                    values: values.iter().map(|v| v.to_string()).collect(),
                });
            }
        }
        directives
    }

    /// Check a CSP for common issues and return findings.
    fn analyze_directives(
        directives: &[CspDirective],
        url: &str,
        target_id: &str,
        status: u16,
        _csp_raw: &str,
    ) -> Vec<DastFinding> {
        let mut findings = Vec::new();

        let make_evidence = |snippet: String| DastEvidence {
            request_method: "GET".to_string(),
            request_url: url.to_string(),
            request_headers: None,
            request_body: None,
            response_status: status,
            response_headers: None,
            response_snippet: Some(snippet),
            screenshot_path: None,
            payload: None,
            response_time_ms: None,
        };

        // Check for unsafe-inline in script-src
        for d in directives {
            if (d.name == "script-src" || d.name == "default-src")
                && d.values.iter().any(|v| v == "'unsafe-inline'")
            {
                let evidence = make_evidence(format!("{}: {}", d.name, d.values.join(" ")));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP allows 'unsafe-inline' in {}", d.name),
                    format!(
                        "The Content-Security-Policy directive '{}' includes 'unsafe-inline', \
                         which defeats the purpose of CSP by allowing inline scripts that \
                         could be exploited via XSS.",
                        d.name
                    ),
                    Severity::High,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-79".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(
                    "Remove 'unsafe-inline' from script-src. Use nonces or hashes for \
                     legitimate inline scripts instead."
                        .to_string(),
                );
                findings.push(finding);
            }

            // Check for unsafe-eval
            if (d.name == "script-src" || d.name == "default-src")
                && d.values.iter().any(|v| v == "'unsafe-eval'")
            {
                let evidence = make_evidence(format!("{}: {}", d.name, d.values.join(" ")));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP allows 'unsafe-eval' in {}", d.name),
                    format!(
                        "The Content-Security-Policy directive '{}' includes 'unsafe-eval', \
                         which allows the use of eval() and similar dynamic code execution \
                         that can be exploited via XSS.",
                        d.name
                    ),
                    Severity::Medium,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-79".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(
                    "Remove 'unsafe-eval' from script-src. Refactor code to avoid eval(), \
                     Function(), and similar constructs."
                        .to_string(),
                );
                findings.push(finding);
            }

            // Check for wildcard sources
            if d.values.iter().any(|v| v == "*") {
                let evidence = make_evidence(format!("{}: {}", d.name, d.values.join(" ")));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP wildcard source in {}", d.name),
                    format!(
                        "The Content-Security-Policy directive '{}' uses a wildcard '*' source, \
                         which allows loading resources from any origin and largely negates CSP protection.",
                        d.name
                    ),
                    Severity::Medium,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-16".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(format!(
                    "Replace the wildcard '*' in {} with specific allowed origins.",
                    d.name
                ));
                findings.push(finding);
            }

            // Check for http: sources (non-HTTPS)
            if d.values.iter().any(|v| v == "http:") {
                let evidence = make_evidence(format!("{}: {}", d.name, d.values.join(" ")));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP allows HTTP sources in {}", d.name),
                    format!(
                        "The Content-Security-Policy directive '{}' allows loading resources \
                         over unencrypted HTTP, which can be exploited via man-in-the-middle attacks.",
                        d.name
                    ),
                    Severity::Medium,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-319".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(format!(
                    "Replace 'http:' with 'https:' in {} to enforce encrypted resource loading.",
                    d.name
                ));
                findings.push(finding);
            }

            // Check for data: in script-src (can be used to bypass CSP)
            if (d.name == "script-src" || d.name == "default-src")
                && d.values.iter().any(|v| v == "data:")
            {
                let evidence = make_evidence(format!("{}: {}", d.name, d.values.join(" ")));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP allows data: URIs in {}", d.name),
                    format!(
                        "The Content-Security-Policy directive '{}' allows 'data:' URIs, \
                         which can be used to bypass CSP and execute arbitrary scripts.",
                        d.name
                    ),
                    Severity::High,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-79".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(format!(
                    "Remove 'data:' from {}. If data URIs are needed, restrict them to \
                     non-executable content types only (e.g., img-src).",
                    d.name
                ));
                findings.push(finding);
            }
        }

        // Check for missing important directives
        let directive_names: Vec<&str> = directives.iter().map(|d| d.name.as_str()).collect();
        let has_default_src = directive_names.contains(&"default-src");

        let important_directives = [
            ("script-src", "Controls which scripts can execute"),
            ("object-src", "Controls plugins like Flash"),
            ("base-uri", "Controls the base URL for relative URLs"),
            ("form-action", "Controls where forms can submit to"),
            ("frame-ancestors", "Controls who can embed this page in iframes"),
        ];

        for (dir_name, desc) in &important_directives {
            if !directive_names.contains(dir_name)
                && !(has_default_src
                    && *dir_name != "frame-ancestors"
                    && *dir_name != "base-uri"
                    && *dir_name != "form-action")
            {
                let evidence = make_evidence(format!("CSP missing directive: {dir_name}"));
                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.to_string(),
                    DastVulnType::CspIssue,
                    format!("CSP missing '{}' directive", dir_name),
                    format!(
                        "The Content-Security-Policy is missing the '{}' directive. {}. \
                         Without this directive{}, the browser may fall back to less restrictive defaults.",
                        dir_name,
                        desc,
                        if has_default_src
                            && (*dir_name == "frame-ancestors"
                                || *dir_name == "base-uri"
                                || *dir_name == "form-action")
                        {
                            " (not covered by default-src)"
                        } else {
                            ""
                        }
                    ),
                    Severity::Low,
                    url.to_string(),
                    "GET".to_string(),
                );
                finding.cwe = Some("CWE-16".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(format!(
                    "Add '{}: 'none'' or an appropriate restrictive value to your CSP.",
                    dir_name
                ));
                findings.push(finding);
            }
        }

        findings
    }
}

impl PentestTool for CspAnalyzerTool {
    fn name(&self) -> &str {
        "csp_analyzer"
    }

    fn description(&self) -> &str {
        "Analyzes Content-Security-Policy headers. Checks for unsafe-inline, unsafe-eval, \
         wildcard sources, data: URIs in script-src, missing directives, and other CSP weaknesses."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL to fetch and analyze CSP from"
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>>
    {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let response = self
                .http
                .get(url)
                .send()
                .await
                .map_err(|e| CoreError::Dast(format!("Failed to fetch {url}: {e}")))?;

            let status = response.status().as_u16();

            // Check for CSP header
            let csp_header = response
                .headers()
                .get("content-security-policy")
                .and_then(|v| v.to_str().ok())
                .map(String::from);

            // Also check for report-only variant
            let csp_report_only = response
                .headers()
                .get("content-security-policy-report-only")
                .and_then(|v| v.to_str().ok())
                .map(String::from);

            let mut findings = Vec::new();
            let mut csp_data = json!({});

            match &csp_header {
                Some(csp) => {
                    let directives = Self::parse_csp(csp);
                    let directive_map: serde_json::Value = directives
                        .iter()
                        .map(|d| (d.name.clone(), json!(d.values)))
                        .collect::<serde_json::Map<String, serde_json::Value>>()
                        .into();

                    csp_data["csp_header"] = json!(csp);
                    csp_data["directives"] = directive_map;

                    findings.extend(Self::analyze_directives(
                        &directives,
                        url,
                        &target_id,
                        status,
                        csp,
                    ));
                }
                None => {
                    csp_data["csp_header"] = json!(null);

                    let evidence = DastEvidence {
                        request_method: "GET".to_string(),
                        request_url: url.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: status,
                        response_headers: None,
                        response_snippet: Some("Content-Security-Policy header is missing".to_string()),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::CspIssue,
                        "Missing Content-Security-Policy header".to_string(),
                        format!(
                            "No Content-Security-Policy header is present on {url}. \
                             Without CSP, the browser has no instructions on which sources are \
                             trusted, making XSS exploitation much easier."
                        ),
                        Severity::Medium,
                        url.to_string(),
                        "GET".to_string(),
                    );
                    finding.cwe = Some("CWE-16".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Add a Content-Security-Policy header. Start with a restrictive policy like \
                         \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:; \
                         object-src 'none'; frame-ancestors 'none'; base-uri 'self'\"."
                            .to_string(),
                    );
                    findings.push(finding);
                }
            }

            if let Some(ref report_only) = csp_report_only {
                csp_data["csp_report_only"] = json!(report_only);
                // If ONLY report-only exists (no enforcing CSP), warn
                if csp_header.is_none() {
                    let evidence = DastEvidence {
                        request_method: "GET".to_string(),
                        request_url: url.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: status,
                        response_headers: None,
                        response_snippet: Some(format!(
                            "Content-Security-Policy-Report-Only: {}",
                            report_only
                        )),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::CspIssue,
                        "CSP is report-only, not enforcing".to_string(),
                        "A Content-Security-Policy-Report-Only header is present but no enforcing \
                         Content-Security-Policy header exists. Report-only mode only logs violations \
                         but does not block them."
                            .to_string(),
                        Severity::Low,
                        url.to_string(),
                        "GET".to_string(),
                    );
                    finding.cwe = Some("CWE-16".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Once you have verified the CSP policy works correctly in report-only mode, \
                         deploy it as an enforcing Content-Security-Policy header."
                            .to_string(),
                    );
                    findings.push(finding);
                }
            }

            let count = findings.len();
            info!(url, findings = count, "CSP analysis complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} CSP issues for {url}.")
                } else {
                    format!("Content-Security-Policy looks good for {url}.")
                },
                findings,
                data: csp_data,
            })
        })
    }
}
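The tokenization in `parse_csp` (split the header on `;`, then treat the first whitespace-separated token of each part as the lowercased directive name and the rest as its source list) can be sketched standalone. The `split_csp` helper below is a hypothetical illustration of that approach, not the crate's API:

```rust
// Minimal sketch of CSP tokenization: one (name, sources) pair per
// ';'-separated directive; empty parts are skipped via the `?` in
// filter_map, mirroring the tool's parse_csp.
fn split_csp(csp: &str) -> Vec<(String, Vec<String>)> {
    csp.split(';')
        .filter_map(|part| {
            let tokens: Vec<&str> = part.split_whitespace().collect();
            let (name, values) = tokens.split_first()?;
            Some((
                name.to_lowercase(),
                values.iter().map(|v| v.to_string()).collect(),
            ))
        })
        .collect()
}

fn main() {
    let parsed = split_csp("default-src 'self'; script-src 'self' 'unsafe-inline'");
    assert_eq!(parsed[0].0, "default-src");
    assert_eq!(parsed[1].1, vec!["'self'", "'unsafe-inline'"]);
    println!("{parsed:?}");
}
```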
401 compliance-dast/src/tools/dmarc_checker.rs Normal file
@@ -0,0 +1,401 @@
use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::{info, warn};

/// Tool that checks email security configuration (DMARC and SPF records).
pub struct DmarcCheckerTool;

impl DmarcCheckerTool {
    pub fn new() -> Self {
        Self
    }

    /// Query TXT records for a given name using `dig`.
    async fn query_txt(name: &str) -> Result<Vec<String>, CoreError> {
        let output = tokio::process::Command::new("dig")
            .args(["+short", "TXT", name])
            .output()
            .await
            .map_err(|e| CoreError::Dast(format!("dig command failed: {e}")))?;

        let stdout = String::from_utf8_lossy(&output.stdout);
        let lines: Vec<String> = stdout
            .lines()
            .map(|l| l.trim().trim_matches('"').to_string())
            .filter(|l| !l.is_empty())
            .collect();
        Ok(lines)
    }

    /// Parse a DMARC record string and return the policy value.
    fn parse_dmarc_policy(record: &str) -> Option<String> {
        for part in record.split(';') {
            let part = part.trim();
            if let Some(val) = part.strip_prefix("p=") {
                return Some(val.trim().to_lowercase());
            }
        }
        None
    }

    /// Parse DMARC record for sub-domain policy (sp=).
    fn parse_dmarc_subdomain_policy(record: &str) -> Option<String> {
        for part in record.split(';') {
            let part = part.trim();
            if let Some(val) = part.strip_prefix("sp=") {
                return Some(val.trim().to_lowercase());
            }
        }
        None
    }

    /// Parse DMARC record for reporting URI (rua=).
    fn parse_dmarc_rua(record: &str) -> Option<String> {
        for part in record.split(';') {
            let part = part.trim();
            if let Some(val) = part.strip_prefix("rua=") {
                return Some(val.trim().to_string());
            }
        }
        None
    }

    /// Check whether a TXT record is an SPF record.
    fn is_spf_record(record: &str) -> bool {
        record.starts_with("v=spf1")
    }

    /// Evaluate SPF record strength.
    fn spf_uses_soft_fail(record: &str) -> bool {
        record.contains("~all")
    }

    fn spf_allows_all(record: &str) -> bool {
        record.contains("+all")
    }
}

impl PentestTool for DmarcCheckerTool {
    fn name(&self) -> &str {
        "dmarc_checker"
    }

    fn description(&self) -> &str {
        "Checks email security configuration for a domain. Queries DMARC and SPF records and \
         evaluates policy strength. Reports missing or weak email authentication settings."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "domain": {
                    "type": "string",
                    "description": "The domain to check (e.g., 'example.com')"
                }
            },
            "required": ["domain"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>>
    {
        Box::pin(async move {
            let domain = input
                .get("domain")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'domain' parameter".to_string()))?;

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let mut findings = Vec::new();
            let mut email_data = json!({});

            // ---- DMARC check ----
            let dmarc_domain = format!("_dmarc.{domain}");
            let dmarc_records = Self::query_txt(&dmarc_domain).await.unwrap_or_default();

            let dmarc_record = dmarc_records.iter().find(|r| r.starts_with("v=DMARC1"));

            match dmarc_record {
                Some(record) => {
                    email_data["dmarc_record"] = json!(record);

                    let policy = Self::parse_dmarc_policy(record);
                    let sp = Self::parse_dmarc_subdomain_policy(record);
                    let rua = Self::parse_dmarc_rua(record);

                    email_data["dmarc_policy"] = json!(policy);
                    email_data["dmarc_subdomain_policy"] = json!(sp);
                    email_data["dmarc_rua"] = json!(rua);

                    // Warn on weak policy
                    if let Some(ref p) = policy {
                        if p == "none" {
                            let evidence = DastEvidence {
                                request_method: "DNS".to_string(),
                                request_url: dmarc_domain.clone(),
                                request_headers: None,
                                request_body: None,
                                response_status: 0,
                                response_headers: None,
                                response_snippet: Some(record.clone()),
                                screenshot_path: None,
                                payload: None,
                                response_time_ms: None,
                            };

                            let mut finding = DastFinding::new(
                                String::new(),
                                target_id.clone(),
                                DastVulnType::EmailSecurity,
                                format!("Weak DMARC policy for {domain}"),
                                format!(
                                    "The DMARC policy for {domain} is set to 'none', which only monitors \
                                     but does not enforce email authentication. Attackers can spoof emails \
                                     from this domain."
                                ),
                                Severity::Medium,
                                domain.to_string(),
                                "DNS".to_string(),
                            );
                            finding.cwe = Some("CWE-290".to_string());
                            finding.evidence = vec![evidence];
                            finding.remediation = Some(
                                "Upgrade the DMARC policy from 'p=none' to 'p=quarantine' or \
                                 'p=reject' after verifying legitimate email flows are properly \
                                 authenticated."
                                    .to_string(),
                            );
                            findings.push(finding);
                            warn!(domain, "DMARC policy is 'none'");
                        }
                    }

                    // Warn if no reporting URI
                    if rua.is_none() {
                        let evidence = DastEvidence {
                            request_method: "DNS".to_string(),
                            request_url: dmarc_domain.clone(),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some(record.clone()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::EmailSecurity,
                            format!("DMARC without reporting for {domain}"),
                            format!(
                                "The DMARC record for {domain} does not include a reporting URI (rua=). \
                                 Without reporting, you will not receive aggregate feedback about email \
                                 authentication failures."
                            ),
                            Severity::Info,
                            domain.to_string(),
                            "DNS".to_string(),
                        );
                        finding.cwe = Some("CWE-778".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Add a 'rua=' tag to your DMARC record to receive aggregate reports. \
                             Example: 'rua=mailto:dmarc-reports@example.com'."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
                None => {
                    email_data["dmarc_record"] = json!(null);

                    let evidence = DastEvidence {
                        request_method: "DNS".to_string(),
                        request_url: dmarc_domain.clone(),
                        request_headers: None,
                        request_body: None,
                        response_status: 0,
                        response_headers: None,
                        response_snippet: Some("No DMARC record found".to_string()),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::EmailSecurity,
                        format!("Missing DMARC record for {domain}"),
                        format!(
                            "No DMARC record was found for {domain}. Without DMARC, there is no \
                             policy to prevent email spoofing and phishing attacks using this domain."
                        ),
                        Severity::High,
                        domain.to_string(),
                        "DNS".to_string(),
                    );
                    finding.cwe = Some("CWE-290".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Create a DMARC TXT record at _dmarc.<domain>. Start with 'v=DMARC1; p=none; \
                         rua=mailto:dmarc@example.com' and gradually move to 'p=reject'."
                            .to_string(),
                    );
                    findings.push(finding);
                    warn!(domain, "No DMARC record found");
                }
            }

            // ---- SPF check ----
            let txt_records = Self::query_txt(domain).await.unwrap_or_default();
            let spf_record = txt_records.iter().find(|r| Self::is_spf_record(r));

            match spf_record {
                Some(record) => {
                    email_data["spf_record"] = json!(record);

                    if Self::spf_allows_all(record) {
                        let evidence = DastEvidence {
                            request_method: "DNS".to_string(),
                            request_url: domain.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some(record.clone()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::EmailSecurity,
                            format!("SPF allows all senders for {domain}"),
                            format!(
                                "The SPF record for {domain} uses '+all' which allows any server to \
                                 send email on behalf of this domain, completely negating SPF protection."
                            ),
                            Severity::Critical,
                            domain.to_string(),
                            "DNS".to_string(),
                        );
                        finding.cwe = Some("CWE-290".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Change '+all' to '-all' (hard fail) or '~all' (soft fail) in your SPF record. \
                             Only list authorized mail servers."
                                .to_string(),
                        );
                        findings.push(finding);
                    } else if Self::spf_uses_soft_fail(record) {
                        let evidence = DastEvidence {
                            request_method: "DNS".to_string(),
                            request_url: domain.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some(record.clone()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::EmailSecurity,
                            format!("SPF soft fail for {domain}"),
                            format!(
                                "The SPF record for {domain} uses '~all' (soft fail) instead of \
                                 '-all' (hard fail). Soft fail marks unauthorized emails as suspicious \
                                 but does not reject them."
                            ),
                            Severity::Low,
                            domain.to_string(),
                            "DNS".to_string(),
                        );
                        finding.cwe = Some("CWE-290".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Consider changing '~all' to '-all' in your SPF record once you have \
                             confirmed all legitimate mail sources are listed."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
                None => {
                    email_data["spf_record"] = json!(null);

                    let evidence = DastEvidence {
                        request_method: "DNS".to_string(),
                        request_url: domain.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: 0,
                        response_headers: None,
                        response_snippet: Some("No SPF record found".to_string()),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::EmailSecurity,
                        format!("Missing SPF record for {domain}"),
                        format!(
                            "No SPF record was found for {domain}. Without SPF, any server can claim \
                             to send email on behalf of this domain."
                        ),
                        Severity::High,
                        domain.to_string(),
                        "DNS".to_string(),
                    );
                    finding.cwe = Some("CWE-290".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Create an SPF TXT record for your domain. Example: \
                         'v=spf1 include:_spf.google.com -all'."
                            .to_string(),
                    );
                    findings.push(finding);
                    warn!(domain, "No SPF record found");
                }
            }

            let count = findings.len();
            info!(domain, findings = count, "DMARC/SPF check complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} email security issues for {domain}.")
                } else {
                    format!("Email security configuration looks good for {domain}.")
                },
                findings,
                data: email_data,
            })
        })
    }
}
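The three `parse_dmarc_*` helpers above share one pattern: split the record on `;`, trim each part, and strip a `tag=` prefix. That can be generalized into a single tag extractor; the `dmarc_tag` helper below is a hypothetical standalone sketch of that approach (note it lowercases values, which matches the policy parsers but not `parse_dmarc_rua`, which preserves case):

```rust
// Sketch of DMARC tag extraction: find the first ';'-separated part
// that starts with "tag=" and return the lowercased value.
// `dmarc_tag` is an illustrative helper, not the crate's API.
fn dmarc_tag(record: &str, tag: &str) -> Option<String> {
    let prefix = format!("{tag}=");
    record
        .split(';')
        .map(str::trim)
        .find_map(|part| part.strip_prefix(prefix.as_str()))
        .map(|v| v.trim().to_lowercase())
}

fn main() {
    let rec = "v=DMARC1; p=quarantine; sp=reject; rua=mailto:dmarc@example.com";
    assert_eq!(dmarc_tag(rec, "p").as_deref(), Some("quarantine"));
    assert_eq!(dmarc_tag(rec, "sp").as_deref(), Some("reject"));
    // Absent tags yield None rather than an error.
    assert_eq!(dmarc_tag(rec, "pct"), None);
    println!("policy = {:?}", dmarc_tag(rec, "p"));
}
```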
389 compliance-dast/src/tools/dns_checker.rs Normal file
@@ -0,0 +1,389 @@
use std::collections::HashMap;

use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tokio::net::lookup_host;
use tracing::{info, warn};

/// Tool that checks DNS configuration for security issues.
///
/// Resolves A/AAAA records using the system resolver via
/// `tokio::net::lookup_host`. For other record types (MX, NS, TXT, CNAME,
/// CAA, DNSKEY) it shells out to `dig` via `tokio::process::Command` where
/// available.
pub struct DnsCheckerTool;

impl DnsCheckerTool {
    pub fn new() -> Self {
        Self
    }

    /// Run a `dig` query and return the answer lines.
    async fn dig_query(domain: &str, record_type: &str) -> Result<Vec<String>, CoreError> {
        let output = tokio::process::Command::new("dig")
            .args(["+short", record_type, domain])
            .output()
            .await
            .map_err(|e| CoreError::Dast(format!("dig command failed: {e}")))?;

        let stdout = String::from_utf8_lossy(&output.stdout);
        let lines: Vec<String> = stdout
            .lines()
            .map(|l| l.trim().to_string())
            .filter(|l| !l.is_empty())
            .collect();
        Ok(lines)
    }

    /// Resolve A/AAAA records using tokio lookup.
    async fn resolve_addresses(domain: &str) -> Result<(Vec<String>, Vec<String>), CoreError> {
        let mut ipv4 = Vec::new();
        let mut ipv6 = Vec::new();

        let addr_str = format!("{domain}:443");
        match lookup_host(&addr_str).await {
            Ok(addrs) => {
                for addr in addrs {
                    match addr {
                        std::net::SocketAddr::V4(v4) => ipv4.push(v4.ip().to_string()),
                        std::net::SocketAddr::V6(v6) => ipv6.push(v6.ip().to_string()),
                    }
                }
            }
            Err(e) => {
                return Err(CoreError::Dast(format!("DNS resolution failed for {domain}: {e}")));
            }
        }

        Ok((ipv4, ipv6))
    }
}

impl PentestTool for DnsCheckerTool {
    fn name(&self) -> &str {
        "dns_checker"
    }

    fn description(&self) -> &str {
        "Checks DNS configuration for a domain. Resolves A, AAAA, MX, TXT, CNAME, NS records. \
         Checks for DNSSEC, CAA records, and potential subdomain takeover via dangling CNAME/NS."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "domain": {
                    "type": "string",
                    "description": "The domain to check (e.g., 'example.com')"
                },
                "subdomains": {
                    "type": "array",
                    "description": "Optional list of subdomains to also check (e.g., ['www', 'api', 'mail'])",
                    "items": { "type": "string" }
                }
            },
            "required": ["domain"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let domain = input
                .get("domain")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'domain' parameter".to_string()))?;

            let subdomains: Vec<String> = input
                .get("subdomains")
                .and_then(|v| v.as_array())
                .map(|arr| {
                    arr.iter()
                        .filter_map(|v| v.as_str().map(String::from))
                        .collect()
                })
                .unwrap_or_default();

            let mut findings = Vec::new();
            let mut dns_data: HashMap<String, serde_json::Value> = HashMap::new();

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            // --- A / AAAA records ---
            match Self::resolve_addresses(domain).await {
                Ok((ipv4, ipv6)) => {
                    dns_data.insert("a_records".to_string(), json!(ipv4));
                    dns_data.insert("aaaa_records".to_string(), json!(ipv6));
                }
                Err(e) => {
                    dns_data.insert("a_records_error".to_string(), json!(e.to_string()));
                }
            }

            // --- MX records ---
            match Self::dig_query(domain, "MX").await {
                Ok(mx) => {
                    dns_data.insert("mx_records".to_string(), json!(mx));
                }
                Err(e) => {
                    dns_data.insert("mx_records_error".to_string(), json!(e.to_string()));
                }
            }

            // --- NS records ---
            let ns_records = match Self::dig_query(domain, "NS").await {
                Ok(ns) => {
                    dns_data.insert("ns_records".to_string(), json!(ns));
                    ns
                }
                Err(e) => {
                    dns_data.insert("ns_records_error".to_string(), json!(e.to_string()));
                    Vec::new()
                }
            };

            // --- TXT records ---
            match Self::dig_query(domain, "TXT").await {
                Ok(txt) => {
                    dns_data.insert("txt_records".to_string(), json!(txt));
                }
                Err(e) => {
                    dns_data.insert("txt_records_error".to_string(), json!(e.to_string()));
                }
            }

            // --- CNAME records (for subdomains) ---
            let mut cname_data: HashMap<String, Vec<String>> = HashMap::new();
            let mut domains_to_check = vec![domain.to_string()];
            for sub in &subdomains {
                domains_to_check.push(format!("{sub}.{domain}"));
            }

            for fqdn in &domains_to_check {
                match Self::dig_query(fqdn, "CNAME").await {
                    Ok(cnames) if !cnames.is_empty() => {
                        // Check for dangling CNAME
                        for cname in &cnames {
                            let cname_clean = cname.trim_end_matches('.');
                            let check_addr = format!("{cname_clean}:443");
                            let is_dangling = lookup_host(&check_addr).await.is_err();
                            if is_dangling {
                                let evidence = DastEvidence {
                                    request_method: "DNS".to_string(),
                                    request_url: fqdn.clone(),
                                    request_headers: None,
                                    request_body: None,
                                    response_status: 0,
                                    response_headers: None,
                                    response_snippet: Some(format!(
                                        "CNAME {fqdn} -> {cname} (target does not resolve)"
                                    )),
                                    screenshot_path: None,
                                    payload: None,
                                    response_time_ms: None,
                                };

                                let mut finding = DastFinding::new(
                                    String::new(),
                                    target_id.clone(),
                                    DastVulnType::DnsMisconfiguration,
                                    format!("Dangling CNAME on {fqdn}"),
                                    format!(
                                        "The subdomain {fqdn} has a CNAME record pointing to {cname} which does not resolve. \
                                         This may allow subdomain takeover if an attacker can claim the target hostname."
                                    ),
                                    Severity::High,
                                    fqdn.clone(),
                                    "DNS".to_string(),
                                );
                                finding.cwe = Some("CWE-923".to_string());
                                finding.evidence = vec![evidence];
                                finding.remediation = Some(
                                    "Remove dangling CNAME records or ensure the target hostname is \
                                     properly configured and resolvable."
                                        .to_string(),
                                );
                                findings.push(finding);
                                warn!(fqdn, cname, "Dangling CNAME detected - potential subdomain takeover");
                            }
                        }
                        cname_data.insert(fqdn.clone(), cnames);
                    }
                    _ => {}
                }
            }
            if !cname_data.is_empty() {
                dns_data.insert("cname_records".to_string(), json!(cname_data));
            }

            // --- CAA records ---
            match Self::dig_query(domain, "CAA").await {
                Ok(caa) => {
                    if caa.is_empty() {
                        let evidence = DastEvidence {
                            request_method: "DNS".to_string(),
                            request_url: domain.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some("No CAA records found".to_string()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::DnsMisconfiguration,
                            format!("Missing CAA records for {domain}"),
                            format!(
                                "No CAA (Certificate Authority Authorization) records are set for {domain}. \
                                 Without CAA records, any certificate authority can issue certificates for this domain."
                            ),
                            Severity::Low,
                            domain.to_string(),
                            "DNS".to_string(),
                        );
                        finding.cwe = Some("CWE-295".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Add CAA DNS records to restrict which certificate authorities can issue \
                             certificates for your domain. Example: '0 issue \"letsencrypt.org\"'."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                    dns_data.insert("caa_records".to_string(), json!(caa));
                }
                Err(e) => {
                    dns_data.insert("caa_records_error".to_string(), json!(e.to_string()));
                }
            }

            // --- DNSSEC check ---
            let dnssec_output = tokio::process::Command::new("dig")
                .args(["+dnssec", "+short", "DNSKEY", domain])
                .output()
                .await;

            match dnssec_output {
                Ok(output) => {
                    let stdout = String::from_utf8_lossy(&output.stdout);
                    let has_dnssec = !stdout.trim().is_empty();
                    dns_data.insert("dnssec_enabled".to_string(), json!(has_dnssec));

                    if !has_dnssec {
                        let evidence = DastEvidence {
                            request_method: "DNS".to_string(),
                            request_url: domain.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some("No DNSKEY records found - DNSSEC not enabled".to_string()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::DnsMisconfiguration,
                            format!("DNSSEC not enabled for {domain}"),
                            format!(
                                "DNSSEC is not enabled for {domain}. Without DNSSEC, DNS responses \
                                 can be spoofed, allowing man-in-the-middle attacks."
                            ),
                            Severity::Medium,
                            domain.to_string(),
                            "DNS".to_string(),
                        );
                        finding.cwe = Some("CWE-350".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Enable DNSSEC for your domain by configuring DNSKEY and DS records \
                             with your DNS provider and domain registrar."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
                Err(_) => {
                    dns_data.insert("dnssec_check_error".to_string(), json!("dig not available"));
                }
            }

            // --- Check NS records for dangling ---
            for ns in &ns_records {
                let ns_clean = ns.trim_end_matches('.');
                let check_addr = format!("{ns_clean}:53");
                if lookup_host(&check_addr).await.is_err() {
                    let evidence = DastEvidence {
                        request_method: "DNS".to_string(),
                        request_url: domain.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: 0,
                        response_headers: None,
                        response_snippet: Some(format!("NS record {ns} does not resolve")),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::DnsMisconfiguration,
                        format!("Dangling NS record for {domain}"),
                        format!(
                            "The NS record {ns} for {domain} does not resolve. \
                             This could allow domain takeover if an attacker can claim the nameserver hostname."
                        ),
                        Severity::Critical,
                        domain.to_string(),
                        "DNS".to_string(),
                    );
                    finding.cwe = Some("CWE-923".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Remove dangling NS records or ensure the nameserver hostname is properly \
                         configured. Dangling NS records can lead to full domain takeover."
                            .to_string(),
                    );
                    findings.push(finding);
                    warn!(domain, ns, "Dangling NS record detected - potential domain takeover");
                }
            }

            let count = findings.len();
            info!(domain, findings = count, "DNS check complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} DNS configuration issues for {domain}.")
                } else {
                    format!("No DNS configuration issues found for {domain}.")
                },
                findings,
                data: json!(dns_data),
            })
        })
    }
}
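Since the checker shells out to `dig +short` and then re-resolves CNAME/NS targets, the normalization it relies on can be sketched in isolation. This is an illustrative, stdlib-only reduction rather than the tool's actual code: `normalize_dig_output` and `strip_trailing_dot` are hypothetical names for logic that lives inline in `dig_query` and the dangling-record checks.

```rust
/// Normalize `dig +short` output the way dig_query does:
/// trim each answer line and drop empty ones.
fn normalize_dig_output(stdout: &str) -> Vec<String> {
    stdout
        .lines()
        .map(|l| l.trim().to_string())
        .filter(|l| !l.is_empty())
        .collect()
}

/// Strip the trailing root dot before building a "host:port" string
/// for a resolution check, as the dangling CNAME/NS paths do.
fn strip_trailing_dot(record: &str) -> &str {
    record.trim_end_matches('.')
}

fn main() {
    let raw = "cdn.example.net.\n\n  10 mail.example.com.  \n";
    let lines = normalize_dig_output(raw);
    assert_eq!(lines, vec!["cdn.example.net.", "10 mail.example.com."]);
    assert_eq!(strip_trailing_dot("cdn.example.net."), "cdn.example.net");
    println!("{lines:?}");
}
```

The trailing-dot strip matters: `dig` returns fully-qualified names (`host.example.com.`), and `lookup_host("host.example.com.:443")` would otherwise probe the wrong string.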
141 compliance-dast/src/tools/mod.rs Normal file
@@ -0,0 +1,141 @@
pub mod api_fuzzer;
pub mod auth_bypass;
pub mod console_log_detector;
pub mod cookie_analyzer;
pub mod cors_checker;
pub mod csp_analyzer;
pub mod dmarc_checker;
pub mod dns_checker;
pub mod openapi_parser;
pub mod rate_limit_tester;
pub mod recon;
pub mod security_headers;
pub mod sql_injection;
pub mod ssrf;
pub mod tls_analyzer;
pub mod xss;

use std::collections::HashMap;

use compliance_core::traits::pentest_tool::PentestTool;

/// A definition describing a tool for LLM tool_use registration.
#[derive(Debug, Clone)]
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
    pub input_schema: serde_json::Value,
}

/// Registry that holds all available pentest tools and provides
/// look-up by name.
pub struct ToolRegistry {
    tools: HashMap<String, Box<dyn PentestTool>>,
}

impl ToolRegistry {
    /// Create a new registry with all built-in tools pre-registered.
    pub fn new() -> Self {
        let http = reqwest::Client::builder()
            .danger_accept_invalid_certs(true)
            .timeout(std::time::Duration::from_secs(30))
            .redirect(reqwest::redirect::Policy::limited(5))
            .build()
            .expect("failed to build HTTP client");

        let mut tools: HashMap<String, Box<dyn PentestTool>> = HashMap::new();

        let register = |tools: &mut HashMap<String, Box<dyn PentestTool>>,
                        tool: Box<dyn PentestTool>| {
            tools.insert(tool.name().to_string(), tool);
        };

        // Agent-wrapping tools
        register(
            &mut tools,
            Box::new(sql_injection::SqlInjectionTool::new(http.clone())),
        );
        register(&mut tools, Box::new(xss::XssTool::new(http.clone())));
        register(
            &mut tools,
            Box::new(auth_bypass::AuthBypassTool::new(http.clone())),
        );
        register(&mut tools, Box::new(ssrf::SsrfTool::new(http.clone())));
        register(
            &mut tools,
            Box::new(api_fuzzer::ApiFuzzerTool::new(http.clone())),
        );

        // New infrastructure / analysis tools
        register(&mut tools, Box::new(dns_checker::DnsCheckerTool::new()));
        register(&mut tools, Box::new(dmarc_checker::DmarcCheckerTool::new()));
        register(
            &mut tools,
            Box::new(tls_analyzer::TlsAnalyzerTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(security_headers::SecurityHeadersTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(cookie_analyzer::CookieAnalyzerTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(csp_analyzer::CspAnalyzerTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(rate_limit_tester::RateLimitTesterTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(console_log_detector::ConsoleLogDetectorTool::new(
                http.clone(),
            )),
        );
        register(
            &mut tools,
            Box::new(cors_checker::CorsCheckerTool::new(http.clone())),
        );
        register(
            &mut tools,
            Box::new(openapi_parser::OpenApiParserTool::new(http.clone())),
        );
        register(&mut tools, Box::new(recon::ReconTool::new(http)));

        Self { tools }
    }

    /// Look up a tool by name.
    pub fn get(&self, name: &str) -> Option<&dyn PentestTool> {
        self.tools.get(name).map(|b| b.as_ref())
    }

    /// Return definitions for every registered tool.
    pub fn all_definitions(&self) -> Vec<ToolDefinition> {
        self.tools
            .values()
            .map(|t| ToolDefinition {
                name: t.name().to_string(),
                description: t.description().to_string(),
                input_schema: t.input_schema(),
            })
            .collect()
    }

    /// Return the names of all registered tools.
    pub fn list_names(&self) -> Vec<String> {
        self.tools.keys().cloned().collect()
    }
}
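The registry above is an instance of the trait-object map pattern: each tool is boxed behind a shared trait and indexed by its self-reported name, so the agent loop can dispatch by string. A minimal self-contained sketch, with a stand-in `Tool` trait instead of the real `PentestTool`:

```rust
use std::collections::HashMap;

// Stand-in for the real PentestTool trait (illustrative only).
trait Tool {
    fn name(&self) -> &str;
}

struct DnsChecker;
impl Tool for DnsChecker {
    fn name(&self) -> &str {
        "dns_checker"
    }
}

struct Registry {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl Registry {
    fn new() -> Self {
        let mut tools: HashMap<String, Box<dyn Tool>> = HashMap::new();
        // Keying by tool.name() means adding a tool is one insert;
        // no separate name table to keep in sync.
        let tool: Box<dyn Tool> = Box::new(DnsChecker);
        tools.insert(tool.name().to_string(), tool);
        Self { tools }
    }

    fn get(&self, name: &str) -> Option<&dyn Tool> {
        self.tools.get(name).map(|b| b.as_ref())
    }
}

fn main() {
    let reg = Registry::new();
    assert!(reg.get("dns_checker").is_some());
    assert!(reg.get("missing").is_none());
    println!("registered: {:?}", reg.tools.keys().collect::<Vec<_>>());
}
```

One trade-off worth noting: because the names come from `tool.name()`, two tools that report the same name would silently overwrite each other in the map.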
422 compliance-dast/src/tools/openapi_parser.rs Normal file
@@ -0,0 +1,422 @@
use compliance_core::error::CoreError;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

/// Tool that discovers and parses OpenAPI/Swagger specification files.
///
/// Returns structured endpoint definitions for the LLM to use when planning
/// further tests. This tool produces data rather than security findings.
pub struct OpenApiParserTool {
    http: reqwest::Client,
}

/// A parsed endpoint from an OpenAPI spec.
#[derive(Debug, Clone)]
struct ParsedEndpoint {
    path: String,
    method: String,
    operation_id: Option<String>,
    summary: Option<String>,
    parameters: Vec<ParsedParameter>,
    request_body_content_type: Option<String>,
    response_codes: Vec<String>,
    security: Vec<String>,
    tags: Vec<String>,
}

/// A parsed parameter from an OpenAPI spec.
#[derive(Debug, Clone)]
struct ParsedParameter {
    name: String,
    location: String,
    required: bool,
    param_type: Option<String>,
    description: Option<String>,
}

impl OpenApiParserTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Common paths where OpenAPI/Swagger specs are typically served.
    fn common_spec_paths() -> Vec<&'static str> {
        vec![
            "/openapi.json",
            "/openapi.yaml",
            "/swagger.json",
            "/swagger.yaml",
            "/api-docs",
            "/api-docs.json",
            "/v2/api-docs",
            "/v3/api-docs",
            "/docs/openapi.json",
            "/api/swagger.json",
            "/api/openapi.json",
            "/api/v1/openapi.json",
            "/api/v2/openapi.json",
            "/.well-known/openapi.json",
        ]
    }

    /// Try to fetch a spec from a URL and return the JSON value if successful.
    async fn try_fetch_spec(
        http: &reqwest::Client,
        url: &str,
    ) -> Option<(String, serde_json::Value)> {
        let resp = http.get(url).send().await.ok()?;
        if !resp.status().is_success() {
            return None;
        }

        let content_type = resp
            .headers()
            .get("content-type")
            .and_then(|v| v.to_str().ok())
            .unwrap_or("")
            .to_string();

        let body = resp.text().await.ok()?;

        // Try JSON first
        if let Ok(val) = serde_json::from_str::<serde_json::Value>(&body) {
            // Verify it looks like an OpenAPI / Swagger spec
            if val.get("openapi").is_some()
                || val.get("swagger").is_some()
                || val.get("paths").is_some()
            {
                return Some((url.to_string(), val));
            }
        }

        // If content type suggests YAML, we can't easily parse without a YAML dep,
        // so just report the URL as found
        if content_type.contains("yaml") || body.starts_with("openapi:") || body.starts_with("swagger:") {
            // Return a minimal JSON indicating YAML was found
            return Some((
                url.to_string(),
                json!({
                    "_note": "YAML spec detected but not parsed. Fetch and convert to JSON.",
                    "_raw_url": url,
                }),
            ));
        }

        None
    }

    /// Parse an OpenAPI 3.x or Swagger 2.x spec into structured endpoints.
    fn parse_spec(spec: &serde_json::Value, base_url: &str) -> Vec<ParsedEndpoint> {
        let mut endpoints = Vec::new();

        // Determine base path
        let base_path = if let Some(servers) = spec.get("servers").and_then(|v| v.as_array()) {
            servers
                .first()
                .and_then(|s| s.get("url"))
                .and_then(|u| u.as_str())
                .unwrap_or("")
                .to_string()
        } else if let Some(bp) = spec.get("basePath").and_then(|v| v.as_str()) {
            bp.to_string()
        } else {
            String::new()
        };

        let paths = match spec.get("paths").and_then(|v| v.as_object()) {
            Some(p) => p,
            None => return endpoints,
        };

        for (path, path_item) in paths {
            let path_obj = match path_item.as_object() {
                Some(o) => o,
                None => continue,
            };

            // Path-level parameters
            let path_params = path_obj
                .get("parameters")
                .and_then(|v| v.as_array())
                .cloned()
                .unwrap_or_default();

            for method in &["get", "post", "put", "patch", "delete", "head", "options"] {
                let operation = match path_obj.get(*method).and_then(|v| v.as_object()) {
                    Some(o) => o,
                    None => continue,
                };

                let operation_id = operation
                    .get("operationId")
                    .and_then(|v| v.as_str())
                    .map(String::from);

                let summary = operation
                    .get("summary")
                    .and_then(|v| v.as_str())
                    .map(String::from);

                let tags: Vec<String> = operation
                    .get("tags")
                    .and_then(|v| v.as_array())
                    .map(|arr| {
                        arr.iter()
                            .filter_map(|t| t.as_str().map(String::from))
                            .collect()
                    })
                    .unwrap_or_default();

                // Merge path-level and operation-level parameters
                let mut parameters = Vec::new();
                let op_params = operation
                    .get("parameters")
                    .and_then(|v| v.as_array())
                    .cloned()
                    .unwrap_or_default();

                for param_val in path_params.iter().chain(op_params.iter()) {
                    let name = param_val
                        .get("name")
                        .and_then(|v| v.as_str())
                        .unwrap_or_default()
                        .to_string();
                    let location = param_val
                        .get("in")
                        .and_then(|v| v.as_str())
                        .unwrap_or("query")
                        .to_string();
                    let required = param_val
                        .get("required")
                        .and_then(|v| v.as_bool())
                        .unwrap_or(false);

                    // Type from schema or direct type field
                    let param_type = param_val
                        .get("schema")
                        .and_then(|s| s.get("type"))
                        .or_else(|| param_val.get("type"))
                        .and_then(|v| v.as_str())
                        .map(String::from);

                    let description = param_val
                        .get("description")
                        .and_then(|v| v.as_str())
                        .map(String::from);

                    parameters.push(ParsedParameter {
                        name,
                        location,
                        required,
                        param_type,
                        description,
                    });
                }

                // Request body (OpenAPI 3.x)
                let request_body_content_type = operation
                    .get("requestBody")
                    .and_then(|rb| rb.get("content"))
                    .and_then(|c| c.as_object())
                    .and_then(|obj| obj.keys().next().cloned());

                // Response codes
                let response_codes: Vec<String> = operation
                    .get("responses")
                    .and_then(|r| r.as_object())
                    .map(|obj| obj.keys().cloned().collect())
                    .unwrap_or_default();

                // Security requirements
                let security: Vec<String> = operation
                    .get("security")
                    .and_then(|v| v.as_array())
                    .map(|arr| {
                        arr.iter()
                            .filter_map(|s| s.as_object())
                            .flat_map(|obj| obj.keys().cloned())
                            .collect()
                    })
                    .unwrap_or_default();

                endpoints.push(ParsedEndpoint {
                    path: format!("{}{}", base_path, path),
                    method: method.to_uppercase(),
                    operation_id,
                    summary,
                    parameters,
                    request_body_content_type,
                    response_codes,
                    security,
                    tags,
                });
            }
        }

        endpoints
    }
}

impl PentestTool for OpenApiParserTool {
    fn name(&self) -> &str {
        "openapi_parser"
    }

    fn description(&self) -> &str {
        "Discovers and parses OpenAPI/Swagger specifications. Tries common spec paths and \
         returns structured endpoint definitions including parameters, methods, and security \
         requirements. Use this to discover all API endpoints before testing."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "base_url": {
                    "type": "string",
                    "description": "Base URL of the API to discover specs from"
                },
                "spec_url": {
                    "type": "string",
                    "description": "Optional explicit URL of the OpenAPI/Swagger spec file"
                }
            },
            "required": ["base_url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let base_url = input
                .get("base_url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'base_url' parameter".to_string()))?;

            let explicit_spec_url = input.get("spec_url").and_then(|v| v.as_str());

            let base_url_trimmed = base_url.trim_end_matches('/');

            // If an explicit spec URL is provided, try it first
            let mut spec_result: Option<(String, serde_json::Value)> = None;
            if let Some(spec_url) = explicit_spec_url {
                spec_result = Self::try_fetch_spec(&self.http, spec_url).await;
            }

            // If no explicit URL or it failed, try common paths
            if spec_result.is_none() {
                for path in Self::common_spec_paths() {
                    let url = format!("{base_url_trimmed}{path}");
                    if let Some(result) = Self::try_fetch_spec(&self.http, &url).await {
                        spec_result = Some(result);
                        break;
                    }
                }
            }

            match spec_result {
                Some((spec_url, spec)) => {
                    let spec_version = spec
                        .get("openapi")
                        .or_else(|| spec.get("swagger"))
                        .and_then(|v| v.as_str())
                        .unwrap_or("unknown");

                    let api_title = spec
                        .get("info")
                        .and_then(|i| i.get("title"))
                        .and_then(|t| t.as_str())
                        .unwrap_or("Unknown API");

                    let api_version = spec
                        .get("info")
                        .and_then(|i| i.get("version"))
                        .and_then(|v| v.as_str())
                        .unwrap_or("unknown");

                    let endpoints = Self::parse_spec(&spec, base_url_trimmed);

                    let endpoint_data: Vec<serde_json::Value> = endpoints
                        .iter()
                        .map(|ep| {
                            let params: Vec<serde_json::Value> = ep
                                .parameters
                                .iter()
                                .map(|p| {
                                    json!({
                                        "name": p.name,
                                        "in": p.location,
                                        "required": p.required,
                                        "type": p.param_type,
                                        "description": p.description,
                                    })
                                })
                                .collect();

                            json!({
                                "path": ep.path,
                                "method": ep.method,
                                "operation_id": ep.operation_id,
                                "summary": ep.summary,
                                "parameters": params,
                                "request_body_content_type": ep.request_body_content_type,
                                "response_codes": ep.response_codes,
                                "security": ep.security,
                                "tags": ep.tags,
                            })
                        })
                        .collect();

                    let endpoint_count = endpoints.len();
                    info!(
                        spec_url = %spec_url,
                        spec_version,
                        api_title,
                        endpoints = endpoint_count,
                        "OpenAPI spec parsed"
                    );

                    Ok(PentestToolResult {
                        summary: format!(
                            "Found OpenAPI spec ({spec_version}) at {spec_url}. \
                             API: {api_title} v{api_version}. \
                             Parsed {endpoint_count} endpoints."
                        ),
                        findings: Vec::new(), // This tool produces data, not findings
                        data: json!({
                            "spec_url": spec_url,
                            "spec_version": spec_version,
                            "api_title": api_title,
                            "api_version": api_version,
                            "endpoint_count": endpoint_count,
                            "endpoints": endpoint_data,
                            "security_schemes": spec.get("components")
                                .and_then(|c| c.get("securitySchemes"))
                                .or_else(|| spec.get("securityDefinitions")),
                        }),
                    })
                }
                None => {
                    info!(base_url, "No OpenAPI spec found");

                    Ok(PentestToolResult {
                        summary: format!(
                            "No OpenAPI/Swagger specification found for {base_url}. \
                             Tried {} common paths.",
                            Self::common_spec_paths().len()
                        ),
                        findings: Vec::new(),
                        data: json!({
                            "spec_found": false,
                            "paths_tried": Self::common_spec_paths(),
                        }),
                    })
                }
            }
        })
    }
}
285
compliance-dast/src/tools/rate_limit_tester.rs
Normal file
@@ -0,0 +1,285 @@
use std::time::Instant;

use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::{info, warn};

/// Tool that tests whether a target enforces rate limiting.
pub struct RateLimitTesterTool {
    http: reqwest::Client,
}

impl RateLimitTesterTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }
}

impl PentestTool for RateLimitTesterTool {
    fn name(&self) -> &str {
        "rate_limit_tester"
    }

    fn description(&self) -> &str {
        "Tests whether an endpoint enforces rate limiting by sending rapid sequential requests. \
         Checks for 429 responses and measures response time degradation."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL of the endpoint to test for rate limiting"
                },
                "method": {
                    "type": "string",
                    "description": "HTTP method to use",
                    "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"],
                    "default": "GET"
                },
                "request_count": {
                    "type": "integer",
                    "description": "Number of rapid requests to send (default: 50)",
                    "default": 50,
                    "minimum": 10,
                    "maximum": 200
                },
                "body": {
                    "type": "string",
                    "description": "Optional request body (for POST/PUT/PATCH)"
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let method = input
                .get("method")
                .and_then(|v| v.as_str())
                .unwrap_or("GET");

            let request_count = input
                .get("request_count")
                .and_then(|v| v.as_u64())
                .unwrap_or(50)
                .min(200) as usize;

            let body = input.get("body").and_then(|v| v.as_str());

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            // Respect the context rate limit if set
            let max_requests = if context.rate_limit > 0 {
                request_count.min(context.rate_limit as usize * 5)
            } else {
                request_count
            };

            let mut status_codes: Vec<u16> = Vec::with_capacity(max_requests);
            let mut response_times: Vec<u128> = Vec::with_capacity(max_requests);
            let mut got_429 = false;
            let mut rate_limit_at_request: Option<usize> = None;

            for i in 0..max_requests {
                let start = Instant::now();

                let request = match method {
                    "POST" => {
                        let mut req = self.http.post(url);
                        if let Some(b) = body {
                            req = req.body(b.to_string());
                        }
                        req
                    }
                    "PUT" => {
                        let mut req = self.http.put(url);
                        if let Some(b) = body {
                            req = req.body(b.to_string());
                        }
                        req
                    }
                    "PATCH" => {
                        let mut req = self.http.patch(url);
                        if let Some(b) = body {
                            req = req.body(b.to_string());
                        }
                        req
                    }
                    "DELETE" => self.http.delete(url),
                    _ => self.http.get(url),
                };

                match request.send().await {
                    Ok(resp) => {
                        let elapsed = start.elapsed().as_millis();
                        let status = resp.status().as_u16();
                        status_codes.push(status);
                        response_times.push(elapsed);

                        if status == 429 && !got_429 {
                            got_429 = true;
                            rate_limit_at_request = Some(i + 1);
                            info!(url, request_num = i + 1, "Rate limit triggered (429)");
                        }

                        // Check for rate limit headers even on 200
                        if !got_429 {
                            let headers = resp.headers();
                            let has_rate_headers = headers.contains_key("x-ratelimit-limit")
                                || headers.contains_key("x-ratelimit-remaining")
                                || headers.contains_key("ratelimit-limit")
                                || headers.contains_key("ratelimit-remaining")
                                || headers.contains_key("retry-after");

                            if has_rate_headers && rate_limit_at_request.is_none() {
                                // Server has rate limit headers but hasn't blocked yet
                            }
                        }
                    }
                    Err(_) => {
                        // Record failed requests as status 0 so the timing series stays aligned
                        let elapsed = start.elapsed().as_millis();
                        status_codes.push(0);
                        response_times.push(elapsed);
                    }
                }
            }

            let mut findings = Vec::new();
            let total_sent = status_codes.len();
            let count_429 = status_codes.iter().filter(|&&s| s == 429).count();
            let count_success = status_codes.iter().filter(|&&s| (200..300).contains(&s)).count();

            // Calculate response time statistics
            let avg_time = if !response_times.is_empty() {
                response_times.iter().sum::<u128>() / response_times.len() as u128
            } else {
                0
            };

            let first_half_avg = if response_times.len() >= 4 {
                let half = response_times.len() / 2;
                response_times[..half].iter().sum::<u128>() / half as u128
            } else {
                avg_time
            };

            let second_half_avg = if response_times.len() >= 4 {
                let half = response_times.len() / 2;
                response_times[half..].iter().sum::<u128>() / (response_times.len() - half) as u128
            } else {
                avg_time
            };

            // Significant time degradation suggests possible (weak) rate limiting
            let time_degradation = if first_half_avg > 0 {
                (second_half_avg as f64 / first_half_avg as f64) - 1.0
            } else {
                0.0
            };

            let rate_data = json!({
                "total_requests_sent": total_sent,
                "status_429_count": count_429,
                "success_count": count_success,
                "rate_limit_at_request": rate_limit_at_request,
                "avg_response_time_ms": avg_time,
                "first_half_avg_ms": first_half_avg,
                "second_half_avg_ms": second_half_avg,
                "time_degradation_pct": (time_degradation * 100.0).round(),
            });

            if !got_429 && count_success == total_sent {
                // No rate limiting detected at all
                let evidence = DastEvidence {
                    request_method: method.to_string(),
                    request_url: url.to_string(),
                    request_headers: None,
                    request_body: body.map(String::from),
                    response_status: 200,
                    response_headers: None,
                    response_snippet: Some(format!(
                        "Sent {total_sent} rapid requests. All returned success (2xx). \
                         No 429 responses received. Avg response time: {avg_time}ms."
                    )),
                    screenshot_path: None,
                    payload: None,
                    response_time_ms: Some(avg_time as u64),
                };

                let mut finding = DastFinding::new(
                    String::new(),
                    target_id.clone(),
                    DastVulnType::RateLimitAbsent,
                    format!("No rate limiting on {} {}", method, url),
                    format!(
                        "The endpoint {} {} does not enforce rate limiting. \
                         {total_sent} rapid requests were all accepted with no 429 responses \
                         or noticeable degradation. This makes the endpoint vulnerable to \
                         brute force attacks and abuse.",
                        method, url
                    ),
                    Severity::Medium,
                    url.to_string(),
                    method.to_string(),
                );
                finding.cwe = Some("CWE-770".to_string());
                finding.evidence = vec![evidence];
                finding.remediation = Some(
                    "Implement rate limiting on this endpoint. Use token bucket or sliding window \
                     algorithms. Return 429 Too Many Requests with a Retry-After header when the \
                     limit is exceeded."
                        .to_string(),
                );
                findings.push(finding);
                warn!(url, method, total_sent, "No rate limiting detected");
            } else if got_429 {
                info!(
                    url,
                    method,
                    rate_limit_at = ?rate_limit_at_request,
                    "Rate limiting is enforced"
                );
            }

            let count = findings.len();

            Ok(PentestToolResult {
                summary: if got_429 {
                    format!(
                        "Rate limiting is enforced on {method} {url}. \
                         429 response received after {} requests.",
                        rate_limit_at_request.unwrap_or(0)
                    )
                } else if count > 0 {
                    format!(
                        "No rate limiting detected on {method} {url} after {total_sent} requests."
                    )
                } else {
                    format!("Rate limit testing complete for {method} {url}.")
                },
                findings,
                data: rate_data,
            })
        })
    }
}
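The degradation heuristic above (comparing the mean latency of the first half of the run against the second half) can be sketched as a standalone function. This is an illustrative re-derivation, not crate code; `degradation_pct` is a hypothetical name, and it mirrors the tool's logic of returning a percentage only once at least four samples exist.

```rust
// Sketch of the response-time degradation heuristic: a large positive
// percentage (second half much slower than first) hints at soft throttling
// even when no 429 is ever returned.
fn degradation_pct(times_ms: &[u128]) -> f64 {
    if times_ms.len() < 4 {
        return 0.0; // too few samples to split meaningfully
    }
    let half = times_ms.len() / 2;
    let first = times_ms[..half].iter().sum::<u128>() / half as u128;
    let second = times_ms[half..].iter().sum::<u128>() / (times_ms.len() - half) as u128;
    if first == 0 {
        return 0.0; // avoid division by zero on degenerate timings
    }
    ((second as f64 / first as f64) - 1.0) * 100.0
}

fn main() {
    // Latencies double in the second half of the run: +100% degradation.
    let times = [100u128, 100, 100, 100, 200, 200, 200, 200];
    println!("{}", degradation_pct(&times).round()); // prints 100
}
```

Using integer division for the per-half means matches the tool's millisecond-granularity bookkeeping; only the final ratio is computed in floating point.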
125
compliance-dast/src/tools/recon.rs
Normal file
@@ -0,0 +1,125 @@
use std::collections::HashMap;

use compliance_core::error::CoreError;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

use crate::recon::ReconAgent;

/// PentestTool wrapper around the existing ReconAgent.
///
/// Performs HTTP header fingerprinting and technology detection.
/// Returns structured recon data for the LLM to use when planning attacks.
pub struct ReconTool {
    http: reqwest::Client,
    agent: ReconAgent,
}

impl ReconTool {
    pub fn new(http: reqwest::Client) -> Self {
        let agent = ReconAgent::new(http.clone());
        Self { http, agent }
    }
}

impl PentestTool for ReconTool {
    fn name(&self) -> &str {
        "recon"
    }

    fn description(&self) -> &str {
        "Performs reconnaissance on a target URL. Fingerprints HTTP headers, detects server \
         technologies and frameworks. Returns structured data about the target's technology stack."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "Base URL to perform reconnaissance on"
                },
                "additional_paths": {
                    "type": "array",
                    "description": "Optional additional paths to probe for technology fingerprinting",
                    "items": { "type": "string" }
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        _context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let additional_paths: Vec<String> = input
                .get("additional_paths")
                .and_then(|v| v.as_array())
                .map(|arr| {
                    arr.iter()
                        .filter_map(|v| v.as_str().map(String::from))
                        .collect()
                })
                .unwrap_or_default();

            let result = self.agent.scan(url).await?;

            // Scan additional paths for more technology signals
            let mut extra_technologies: Vec<String> = Vec::new();
            let mut extra_headers: HashMap<String, String> = HashMap::new();

            let base_url = url.trim_end_matches('/');
            for path in &additional_paths {
                let probe_url = format!("{base_url}/{}", path.trim_start_matches('/'));
                if let Ok(resp) = self.http.get(&probe_url).send().await {
                    for (key, value) in resp.headers() {
                        let k = key.to_string().to_lowercase();
                        let v = value.to_str().unwrap_or("").to_string();

                        // Look for technology indicators
                        if (k == "x-powered-by" || k == "server" || k == "x-generator")
                            && !result.technologies.contains(&v)
                            && !extra_technologies.contains(&v)
                        {
                            extra_technologies.push(v.clone());
                        }
                        extra_headers.insert(format!("{probe_url} -> {k}"), v);
                    }
                }
            }

            let mut all_technologies = result.technologies.clone();
            all_technologies.extend(extra_technologies);
            all_technologies.dedup();

            let tech_count = all_technologies.len();
            info!(url, technologies = tech_count, "Recon complete");

            Ok(PentestToolResult {
                summary: format!(
                    "Recon complete for {url}. Detected {} technologies. Server: {}.",
                    tech_count,
                    result.server.as_deref().unwrap_or("unknown")
                ),
                findings: Vec::new(), // Recon produces data, not findings
                data: json!({
                    "base_url": url,
                    "server": result.server,
                    "technologies": all_technologies,
                    "interesting_headers": result.interesting_headers,
                    "extra_headers": extra_headers,
                    "open_ports": result.open_ports,
                }),
            })
        })
    }
}
300
compliance-dast/src/tools/security_headers.rs
Normal file
@@ -0,0 +1,300 @@
use std::collections::HashMap;

use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tracing::info;

/// Tool that checks for the presence and correctness of security headers.
pub struct SecurityHeadersTool {
    http: reqwest::Client,
}

/// A security header we expect to be present and its metadata.
struct ExpectedHeader {
    name: &'static str,
    description: &'static str,
    severity: Severity,
    cwe: &'static str,
    remediation: &'static str,
    /// If present, the value must contain one of these substrings to be considered valid.
    valid_values: Option<Vec<&'static str>>,
}

impl SecurityHeadersTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    fn expected_headers() -> Vec<ExpectedHeader> {
        vec![
            ExpectedHeader {
                name: "strict-transport-security",
                description: "HTTP Strict Transport Security (HSTS) forces browsers to use HTTPS",
                severity: Severity::Medium,
                cwe: "CWE-319",
                remediation: "Add 'Strict-Transport-Security: max-age=31536000; includeSubDomains' header.",
                valid_values: None,
            },
            ExpectedHeader {
                name: "x-content-type-options",
                description: "Prevents MIME type sniffing",
                severity: Severity::Low,
                cwe: "CWE-16",
                remediation: "Add 'X-Content-Type-Options: nosniff' header.",
                valid_values: Some(vec!["nosniff"]),
            },
            ExpectedHeader {
                name: "x-frame-options",
                description: "Prevents clickjacking by controlling iframe embedding",
                severity: Severity::Medium,
                cwe: "CWE-1021",
                remediation: "Add 'X-Frame-Options: DENY' or 'X-Frame-Options: SAMEORIGIN' header.",
                valid_values: Some(vec!["deny", "sameorigin"]),
            },
            ExpectedHeader {
                name: "x-xss-protection",
                description: "Enables browser XSS filtering (legacy but still recommended)",
                severity: Severity::Low,
                cwe: "CWE-79",
                remediation: "Add 'X-XSS-Protection: 1; mode=block' header.",
                valid_values: None,
            },
            ExpectedHeader {
                name: "referrer-policy",
                description: "Controls how much referrer information is shared",
                severity: Severity::Low,
                cwe: "CWE-200",
                remediation: "Add 'Referrer-Policy: strict-origin-when-cross-origin' or 'no-referrer' header.",
                valid_values: None,
            },
            ExpectedHeader {
                name: "permissions-policy",
                description: "Controls browser feature access (camera, microphone, geolocation, etc.)",
                severity: Severity::Low,
                cwe: "CWE-16",
                remediation: "Add a Permissions-Policy header to restrict browser feature access. \
                    Example: 'Permissions-Policy: camera=(), microphone=(), geolocation=()'.",
                valid_values: None,
            },
        ]
    }
}

impl PentestTool for SecurityHeadersTool {
    fn name(&self) -> &str {
        "security_headers"
    }

    fn description(&self) -> &str {
        "Checks a URL for the presence and correctness of security headers: HSTS, \
         X-Content-Type-Options, X-Frame-Options, X-XSS-Protection, Referrer-Policy, \
         and Permissions-Policy."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "URL to check security headers for"
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let response = self
                .http
                .get(url)
                .send()
                .await
                .map_err(|e| CoreError::Dast(format!("Failed to fetch {url}: {e}")))?;

            let status = response.status().as_u16();
            let response_headers: HashMap<String, String> = response
                .headers()
                .iter()
                .map(|(k, v)| (k.to_string().to_lowercase(), v.to_str().unwrap_or("").to_string()))
                .collect();

            let mut findings = Vec::new();
            let mut header_results: HashMap<String, serde_json::Value> = HashMap::new();

            for expected in Self::expected_headers() {
                let header_value = response_headers.get(expected.name);

                match header_value {
                    Some(value) => {
                        let mut is_valid = true;
                        if let Some(ref valid) = expected.valid_values {
                            let lower = value.to_lowercase();
                            is_valid = valid.iter().any(|v| lower.contains(v));
                        }

                        header_results.insert(
                            expected.name.to_string(),
                            json!({
                                "present": true,
                                "value": value,
                                "valid": is_valid,
                            }),
                        );

                        if !is_valid {
                            let evidence = DastEvidence {
                                request_method: "GET".to_string(),
                                request_url: url.to_string(),
                                request_headers: None,
                                request_body: None,
                                response_status: status,
                                response_headers: Some(response_headers.clone()),
                                response_snippet: Some(format!("{}: {}", expected.name, value)),
                                screenshot_path: None,
                                payload: None,
                                response_time_ms: None,
                            };

                            let mut finding = DastFinding::new(
                                String::new(),
                                target_id.clone(),
                                DastVulnType::SecurityHeaderMissing,
                                format!("Invalid {} header value", expected.name),
                                format!(
                                    "The {} header is present but has an invalid or weak value: '{}'. \
                                     {}",
                                    expected.name, value, expected.description
                                ),
                                expected.severity.clone(),
                                url.to_string(),
                                "GET".to_string(),
                            );
                            finding.cwe = Some(expected.cwe.to_string());
                            finding.evidence = vec![evidence];
                            finding.remediation = Some(expected.remediation.to_string());
                            findings.push(finding);
                        }
                    }
                    None => {
                        header_results.insert(
                            expected.name.to_string(),
                            json!({
                                "present": false,
                                "value": null,
                                "valid": false,
                            }),
                        );

                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: url.to_string(),
                            request_headers: None,
                            request_body: None,
                            response_status: status,
                            response_headers: Some(response_headers.clone()),
                            response_snippet: Some(format!("{} header is missing", expected.name)),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::SecurityHeaderMissing,
                            format!("Missing {} header", expected.name),
                            format!(
                                "The {} header is not present in the response. {}",
                                expected.name, expected.description
                            ),
                            expected.severity.clone(),
                            url.to_string(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some(expected.cwe.to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(expected.remediation.to_string());
                        findings.push(finding);
                    }
                }
            }

            // Also check for information disclosure headers
            let disclosure_headers = ["server", "x-powered-by", "x-aspnet-version", "x-aspnetmvc-version"];
            for h in &disclosure_headers {
                if let Some(value) = response_headers.get(*h) {
                    header_results.insert(
                        format!("{h}_disclosure"),
                        json!({ "present": true, "value": value }),
                    );

                    let evidence = DastEvidence {
                        request_method: "GET".to_string(),
                        request_url: url.to_string(),
                        request_headers: None,
                        request_body: None,
                        response_status: status,
                        response_headers: Some(response_headers.clone()),
                        response_snippet: Some(format!("{h}: {value}")),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::SecurityHeaderMissing,
                        format!("Information disclosure via {h} header"),
                        format!(
                            "The {h} header exposes server technology information: '{value}'. \
                             This helps attackers fingerprint the server and find known vulnerabilities."
                        ),
                        Severity::Info,
                        url.to_string(),
                        "GET".to_string(),
                    );
                    finding.cwe = Some("CWE-200".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(format!(
                        "Remove or suppress the {h} header in your server configuration."
                    ));
                    findings.push(finding);
                }
            }

            let count = findings.len();
            info!(url, findings = count, "Security headers check complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} security header issues for {url}.")
                } else {
                    format!("All checked security headers are present and valid for {url}.")
                },
                findings,
                data: json!(header_results),
            })
        })
    }
}
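The validity rule used by `SecurityHeadersTool` — a header passes when its lowercased value contains any allowed substring, and mere presence suffices when no substrings are configured — can be sketched in isolation. This is an illustrative standalone function (`header_valid` is a hypothetical name, not crate code):

```rust
// Sketch of the header validity rule: None means presence alone is enough;
// Some(subs) means the lowercased value must contain at least one substring.
fn header_valid(value: &str, allowed: Option<&[&str]>) -> bool {
    match allowed {
        None => true, // no value constraint configured
        Some(subs) => {
            let lower = value.to_lowercase();
            subs.iter().any(|s| lower.contains(s))
        }
    }
}

fn main() {
    let allowed: &[&str] = &["deny", "sameorigin"];
    println!("{}", header_valid("SAMEORIGIN", Some(allowed))); // prints true
    println!("{}", header_valid("ALLOW-FROM https://x", Some(allowed))); // prints false
}
```

Substring matching is deliberately lenient here: it accepts values like `DENY` or `sameorigin; preload` without a full grammar for each header.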
138
compliance-dast/src/tools/sql_injection.rs
Normal file
@@ -0,0 +1,138 @@
|
|||||||
|
use compliance_core::error::CoreError;
|
||||||
|
use compliance_core::traits::dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
|
||||||
|
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
|
||||||
|
use serde_json::json;
|
||||||
|
|
||||||
|
use crate::agents::injection::SqlInjectionAgent;
|
||||||
|
|
||||||
|
/// PentestTool wrapper around the existing SqlInjectionAgent.
|
||||||
|
pub struct SqlInjectionTool {
|
||||||
|
http: reqwest::Client,
|
||||||
|
agent: SqlInjectionAgent,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl SqlInjectionTool {
|
||||||
|
pub fn new(http: reqwest::Client) -> Self {
|
||||||
|
let agent = SqlInjectionAgent::new(http.clone());
|
||||||
|
Self { http, agent }
|
||||||
|
}
|
||||||
|
|
||||||
|
fn parse_endpoints(input: &serde_json::Value) -> Vec<DiscoveredEndpoint> {
|
||||||
|
let mut endpoints = Vec::new();
|
||||||
|
if let Some(arr) = input.get("endpoints").and_then(|v| v.as_array()) {
|
||||||
|
for ep in arr {
|
||||||
|
let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default().to_string();
|
||||||
|
let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET").to_string();
|
||||||
|
let mut parameters = Vec::new();
|
||||||
|
if let Some(params) = ep.get("parameters").and_then(|v| v.as_array()) {
|
||||||
|
for p in params {
|
||||||
|
let name = p.get("name").and_then(|v| v.as_str()).unwrap_or_default().to_string();
|
||||||
|
let location = p.get("location").and_then(|v| v.as_str()).unwrap_or("query").to_string();
|
||||||
|
let param_type = p.get("param_type").and_then(|v| v.as_str()).map(String::from);
|
||||||
|
let example_value = p.get("example_value").and_then(|v| v.as_str()).map(String::from);
|
||||||
|
parameters.push(EndpointParameter {
|
||||||
|
name,
|
||||||
|
location,
|
||||||
|
param_type,
|
||||||
|
example_value,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
endpoints.push(DiscoveredEndpoint {
|
||||||
|
url,
|
||||||
|
method,
|
||||||
|
parameters,
|
||||||
|
content_type: ep.get("content_type").and_then(|v| v.as_str()).map(String::from),
|
||||||
|
requires_auth: ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
endpoints
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl PentestTool for SqlInjectionTool {
|
||||||
|
fn name(&self) -> &str {
|
||||||
|
"sql_injection_scanner"
|
||||||
|
}
|
||||||
|
|
||||||
|
fn description(&self) -> &str {
|
||||||
|
"Tests endpoints for SQL injection vulnerabilities using error-based, boolean-based, \
|
||||||
|
time-based, and union-based techniques. Provide endpoints with their parameters to test."
|
||||||
|
}
|
||||||
|
|
||||||
|
fn input_schema(&self) -> serde_json::Value {
|
||||||
|
json!({
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"endpoints": {
|
||||||
|
"type": "array",
|
||||||
|
"description": "Endpoints to test for SQL injection",
|
||||||
|
"items": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"url": { "type": "string", "description": "Full URL of the endpoint" },
|
||||||
|
"method": { "type": "string", "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"] },
|
||||||
|
"parameters": {
|
||||||
|
"type": "array",
|
||||||
|
"items": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"name": { "type": "string" },
|
||||||
|
"location": { "type": "string", "enum": ["query", "body", "header", "path", "cookie"] },
|
||||||
|
"param_type": { "type": "string" },
|
||||||
|
"example_value": { "type": "string" }
|
||||||
|
},
|
||||||
|
"required": ["name"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["url", "method", "parameters"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"custom_payloads": {
|
||||||
|
"type": "array",
|
||||||
|
"description": "Optional additional SQL injection payloads to test",
|
||||||
|
"items": { "type": "string" }
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["endpoints"]
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
fn execute<'a>(
|
||||||
|
&'a self,
|
||||||
|
input: serde_json::Value,
|
||||||
|
context: &'a PentestToolContext,
|
||||||
|
) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
|
||||||
|
Box::pin(async move {
|
||||||
|
let endpoints = Self::parse_endpoints(&input);
|
||||||
|
if endpoints.is_empty() {
|
||||||
|
return Ok(PentestToolResult {
|
||||||
|
summary: "No endpoints provided to test.".to_string(),
|
||||||
|
findings: Vec::new(),
|
||||||
|
data: json!({}),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
let dast_context = DastContext {
|
||||||
|
endpoints,
|
||||||
|
technologies: Vec::new(),
|
||||||
|
sast_hints: Vec::new(),
|
||||||
|
};
|
||||||
|
|
||||||
|
let findings = self.agent.run(&context.target, &dast_context).await?;
|
||||||
|
let count = findings.len();
|
||||||
|
|
||||||
|
Ok(PentestToolResult {
|
||||||
|
summary: if count > 0 {
|
||||||
|
format!("Found {count} SQL injection vulnerabilities.")
|
||||||
|
} else {
|
||||||
|
"No SQL injection vulnerabilities detected.".to_string()
|
||||||
|
},
|
||||||
|
findings,
|
||||||
|
data: json!({ "endpoints_tested": dast_context.endpoints.len() }),
|
||||||
|
})
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
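The injection-style scanner tools in this module (the SQL injection tool above, plus the SSRF and XSS wrappers below) all accept the same `endpoints` input shape. A hypothetical payload matching that schema; the URL and parameter names here are made up purely for illustration:

```json
{
  "endpoints": [
    {
      "url": "https://app.example.com/api/search",
      "method": "GET",
      "parameters": [
        { "name": "q", "location": "query", "param_type": "string", "example_value": "test" }
      ]
    }
  ],
  "custom_payloads": ["' OR '1'='1"]
}
```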
134  compliance-dast/src/tools/ssrf.rs  Normal file
@@ -0,0 +1,134 @@
use compliance_core::error::CoreError;
use compliance_core::traits::dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;

use crate::agents::ssrf::SsrfAgent;

/// PentestTool wrapper around the existing SsrfAgent.
pub struct SsrfTool {
    http: reqwest::Client,
    agent: SsrfAgent,
}

impl SsrfTool {
    pub fn new(http: reqwest::Client) -> Self {
        let agent = SsrfAgent::new(http.clone());
        Self { http, agent }
    }

    fn parse_endpoints(input: &serde_json::Value) -> Vec<DiscoveredEndpoint> {
        let mut endpoints = Vec::new();
        if let Some(arr) = input.get("endpoints").and_then(|v| v.as_array()) {
            for ep in arr {
                let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default().to_string();
                let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET").to_string();
                let mut parameters = Vec::new();
                if let Some(params) = ep.get("parameters").and_then(|v| v.as_array()) {
                    for p in params {
                        parameters.push(EndpointParameter {
                            name: p.get("name").and_then(|v| v.as_str()).unwrap_or_default().to_string(),
                            location: p.get("location").and_then(|v| v.as_str()).unwrap_or("query").to_string(),
                            param_type: p.get("param_type").and_then(|v| v.as_str()).map(String::from),
                            example_value: p.get("example_value").and_then(|v| v.as_str()).map(String::from),
                        });
                    }
                }
                endpoints.push(DiscoveredEndpoint {
                    url,
                    method,
                    parameters,
                    content_type: ep.get("content_type").and_then(|v| v.as_str()).map(String::from),
                    requires_auth: ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false),
                });
            }
        }
        endpoints
    }
}

impl PentestTool for SsrfTool {
    fn name(&self) -> &str {
        "ssrf_scanner"
    }

    fn description(&self) -> &str {
        "Tests endpoints for Server-Side Request Forgery (SSRF) vulnerabilities. Checks if \
         parameters accepting URLs can be exploited to access internal resources."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "endpoints": {
                    "type": "array",
                    "description": "Endpoints to test for SSRF (focus on those accepting URL parameters)",
                    "items": {
                        "type": "object",
                        "properties": {
                            "url": { "type": "string" },
                            "method": { "type": "string", "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"] },
                            "parameters": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "name": { "type": "string" },
                                        "location": { "type": "string" },
                                        "param_type": { "type": "string" },
                                        "example_value": { "type": "string" }
                                    },
                                    "required": ["name"]
                                }
                            }
                        },
                        "required": ["url", "method", "parameters"]
                    }
                },
                "custom_payloads": {
                    "type": "array",
                    "description": "Optional additional SSRF payloads (internal URLs to try)",
                    "items": { "type": "string" }
                }
            },
            "required": ["endpoints"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let endpoints = Self::parse_endpoints(&input);
            if endpoints.is_empty() {
                return Ok(PentestToolResult {
                    summary: "No endpoints provided to test.".to_string(),
                    findings: Vec::new(),
                    data: json!({}),
                });
            }

            let dast_context = DastContext {
                endpoints,
                technologies: Vec::new(),
                sast_hints: Vec::new(),
            };

            let findings = self.agent.run(&context.target, &dast_context).await?;
            let count = findings.len();

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} SSRF vulnerabilities.")
                } else {
                    "No SSRF vulnerabilities detected.".to_string()
                },
                findings,
                data: json!({ "endpoints_tested": dast_context.endpoints.len() }),
            })
        })
    }
}
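The SSRF tool's `custom_payloads` field takes internal URLs to probe. An illustrative (not exhaustive) set of classic targets, such as the cloud metadata endpoint and loopback services; the exact paths depend on the environment under test:

```json
{
  "custom_payloads": [
    "http://169.254.169.254/latest/meta-data/",
    "http://127.0.0.1:8080/",
    "http://localhost/admin"
  ]
}
```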
442  compliance-dast/src/tools/tls_analyzer.rs  Normal file
@@ -0,0 +1,442 @@
use compliance_core::error::CoreError;
use compliance_core::models::dast::{DastEvidence, DastFinding, DastVulnType};
use compliance_core::models::Severity;
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;
use tokio::net::TcpStream;
use tracing::{info, warn};

/// Tool that analyzes TLS configuration of a target.
///
/// Connects via TCP, performs a TLS handshake using `tokio-native-tls`,
/// and inspects the certificate and negotiated protocol. Also checks
/// for common TLS misconfigurations.
pub struct TlsAnalyzerTool {
    http: reqwest::Client,
}

impl TlsAnalyzerTool {
    pub fn new(http: reqwest::Client) -> Self {
        Self { http }
    }

    /// Extract the hostname from a URL.
    fn extract_host(url: &str) -> Option<String> {
        url::Url::parse(url)
            .ok()
            .and_then(|u| u.host_str().map(String::from))
    }

    /// Extract port from a URL (defaults to 443 for https).
    fn extract_port(url: &str) -> u16 {
        url::Url::parse(url)
            .ok()
            .and_then(|u| u.port())
            .unwrap_or(443)
    }

    /// Check if the server accepts a connection on a given port with a weak
    /// TLS client hello. We test SSLv3 / old protocol support by attempting a
    /// connection with the system's native-tls, which typically negotiates the
    /// best available, then inspect what was negotiated.
    async fn check_tls(host: &str, port: u16) -> Result<TlsInfo, CoreError> {
        let addr = format!("{host}:{port}");

        let tcp = TcpStream::connect(&addr)
            .await
            .map_err(|e| CoreError::Dast(format!("TCP connection to {addr} failed: {e}")))?;

        let connector = native_tls::TlsConnector::builder()
            .danger_accept_invalid_certs(true)
            .danger_accept_invalid_hostnames(true)
            .build()
            .map_err(|e| CoreError::Dast(format!("TLS connector build failed: {e}")))?;

        let connector = tokio_native_tls::TlsConnector::from(connector);

        let tls_stream = connector
            .connect(host, tcp)
            .await
            .map_err(|e| CoreError::Dast(format!("TLS handshake with {addr} failed: {e}")))?;

        let peer_cert = tls_stream
            .get_ref()
            .peer_certificate()
            .map_err(|e| CoreError::Dast(format!("Failed to get peer certificate: {e}")))?;

        let mut tls_info = TlsInfo {
            protocol_version: String::new(),
            cert_subject: String::new(),
            cert_issuer: String::new(),
            cert_not_before: String::new(),
            cert_not_after: String::new(),
            cert_expired: false,
            cert_self_signed: false,
            alpn_protocol: None,
            san_names: Vec::new(),
        };

        if let Some(cert) = peer_cert {
            let der = cert
                .to_der()
                .map_err(|e| CoreError::Dast(format!("Certificate DER encoding failed: {e}")))?;

            // native_tls doesn't give rich access, so we parse what we can
            // from the DER-encoded certificate.
            tls_info.cert_subject = "see DER certificate".to_string();
            // Attempt to parse with basic DER inspection for dates
            tls_info = Self::parse_cert_der(&der, tls_info);
        }

        Ok(tls_info)
    }

    /// Best-effort parse of DER-encoded X.509 certificate for dates and subject.
    /// This is a simplified parser; in production you would use a proper x509 crate.
    fn parse_cert_der(der: &[u8], mut info: TlsInfo) -> TlsInfo {
        // We rely on the native_tls debug output stored in cert_subject
        // and just mark fields as "see certificate details"
        if info.cert_subject.contains("self signed") || info.cert_subject.contains("Self-Signed") {
            info.cert_self_signed = true;
        }
        info
    }
}

struct TlsInfo {
    protocol_version: String,
    cert_subject: String,
    cert_issuer: String,
    cert_not_before: String,
    cert_not_after: String,
    cert_expired: bool,
    cert_self_signed: bool,
    alpn_protocol: Option<String>,
    san_names: Vec<String>,
}

impl PentestTool for TlsAnalyzerTool {
    fn name(&self) -> &str {
        "tls_analyzer"
    }

    fn description(&self) -> &str {
        "Analyzes TLS/SSL configuration of a target. Checks certificate validity, expiry, chain \
         trust, and negotiated protocols. Reports TLS misconfigurations."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "Target URL or hostname to analyze TLS configuration"
                },
                "port": {
                    "type": "integer",
                    "description": "Port to connect to (default: 443)",
                    "default": 443
                },
                "check_protocols": {
                    "type": "boolean",
                    "description": "Whether to test for old/weak protocol versions",
                    "default": true
                }
            },
            "required": ["url"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let url = input
                .get("url")
                .and_then(|v| v.as_str())
                .ok_or_else(|| CoreError::Dast("Missing required 'url' parameter".to_string()))?;

            let host = Self::extract_host(url)
                .unwrap_or_else(|| url.to_string());

            let port = input
                .get("port")
                .and_then(|v| v.as_u64())
                .map(|p| p as u16)
                .unwrap_or_else(|| Self::extract_port(url));

            let target_id = context
                .target
                .id
                .map(|oid| oid.to_hex())
                .unwrap_or_else(|| "unknown".to_string());

            let mut findings = Vec::new();
            let mut tls_data = json!({});

            // First check: does the server even support HTTPS?
            let https_url = if url.starts_with("https://") {
                url.to_string()
            } else if url.starts_with("http://") {
                url.replace("http://", "https://")
            } else {
                format!("https://{url}")
            };

            // Check if HTTP redirects to HTTPS
            let http_url = if url.starts_with("http://") {
                url.to_string()
            } else if url.starts_with("https://") {
                url.replace("https://", "http://")
            } else {
                format!("http://{url}")
            };

            match self.http.get(&http_url).send().await {
                Ok(resp) => {
                    let final_url = resp.url().to_string();
                    let redirects_to_https = final_url.starts_with("https://");
                    tls_data["http_redirects_to_https"] = json!(redirects_to_https);

                    if !redirects_to_https {
                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: http_url.clone(),
                            request_headers: None,
                            request_body: None,
                            response_status: resp.status().as_u16(),
                            response_headers: None,
                            response_snippet: Some(format!("Final URL: {final_url}")),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::TlsMisconfiguration,
                            format!("HTTP does not redirect to HTTPS for {host}"),
                            format!(
                                "HTTP requests to {host} are not redirected to HTTPS. \
                                 Users accessing the site via HTTP will have their traffic \
                                 transmitted in cleartext."
                            ),
                            Severity::Medium,
                            http_url.clone(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-319".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Configure the web server to redirect all HTTP requests to HTTPS \
                             using a 301 redirect."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
                Err(_) => {
                    tls_data["http_check_error"] = json!("Could not connect via HTTP");
                }
            }

            // Perform TLS analysis
            match Self::check_tls(&host, port).await {
                Ok(tls_info) => {
                    tls_data["host"] = json!(host);
                    tls_data["port"] = json!(port);
                    tls_data["cert_subject"] = json!(tls_info.cert_subject);
                    tls_data["cert_issuer"] = json!(tls_info.cert_issuer);
                    tls_data["cert_not_before"] = json!(tls_info.cert_not_before);
                    tls_data["cert_not_after"] = json!(tls_info.cert_not_after);
                    tls_data["alpn_protocol"] = json!(tls_info.alpn_protocol);
                    tls_data["san_names"] = json!(tls_info.san_names);

                    if tls_info.cert_expired {
                        let evidence = DastEvidence {
                            request_method: "TLS".to_string(),
                            request_url: format!("{host}:{port}"),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some(format!(
                                "Certificate expired. Not After: {}",
                                tls_info.cert_not_after
                            )),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::TlsMisconfiguration,
                            format!("Expired TLS certificate for {host}"),
                            format!(
                                "The TLS certificate for {host} has expired. \
                                 Browsers will show security warnings to users."
                            ),
                            Severity::High,
                            format!("https://{host}:{port}"),
                            "TLS".to_string(),
                        );
                        finding.cwe = Some("CWE-295".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Renew the TLS certificate. Consider using automated certificate \
                             management with Let's Encrypt or a similar CA."
                                .to_string(),
                        );
                        findings.push(finding);
                        warn!(host, "Expired TLS certificate");
                    }

                    if tls_info.cert_self_signed {
                        let evidence = DastEvidence {
                            request_method: "TLS".to_string(),
                            request_url: format!("{host}:{port}"),
                            request_headers: None,
                            request_body: None,
                            response_status: 0,
                            response_headers: None,
                            response_snippet: Some("Self-signed certificate detected".to_string()),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::TlsMisconfiguration,
                            format!("Self-signed TLS certificate for {host}"),
                            format!(
                                "The TLS certificate for {host} is self-signed and not issued by a \
                                 trusted certificate authority. Browsers will show security warnings."
                            ),
                            Severity::Medium,
                            format!("https://{host}:{port}"),
                            "TLS".to_string(),
                        );
                        finding.cwe = Some("CWE-295".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Replace the self-signed certificate with one issued by a trusted \
                             certificate authority."
                                .to_string(),
                        );
                        findings.push(finding);
                        warn!(host, "Self-signed certificate");
                    }
                }
                Err(e) => {
                    tls_data["tls_error"] = json!(e.to_string());

                    // TLS handshake failure itself is a finding
                    let evidence = DastEvidence {
                        request_method: "TLS".to_string(),
                        request_url: format!("{host}:{port}"),
                        request_headers: None,
                        request_body: None,
                        response_status: 0,
                        response_headers: None,
                        response_snippet: Some(format!("TLS error: {e}")),
                        screenshot_path: None,
                        payload: None,
                        response_time_ms: None,
                    };

                    let mut finding = DastFinding::new(
                        String::new(),
                        target_id.clone(),
                        DastVulnType::TlsMisconfiguration,
                        format!("TLS handshake failure for {host}"),
                        format!(
                            "Could not establish a TLS connection to {host}:{port}. Error: {e}"
                        ),
                        Severity::High,
                        format!("https://{host}:{port}"),
                        "TLS".to_string(),
                    );
                    finding.cwe = Some("CWE-295".to_string());
                    finding.evidence = vec![evidence];
                    finding.remediation = Some(
                        "Ensure TLS is properly configured on the server. Check that the \
                         certificate is valid and the server supports modern TLS versions."
                            .to_string(),
                    );
                    findings.push(finding);
                }
            }

            // Check strict transport security via an HTTPS request
            match self.http.get(&https_url).send().await {
                Ok(resp) => {
                    let hsts = resp.headers().get("strict-transport-security");
                    tls_data["hsts_header"] = json!(hsts.map(|v| v.to_str().unwrap_or("")));

                    if hsts.is_none() {
                        let evidence = DastEvidence {
                            request_method: "GET".to_string(),
                            request_url: https_url.clone(),
                            request_headers: None,
                            request_body: None,
                            response_status: resp.status().as_u16(),
                            response_headers: None,
                            response_snippet: Some(
                                "Strict-Transport-Security header not present".to_string(),
                            ),
                            screenshot_path: None,
                            payload: None,
                            response_time_ms: None,
                        };

                        let mut finding = DastFinding::new(
                            String::new(),
                            target_id.clone(),
                            DastVulnType::TlsMisconfiguration,
                            format!("Missing HSTS header for {host}"),
                            format!(
                                "The server at {host} does not send a Strict-Transport-Security header. \
                                 Without HSTS, browsers may allow HTTP downgrade attacks."
                            ),
                            Severity::Medium,
                            https_url.clone(),
                            "GET".to_string(),
                        );
                        finding.cwe = Some("CWE-319".to_string());
                        finding.evidence = vec![evidence];
                        finding.remediation = Some(
                            "Add the Strict-Transport-Security header with an appropriate max-age. \
                             Example: 'Strict-Transport-Security: max-age=31536000; includeSubDomains'."
                                .to_string(),
                        );
                        findings.push(finding);
                    }
                }
                Err(_) => {
                    tls_data["https_check_error"] = json!("Could not connect via HTTPS");
                }
            }

            let count = findings.len();
            info!(host = %host, findings = count, "TLS analysis complete");

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} TLS configuration issues for {host}.")
                } else {
                    format!("TLS configuration looks good for {host}.")
                },
                findings,
                data: tls_data,
            })
        })
    }
}
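`TlsAnalyzerTool::extract_host` and `extract_port` above delegate to the `url` crate. For intuition, a std-only approximation of the same host/port split; this is a sketch, not the tool's actual parser, and it ignores IPv6 literals and userinfo:

```rust
// Rough std-only equivalent of extract_host + extract_port:
// strip the scheme, keep the authority component, split off an
// explicit port, and default to 443 otherwise.
fn extract_host_port(url: &str) -> (String, u16) {
    // Drop a leading http:// or https:// if present.
    let rest = url
        .strip_prefix("https://")
        .or_else(|| url.strip_prefix("http://"))
        .unwrap_or(url);
    // Keep only the authority (everything before path/query/fragment).
    let authority = rest
        .split(|c| c == '/' || c == '?' || c == '#')
        .next()
        .unwrap_or(rest);
    // Split an explicit ":port" off the host, defaulting to 443.
    match authority.rsplit_once(':') {
        Some((host, port)) => (host.to_string(), port.parse().unwrap_or(443)),
        None => (authority.to_string(), 443),
    }
}
```

The real tool prefers `url::Url::parse` because it handles the edge cases this sketch skips.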
134  compliance-dast/src/tools/xss.rs  Normal file
@@ -0,0 +1,134 @@
use compliance_core::error::CoreError;
use compliance_core::traits::dast_agent::{DastAgent, DastContext, DiscoveredEndpoint, EndpointParameter};
use compliance_core::traits::pentest_tool::{PentestTool, PentestToolContext, PentestToolResult};
use serde_json::json;

use crate::agents::xss::XssAgent;

/// PentestTool wrapper around the existing XssAgent.
pub struct XssTool {
    http: reqwest::Client,
    agent: XssAgent,
}

impl XssTool {
    pub fn new(http: reqwest::Client) -> Self {
        let agent = XssAgent::new(http.clone());
        Self { http, agent }
    }

    fn parse_endpoints(input: &serde_json::Value) -> Vec<DiscoveredEndpoint> {
        let mut endpoints = Vec::new();
        if let Some(arr) = input.get("endpoints").and_then(|v| v.as_array()) {
            for ep in arr {
                let url = ep.get("url").and_then(|v| v.as_str()).unwrap_or_default().to_string();
                let method = ep.get("method").and_then(|v| v.as_str()).unwrap_or("GET").to_string();
                let mut parameters = Vec::new();
                if let Some(params) = ep.get("parameters").and_then(|v| v.as_array()) {
                    for p in params {
                        parameters.push(EndpointParameter {
                            name: p.get("name").and_then(|v| v.as_str()).unwrap_or_default().to_string(),
                            location: p.get("location").and_then(|v| v.as_str()).unwrap_or("query").to_string(),
                            param_type: p.get("param_type").and_then(|v| v.as_str()).map(String::from),
                            example_value: p.get("example_value").and_then(|v| v.as_str()).map(String::from),
                        });
                    }
                }
                endpoints.push(DiscoveredEndpoint {
                    url,
                    method,
                    parameters,
                    content_type: ep.get("content_type").and_then(|v| v.as_str()).map(String::from),
                    requires_auth: ep.get("requires_auth").and_then(|v| v.as_bool()).unwrap_or(false),
                });
            }
        }
        endpoints
    }
}

impl PentestTool for XssTool {
    fn name(&self) -> &str {
        "xss_scanner"
    }

    fn description(&self) -> &str {
        "Tests endpoints for Cross-Site Scripting (XSS) vulnerabilities including reflected, \
         stored, and DOM-based XSS. Provide endpoints with parameters to test."
    }

    fn input_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "endpoints": {
                    "type": "array",
                    "description": "Endpoints to test for XSS",
                    "items": {
                        "type": "object",
                        "properties": {
                            "url": { "type": "string" },
                            "method": { "type": "string", "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"] },
                            "parameters": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "name": { "type": "string" },
                                        "location": { "type": "string", "enum": ["query", "body", "header", "path", "cookie"] },
                                        "param_type": { "type": "string" },
                                        "example_value": { "type": "string" }
                                    },
                                    "required": ["name"]
                                }
                            }
                        },
                        "required": ["url", "method", "parameters"]
                    }
                },
                "custom_payloads": {
                    "type": "array",
                    "description": "Optional additional XSS payloads to test",
                    "items": { "type": "string" }
                }
            },
            "required": ["endpoints"]
        })
    }

    fn execute<'a>(
        &'a self,
        input: serde_json::Value,
        context: &'a PentestToolContext,
    ) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PentestToolResult, CoreError>> + Send + 'a>> {
        Box::pin(async move {
            let endpoints = Self::parse_endpoints(&input);
            if endpoints.is_empty() {
                return Ok(PentestToolResult {
                    summary: "No endpoints provided to test.".to_string(),
                    findings: Vec::new(),
                    data: json!({}),
                });
            }

            let dast_context = DastContext {
                endpoints,
                technologies: Vec::new(),
                sast_hints: Vec::new(),
            };

            let findings = self.agent.run(&context.target, &dast_context).await?;
            let count = findings.len();

            Ok(PentestToolResult {
                summary: if count > 0 {
                    format!("Found {count} XSS vulnerabilities.")
                } else {
                    "No XSS vulnerabilities detected.".to_string()
                },
                findings,
                data: json!({ "endpoints_tested": dast_context.endpoints.len() }),
            })
        })
    }
}
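As with the SSRF tool, `custom_payloads` for the XSS scanner is a plain string array. A couple of stock probes, shown purely as illustration of the input shape:

```json
{
  "custom_payloads": [
    "<script>alert(1)</script>",
    "\"><img src=x onerror=alert(1)>"
  ]
}
```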
@@ -31,4 +31,16 @@ impl Database {
     pub fn dast_scan_runs(&self) -> Collection<DastScanRun> {
         self.inner.collection("dast_scan_runs")
     }
+
+    pub fn pentest_sessions(&self) -> Collection<PentestSession> {
+        self.inner.collection("pentest_sessions")
+    }
+
+    pub fn attack_chain_nodes(&self) -> Collection<AttackChainNode> {
+        self.inner.collection("attack_chain_nodes")
+    }
+
+    pub fn pentest_messages(&self) -> Collection<PentestMessage> {
+        self.inner.collection("pentest_messages")
+    }
 }
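The server changes below register the new pentest tools with the MCP runtime. Assuming a standard MCP JSON-RPC client (the envelope here follows the generic MCP `tools/call` convention and is not defined in this PR), an invocation might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_pentest_sessions",
    "arguments": { "status": "running", "limit": 10 }
  }
}
```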
@@ -3,7 +3,7 @@ use rmcp::{
 };

 use crate::database::Database;
-use crate::tools::{dast, findings, sbom};
+use crate::tools::{dast, findings, pentest, sbom};

 pub struct ComplianceMcpServer {
     db: Database,
@@ -89,6 +89,54 @@ impl ComplianceMcpServer {
     ) -> Result<CallToolResult, rmcp::ErrorData> {
         dast::dast_scan_summary(&self.db, params).await
     }
+
+    // ── Pentest ─────────────────────────────────────────────
+
+    #[tool(
+        description = "List AI pentest sessions with optional filters for target, status, and strategy"
+    )]
+    async fn list_pentest_sessions(
+        &self,
+        Parameters(params): Parameters<pentest::ListPentestSessionsParams>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        pentest::list_pentest_sessions(&self.db, params).await
+    }
+
+    #[tool(description = "Get a single AI pentest session by its ID")]
+    async fn get_pentest_session(
+        &self,
+        Parameters(params): Parameters<pentest::GetPentestSessionParams>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        pentest::get_pentest_session(&self.db, params).await
+    }
+
+    #[tool(
+        description = "Get the attack chain DAG for a pentest session showing each tool invocation, its reasoning, and results"
+    )]
+    async fn get_attack_chain(
+        &self,
+        Parameters(params): Parameters<pentest::GetAttackChainParams>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        pentest::get_attack_chain(&self.db, params).await
+    }
+
+    #[tool(description = "Get chat messages from a pentest session")]
+    async fn get_pentest_messages(
+        &self,
+        Parameters(params): Parameters<pentest::GetPentestMessagesParams>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        pentest::get_pentest_messages(&self.db, params).await
+    }
+
+    #[tool(
+        description = "Get aggregated pentest statistics including running sessions, vulnerability counts, and severity distribution"
+    )]
+    async fn pentest_stats(
+        &self,
+        Parameters(params): Parameters<pentest::PentestStatsParams>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        pentest::pentest_stats(&self.db, params).await
+    }
 }

 #[tool_handler]
@@ -101,7 +149,7 @@ impl ServerHandler for ComplianceMcpServer {
             .build(),
         server_info: Implementation::from_build_env(),
         instructions: Some(
-            "Compliance Scanner MCP server. Query security findings, SBOM data, and DAST results."
+            "Compliance Scanner MCP server. Query security findings, SBOM data, DAST results, and AI pentest sessions."
                 .to_string(),
         ),
     }

@@ -1,3 +1,4 @@
 pub mod dast;
 pub mod findings;
+pub mod pentest;
 pub mod sbom;

261  compliance-mcp/src/tools/pentest.rs  Normal file
@@ -0,0 +1,261 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;

use crate::database::Database;

const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;

fn cap_limit(limit: Option<i64>) -> i64 {
    limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}

// ── List Pentest Sessions ──────────────────────────────────────

#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListPentestSessionsParams {
    /// Filter by target ID
    pub target_id: Option<String>,
    /// Filter by status: running, paused, completed, failed
    pub status: Option<String>,
    /// Filter by strategy: quick, comprehensive, targeted, aggressive, stealth
    pub strategy: Option<String>,
    /// Maximum number of results (default 50, max 200)
    pub limit: Option<i64>,
}

pub async fn list_pentest_sessions(
    db: &Database,
    params: ListPentestSessionsParams,
) -> Result<CallToolResult, McpError> {
    let mut filter = doc! {};
    if let Some(ref target_id) = params.target_id {
        filter.insert("target_id", target_id);
    }
    if let Some(ref status) = params.status {
        filter.insert("status", status);
    }
    if let Some(ref strategy) = params.strategy {
        filter.insert("strategy", strategy);
    }

    let limit = cap_limit(params.limit);

    let mut cursor = db
        .pentest_sessions()
        .find(filter)
        .sort(doc! { "started_at": -1 })
        .limit(limit)
        .await
        .map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;

    let mut results = Vec::new();
    while cursor
        .advance()
        .await
        .map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
|
||||||
|
{
|
||||||
|
let session = cursor
|
||||||
|
.deserialize_current()
|
||||||
|
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
|
||||||
|
results.push(session);
|
||||||
|
}
|
||||||
|
|
||||||
|
let json = serde_json::to_string_pretty(&results)
|
||||||
|
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
|
||||||
|
|
||||||
|
Ok(CallToolResult::success(vec![Content::text(json)]))
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Get Pentest Session ────────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
|
pub struct GetPentestSessionParams {
|
||||||
|
/// Pentest session ID (MongoDB ObjectId hex string)
|
||||||
|
pub id: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_pentest_session(
|
||||||
|
db: &Database,
|
||||||
|
params: GetPentestSessionParams,
|
||||||
|
) -> Result<CallToolResult, McpError> {
|
||||||
|
let oid = bson::oid::ObjectId::parse_str(¶ms.id)
|
||||||
|
.map_err(|e| McpError::invalid_params(format!("invalid id: {e}"), None))?;
|
||||||
|
|
||||||
|
let session = db
|
||||||
|
.pentest_sessions()
|
||||||
|
.find_one(doc! { "_id": oid })
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?
|
||||||
|
.ok_or_else(|| McpError::invalid_params("session not found", None))?;
|
||||||
|
|
||||||
|
let json = serde_json::to_string_pretty(&session)
|
||||||
|
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
|
||||||
|
|
||||||
|
Ok(CallToolResult::success(vec![Content::text(json)]))
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Get Attack Chain ───────────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
|
pub struct GetAttackChainParams {
|
||||||
|
/// Pentest session ID to get the attack chain for
|
||||||
|
pub session_id: String,
|
||||||
|
/// Maximum number of nodes (default 50, max 200)
|
||||||
|
pub limit: Option<i64>,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_attack_chain(
|
||||||
|
db: &Database,
|
||||||
|
params: GetAttackChainParams,
|
||||||
|
) -> Result<CallToolResult, McpError> {
|
||||||
|
let limit = cap_limit(params.limit);
|
||||||
|
|
||||||
|
let mut cursor = db
|
||||||
|
.attack_chain_nodes()
|
||||||
|
.find(doc! { "session_id": ¶ms.session_id })
|
||||||
|
.sort(doc! { "started_at": 1 })
|
||||||
|
.limit(limit)
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
let mut results = Vec::new();
|
||||||
|
while cursor
|
||||||
|
.advance()
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
|
||||||
|
{
|
||||||
|
let node = cursor
|
||||||
|
.deserialize_current()
|
||||||
|
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
|
||||||
|
results.push(node);
|
||||||
|
}
|
||||||
|
|
||||||
|
let json = serde_json::to_string_pretty(&results)
|
||||||
|
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
|
||||||
|
|
||||||
|
Ok(CallToolResult::success(vec![Content::text(json)]))
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Get Pentest Messages ───────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
|
pub struct GetPentestMessagesParams {
|
||||||
|
/// Pentest session ID
|
||||||
|
pub session_id: String,
|
||||||
|
/// Maximum number of messages (default 50, max 200)
|
||||||
|
pub limit: Option<i64>,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_pentest_messages(
|
||||||
|
db: &Database,
|
||||||
|
params: GetPentestMessagesParams,
|
||||||
|
) -> Result<CallToolResult, McpError> {
|
||||||
|
let limit = cap_limit(params.limit);
|
||||||
|
|
||||||
|
let mut cursor = db
|
||||||
|
.pentest_messages()
|
||||||
|
.find(doc! { "session_id": ¶ms.session_id })
|
||||||
|
.sort(doc! { "created_at": 1 })
|
||||||
|
.limit(limit)
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
let mut results = Vec::new();
|
||||||
|
while cursor
|
||||||
|
.advance()
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
|
||||||
|
{
|
||||||
|
let msg = cursor
|
||||||
|
.deserialize_current()
|
||||||
|
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
|
||||||
|
results.push(msg);
|
||||||
|
}
|
||||||
|
|
||||||
|
let json = serde_json::to_string_pretty(&results)
|
||||||
|
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
|
||||||
|
|
||||||
|
Ok(CallToolResult::success(vec![Content::text(json)]))
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Pentest Stats ──────────────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
|
pub struct PentestStatsParams {
|
||||||
|
/// Filter stats by target ID
|
||||||
|
pub target_id: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn pentest_stats(
|
||||||
|
db: &Database,
|
||||||
|
params: PentestStatsParams,
|
||||||
|
) -> Result<CallToolResult, McpError> {
|
||||||
|
let mut base_filter = doc! {};
|
||||||
|
if let Some(ref target_id) = params.target_id {
|
||||||
|
base_filter.insert("target_id", target_id);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Count running sessions
|
||||||
|
let mut running_filter = base_filter.clone();
|
||||||
|
running_filter.insert("status", "running");
|
||||||
|
let running = db
|
||||||
|
.pentest_sessions()
|
||||||
|
.count_documents(running_filter)
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
// Count total sessions
|
||||||
|
let total_sessions = db
|
||||||
|
.pentest_sessions()
|
||||||
|
.count_documents(base_filter.clone())
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
// Get findings for these sessions — query DAST findings with session_id set
|
||||||
|
let mut findings_filter = doc! { "session_id": { "$ne": null } };
|
||||||
|
if let Some(ref target_id) = params.target_id {
|
||||||
|
findings_filter.insert("target_id", target_id);
|
||||||
|
}
|
||||||
|
let total_findings = db
|
||||||
|
.dast_findings()
|
||||||
|
.count_documents(findings_filter.clone())
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
let mut exploitable_filter = findings_filter.clone();
|
||||||
|
exploitable_filter.insert("exploitable", true);
|
||||||
|
let exploitable = db
|
||||||
|
.dast_findings()
|
||||||
|
.count_documents(exploitable_filter)
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
|
||||||
|
// Severity counts
|
||||||
|
let mut severity = serde_json::Map::new();
|
||||||
|
for sev in ["critical", "high", "medium", "low", "info"] {
|
||||||
|
let mut sf = findings_filter.clone();
|
||||||
|
sf.insert("severity", sev);
|
||||||
|
let count = db
|
||||||
|
.dast_findings()
|
||||||
|
.count_documents(sf)
|
||||||
|
.await
|
||||||
|
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
|
||||||
|
severity.insert(sev.to_string(), serde_json::json!(count));
|
||||||
|
}
|
||||||
|
|
||||||
|
let summary = serde_json::json!({
|
||||||
|
"running_sessions": running,
|
||||||
|
"total_sessions": total_sessions,
|
||||||
|
"total_findings": total_findings,
|
||||||
|
"exploitable_findings": exploitable,
|
||||||
|
"severity_distribution": severity,
|
||||||
|
});
|
||||||
|
|
||||||
|
let json = serde_json::to_string_pretty(&summary)
|
||||||
|
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
|
||||||
|
|
||||||
|
Ok(CallToolResult::success(vec![Content::text(json)]))
|
||||||
|
}
|
||||||
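An editor's note on the per-severity loop above: `pentest_stats` issues one `count_documents` query for each of the five severity levels. Once findings are in memory, the same distribution can be computed in a single pass; a std-only Rust sketch of that counting logic (a hypothetical helper, not part of this PR):

```rust
use std::collections::BTreeMap;

// Hypothetical in-memory version of the per-severity counting loop:
// one pass over findings instead of five count_documents round trips.
fn severity_distribution<'a, I>(severities: I) -> BTreeMap<&'a str, u64>
where
    I: IntoIterator<Item = &'a str>,
{
    // Pre-seed all five levels so absent severities report 0, matching
    // the JSON shape produced by pentest_stats.
    let mut counts: BTreeMap<&'a str, u64> = ["critical", "high", "medium", "low", "info"]
        .into_iter()
        .map(|s| (s, 0u64))
        .collect();
    for sev in severities {
        if let Some(c) = counts.get_mut(sev) {
            *c += 1;
        }
    }
    counts
}
```

The equivalent server-side option would be a single `$group` aggregation on `severity`, trading the five round trips for one.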
@@ -30,6 +30,7 @@ export default defineConfig({
         items: [
           { text: 'Dashboard Overview', link: '/features/overview' },
           { text: 'DAST Scanning', link: '/features/dast' },
+          { text: 'AI Pentest', link: '/features/pentest' },
           { text: 'AI Chat', link: '/features/ai-chat' },
           { text: 'Code Knowledge Graph', link: '/features/graph' },
           { text: 'MCP Integration', link: '/features/mcp-server' },
@@ -67,7 +67,7 @@ Navigate to **DAST > Findings** to see all discovered vulnerabilities. Each find

 | Column | Description |
 |--------|-------------|
-| Severity | Critical, High, Medium, or Low |
+| Severity | Critical, High, Medium, Low, or Info |
 | Type | Vulnerability category (SQL Injection, XSS, SSRF, etc.) |
 | Title | Description of the vulnerability |
 | Endpoint | The HTTP path that is vulnerable |
@@ -76,6 +76,19 @@ Navigate to **DAST > Findings** to see all discovered vulnerabilities. Each find

 Click a finding to see full details including the CWE identifier, vulnerable parameter, remediation guidance, and evidence showing the exact request/response pairs that triggered the finding.
+
+### Filtering Findings
+
+The findings page provides several filters to help you focus on what matters:
+
+| Filter | Description |
+|--------|-------------|
+| **Search** | Free-text search across finding titles and descriptions |
+| **Severity** | Filter by severity level (Critical, High, Medium, Low, Info) |
+| **Vulnerability Type** | Filter by vulnerability category -- supports all 21 DAST vulnerability types including SQL Injection, XSS, SSRF, CORS Misconfiguration, CSP Bypass, and more |
+| **Exploitable** | Show only confirmed-exploitable findings, or only unconfirmed |
+
+Filters can be combined. A count indicator shows how many findings match the current filters out of the total (e.g. "Showing 12 of 76 findings"). When no findings match the active filters, a message distinguishes between "no findings exist" and "no findings match your current filters."
+
 ::: tip
 Findings marked as **Confirmed** exploitable were verified with a successful attack payload. **Unconfirmed** findings show suspicious behavior that may indicate a vulnerability but could not be fully exploited.
 :::
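Editor's sketch: the combinable filters documented in the hunk above reduce to an AND of optional predicates, where an unset filter matches everything. A std-only Rust illustration (the `Finding` and `FindingFilter` names are hypothetical, not taken from the codebase):

```rust
// Hypothetical mirror of the dashboard's combinable finding filters:
// every filter that is set must match (filters AND together).
struct Finding {
    title: String,
    severity: String,
    exploitable: bool,
}

#[derive(Default)]
struct FindingFilter {
    search: Option<String>,
    severity: Option<String>,
    exploitable: Option<bool>,
}

impl FindingFilter {
    fn matches(&self, f: &Finding) -> bool {
        // An unset filter (None) matches everything; case-insensitive search.
        self.search
            .as_ref()
            .map_or(true, |q| f.title.to_lowercase().contains(q.to_lowercase().as_str()))
            && self.severity.as_ref().map_or(true, |s| &f.severity == s)
            && self.exploitable.map_or(true, |e| f.exploitable == e)
    }
}
```

Counting `matches` over the full finding list also yields the "Showing 12 of 76 findings" indicator described above.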
110	docs/features/pentest.md	Normal file
@@ -0,0 +1,110 @@
+# AI Pentest
+
+The AI Pentest module provides autonomous, LLM-driven penetration testing against your DAST targets. It orchestrates a chain of security tools guided by AI reasoning to discover vulnerabilities that traditional scanning may miss.
+
+## Overview
+
+Navigate to **Pentest** in the sidebar to see the pentest dashboard.
+
+The dashboard shows:
+
+- Total pentest sessions run
+- Aggregate finding counts with severity breakdown
+- Tool invocation statistics and success rates
+- Session cards with status, target, strategy, and finding count
+
+## Starting a Pentest Session
+
+1. Click **New Pentest** on the dashboard
+2. Select a **DAST target** (must be configured under DAST > Targets first)
+3. Choose a **strategy**:
+
+| Strategy | Description |
+|----------|-------------|
+| **Comprehensive** | Full-spectrum test covering recon, API analysis, injection testing, auth checks, and more |
+| **Focused** | Targets specific vulnerability categories based on initial reconnaissance |
+
+4. Optionally provide an initial **message** to guide the AI's focus
+5. Click **Start** to begin the session
+
+The AI orchestrator will autonomously select and execute security tools in phases, using the output of each phase to inform the next.
+
+## Session View
+
+Click any session card to open the detailed session view. It shows:
+
+### Summary Cards
+
+- **Findings** — total vulnerabilities discovered
+- **Exploitable** — confirmed-exploitable findings
+- **Tool Invocations** — total tools executed
+- **Success Rate** — percentage of tools that completed successfully
+
+### Severity Distribution
+
+A bar showing the breakdown of findings by severity level (Critical, High, Medium, Low, Info).
+
+### Findings Tab
+
+Lists all discovered vulnerabilities with:
+
+- Severity badge and title
+- Vulnerability type and exploitability status
+- HTTP method and endpoint
+- CWE identifier
+- Description and remediation recommendation
+- Correlated SAST finding references (when available)
+
+### Attack Chain Tab
+
+A visual DAG (directed acyclic graph) showing the sequence of tools executed during the pentest. Nodes are grouped into phases:
+
+- **Phase-based layout** — tools are organized top-down by execution phase (reconnaissance, analysis, testing, exploitation, etc.)
+- **Category icons** — each tool displays an icon indicating its category (recon, XSS, SQLi, SSRF, auth, headers, cookies, TLS, CORS, etc.)
+- **Status indicators** — color-coded status dots (green = completed, yellow = running, red = failed)
+- **Finding badges** — red badge showing the number of findings produced by each tool
+- **Interactive** — hover for details, click to select, scroll to zoom, drag to pan
+
+### Stopping a Session
+
+Running sessions can be stopped from the dashboard by clicking the **Stop** button on the session card. This immediately halts all tool execution.
+
+## Exporting Reports
+
+Click **Export Report** on any session to generate a professional pentest report.
+
+### Export Process
+
+1. Enter an **encryption password** (minimum 8 characters)
+2. Click **Export** to generate and download the report
+
+The export produces a **password-protected ZIP archive** (AES-256 encryption) that can be opened with any standard archive tool (7-Zip, WinRAR, macOS Archive Utility, etc.).
+
+### Archive Contents
+
+| File | Description |
+|------|-------------|
+| `report.html` | Professional HTML report with executive summary, methodology, tools, findings with recommendations, and attack chain timeline |
+| `findings.json` | Raw findings data in JSON format for programmatic processing |
+| `attack-chain.json` | Raw attack chain data showing tool execution sequence and relationships |
+
+### Report Features
+
+The HTML report includes:
+
+- Company logo and CONFIDENTIAL banner
+- Requester information
+- Executive summary with overall risk rating
+- Severity distribution chart
+- Methodology and tools section
+- Detailed findings with severity, CWE, endpoint, evidence, remediation guidance, and linked SAST references
+- Attack chain timeline
+- Print-friendly layout (dark theme on screen, light theme for print)
+
+### Integrity Verification
+
+After export, the dashboard displays the **SHA-256 checksum** of the archive with a copy-to-clipboard button. Use this to verify the archive has not been tampered with after distribution.
+
+::: warning
+Only run pentests against applications you own or have explicit written authorization to test. AI-driven pentesting sends real attack payloads that may trigger alerts or cause unintended side effects.
+:::
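A note on the integrity check documented above: local checksum tools print the SHA-256 digest in varying letter case, and `sha256sum`-style output appends the filename after the hex digest. A small, hypothetical comparison helper (an editor's sketch, not part of the PR) that tolerates both when comparing against the dashboard's displayed checksum:

```rust
// Hypothetical helper comparing the dashboard's displayed SHA-256 checksum
// with one computed locally (e.g. the output of `sha256sum report.zip`,
// which appends "  <filename>" after the 64-char hex digest).
fn checksums_match(displayed: &str, computed: &str) -> bool {
    // Keep only the first whitespace-separated token, lowercased.
    let hex = |s: &str| {
        s.trim()
            .split_whitespace()
            .next()
            .unwrap_or("")
            .to_ascii_lowercase()
    };
    let (a, b) = (hex(displayed), hex(computed));
    // A SHA-256 digest is exactly 64 hex characters; reject anything else.
    a.len() == 64 && a.chars().all(|c| c.is_ascii_hexdigit()) && a == b
}
```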