## Overview
Phantom is a **network reconnaissance and security auditing tool** designed for directly connected networks. It discovers devices via ARP scanning, tracks their history, detects ARP spoofing attacks, and can perform MITM interception with live packet analysis powered by a local or cloud LLM.

The GUI is built with **PySide6** (Qt framework) and uses **Scapy** for all packet-level operations.
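The ARP discovery step can be pictured as a small Scapy sketch: broadcast a who-has request for every address in the subnet and collect the replies. Helper names here are hypothetical, not Phantom's actual code, and `arp_scan` needs root privileges to send raw frames.

```python
import ipaddress

def hosts_in(cidr: str) -> list[str]:
    """Expand a CIDR block into the individual host addresses to probe."""
    return [str(h) for h in ipaddress.ip_network(cidr, strict=False).hosts()]

def arp_scan(cidr: str, timeout: float = 2.0) -> dict[str, str]:
    """Return an {ip: mac} mapping of hosts that answered an ARP who-has."""
    from scapy.all import ARP, Ether, srp  # deferred import; requires root
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
        timeout=timeout, verbose=False,
    )
    # Each answer carries the responder's IP (psrc) and MAC (hwsrc).
    return {rcv.psrc: rcv.hwsrc for _, rcv in answered}
```

`hosts_in` is only there to show what "scanning the subnet" means address-wise; Scapy's `srp` accepts the CIDR string directly.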
- **New Device & MAC Change Detection**: Highlights new devices (green) and IP-to-MAC binding changes (red) — a classic ARP spoofing indicator.
- **ARP Spoof Detection**: Passive background sniffer that alerts on conflicting ARP bindings and gateway MAC changes.
- **MITM Interception**: ARP-spoof a target to intercept its traffic; captured packets are displayed in real time with a full layer-by-layer breakdown.
- **LLM Packet Analysis**: Send any captured packet to a local [Ollama](https://ollama.com) instance or the [Anthropic API](https://www.anthropic.com) for AI-assisted analysis (protocol identification, risk assessment, credential spotting).
- **PCAP Export**: Save captured packets from a MITM session as a `.pcap` file for offline analysis in Wireshark.
- **Scan Export**: Export scan results to JSON or CSV.
- **Progress Bar**: Live progress feedback during scanning.
- **PySide6** — graphical user interface
- **netifaces** — network interface introspection
- **requests** — Ollama API streaming
- **anthropic** — Anthropic API client (installed via `requirements.txt`)
- **Ollama** (optional) — local LLM for packet analysis (`ollama serve`)
- **Anthropic API key** (optional) — set via `ANTHROPIC_API_KEY` env var or entered in the UI
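The **requests** dependency covers talking to Ollama's streaming endpoint, which emits one JSON object per line. A sketch of how such a client might look, assuming only the publicly documented `/api/generate` response shape (not Phantom's actual code):

```python
import json
from typing import Iterator

def parse_chunk(line: bytes) -> str:
    """Extract the text token from one NDJSON line of an Ollama stream."""
    chunk = json.loads(line)
    return "" if chunk.get("done") else chunk.get("response", "")

def stream_ollama(prompt: str, model: str,
                  host: str = "http://localhost:11434") -> Iterator[str]:
    """Yield response tokens from /api/generate as they arrive."""
    import requests  # deferred so the pure helper works without it
    resp = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True, timeout=120,
    )
    for line in resp.iter_lines():
        if line:  # iter_lines can yield empty keep-alive lines
            yield parse_chunk(line)
```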
---
Click **Save PCAP** to write the captured session to a `.pcap` file.

> **Note:** MITM requires root/sudo. IP forwarding is restored automatically when MITM is stopped.
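The automatic restore the note mentions is, on Linux, usually the save/flip/restore pattern on the kernel's `ip_forward` flag. A sketch under that assumption (hypothetical helpers; the write calls need root):

```python
from pathlib import Path
from typing import Callable

FWD = Path("/proc/sys/net/ipv4/ip_forward")

def forwarding_enabled(raw: str) -> bool:
    """Interpret the kernel flag: the file contains '1\\n' when forwarding is on."""
    return raw.strip() == "1"

def with_forwarding(run_mitm: Callable[[], None]) -> None:
    """Enable IP forwarding for the MITM session, then restore the old value."""
    previous = FWD.read_text()   # remember the pre-session setting
    FWD.write_text("1")          # let intercepted traffic flow through us
    try:
        run_mitm()
    finally:
        FWD.write_text(previous)  # restored even if the session crashes
```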
### 4. LLM packet analysis
Select a captured packet in the MITM window, then choose a **Provider**:
#### Ollama (local)
Requires [Ollama](https://ollama.com) running locally (`ollama serve`) with at least one model pulled.
1. Set **Provider** to **Ollama (local)**.
2. Choose a model from the **Model** drop-down (populated automatically). Click **↻** to refresh after pulling a new model.
3. Optionally add context in the **Context** field (e.g. `"this is a smart TV"`).
4. Click **Analyse with LLM**.
> **Tip:** Any model available via `ollama list` can be used. Smaller models respond faster; larger ones give more detailed analysis.
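The drop-down refresh presumably maps onto Ollama's `/api/tags` endpoint, which lists installed models. A minimal sketch, assuming only the publicly documented response shape:

```python
def model_names(tags_json: dict) -> list[str]:
    """Pull the model names out of an /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_models(host: str = "http://localhost:11434") -> list[str]:
    """Fetch the installed models from a running Ollama instance."""
    import requests  # deferred import; only needed for the live call
    return model_names(requests.get(f"{host}/api/tags", timeout=5).json())
```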
#### Anthropic API (cloud)
Requires an [Anthropic API key](https://console.anthropic.com).
1. Set **Provider** to **Anthropic**.
2. Choose a model (`claude-opus-4-6`, `claude-sonnet-4-6`, or `claude-haiku-4-5`).
3. Enter your API key in the **API key** field (or set `ANTHROPIC_API_KEY` in the environment and it will pre-fill automatically).
4. Optionally add context, then click **Analyse with LLM**.
> **Tip:** `claude-haiku-4-5` is fastest and cheapest for quick checks; `claude-opus-4-6` gives the most thorough analysis.
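Under the hood this maps onto the `anthropic` client's streaming interface. A sketch of what the call might look like; the prompt wording and helper names are illustrative, and the SDK reads `ANTHROPIC_API_KEY` from the environment:

```python
def build_prompt(summary: str, context: str = "") -> str:
    """Assemble the analysis request sent to the model (wording is illustrative)."""
    prompt = f"Analyse this captured packet:\n{summary}"
    if context:
        prompt += f"\nOperator context: {context}"
    return prompt

def analyse_packet(summary: str, context: str = "",
                   model: str = "claude-haiku-4-5") -> str:
    """Stream an analysis from the Anthropic API and return the full text."""
    import anthropic  # deferred import; requires the anthropic package
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically
    parts = []
    with client.messages.stream(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": build_prompt(summary, context)}],
    ) as stream:
        for text in stream.text_stream:
            parts.append(text)  # tokens arrive incrementally, as in the UI
    return "".join(parts)
```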
The analysis opens in a dedicated window and streams token by token. Use **Copy analysis** to copy the result to the clipboard.
The LLM identifies protocol/service, flags security-relevant observations (plaintext credentials, CVE patterns, suspicious beaconing), and provides a risk rating.