README.md (7 additions, 4 deletions)
```diff
@@ -27,7 +27,7 @@ The GUI is built with **PySide6** (Qt framework) and uses **Scapy** for all pack
 - **Progress Bar**: Live progress feedback during scanning.
 - **Custom CIDR Target**: Scan a specific subnet instead of the local interface network.
 - **Multithreading**: All network operations run in `QThread` workers — the UI stays responsive throughout.
-- **C Extension (macOS)**: A native C extension provides accurate, sequential ARP scanning on macOS where Scapy bulk-send is unreliable.
+- **C Extension (macOS)**: A native C extension provides accurate, parallel ARP scanning on macOS where Scapy bulk-send is unreliable.
 
 ---
@@ -134,14 +134,17 @@ Click **Save PCAP** to write the captured session to a `.pcap` file.
 
 ### 4. LLM packet analysis (Ollama)
 
-With [Ollama](https://ollama.com) running locally (`ollama serve`) and a model pulled (default: `deepseek-r1:1.5b`):
+With [Ollama](https://ollama.com) running locally (`ollama serve`) and at least one model pulled:
 
 1. Select a captured packet in the MITM window.
-2. Optionally add context in the **Context** field (e.g. `"this is a smart TV"`).
-3. Click **Analyse with LLM** — the analysis streams in token by token.
+2. Choose a model from the **Model** drop-down (populated automatically from the running Ollama instance). Click **↻** to refresh the list after pulling a new model.
+3. Optionally add context in the **Context** field (e.g. `"this is a smart TV"`).
+4. Click **Analyse with LLM** — the analysis opens in a dedicated window and streams in token by token. Use **Copy analysis** to copy the result to the clipboard.
 
 The LLM identifies protocol/service, describes what the endpoints are doing, flags security-relevant observations, and provides a risk rating.
 
+> **Tip:** Any model available via `ollama list` can be used. Smaller models (e.g. `llama3.2:1b`) respond faster; larger ones (e.g. `llama3.1:8b`) give more detailed analysis.
```
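For context on the workflow the new README text describes, here is a minimal sketch (not part of this PR; the helper names and prompt wording are illustrative) of how a client can drive Ollama's public HTTP API: `/api/tags` returns the same model list as `ollama list` (used here to fill a model drop-down), and `/api/generate` with `"stream": true` emits one JSON object per token, which is what "streams in token by token" refers to.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def list_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return model names known to the running Ollama instance
    (same data as `ollama list`) — e.g. to populate a Model drop-down."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def build_prompt(packet_summary: str, context: str = "") -> str:
    """Combine a packet summary with optional user-supplied context.
    (Hypothetical prompt wording — the app's actual prompt is not shown.)"""
    prompt = f"Analyse this captured packet:\n{packet_summary}"
    if context:
        prompt += f"\nAdditional context: {context}"
    return prompt

def analyse(model: str, prompt: str, base_url: str = OLLAMA_URL):
    """Yield the analysis token by token via Ollama's streaming API.
    The response body is newline-delimited JSON; each chunk carries
    the next piece of text in its "response" field."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": True}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # one JSON object per line while streaming
            chunk = json.loads(line)
            if not chunk.get("done"):
                yield chunk["response"]
```

Usage would mirror the README steps: pick a model from `list_models()`, build the prompt from the selected packet plus the **Context** field, then consume `analyse(...)` incrementally to update the UI as tokens arrive.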