
Commit 22654e4

committed
updating...
1 parent a5a1834 commit 22654e4

1 file changed: README.md (7 additions & 4 deletions)
```diff
@@ -27,7 +27,7 @@ The GUI is built with **PySide6** (Qt framework) and uses **Scapy** for all pack
 - **Progress Bar**: Live progress feedback during scanning.
 - **Custom CIDR Target**: Scan a specific subnet instead of the local interface network.
 - **Multithreading**: All network operations run in `QThread` workers — the UI stays responsive throughout.
-- **C Extension (macOS)**: A native C extension provides accurate, sequential ARP scanning on macOS where Scapy bulk-send is unreliable.
+- **C Extension (macOS)**: A native C extension provides accurate, parallel ARP scanning on macOS where Scapy bulk-send is unreliable.
 
 ---
```
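The README's claim above is that scans run in `QThread` workers so the UI never blocks. As a rough illustration of that worker pattern (using only the standard library rather than PySide6/Scapy, and with a fabricated `arp_scan` stand-in, so every name here is hypothetical):

```python
# Hypothetical sketch of the worker-thread pattern described in the README.
# The real app uses PySide6 QThread + Scapy; this stdlib version shows the same
# idea: the scan runs off the main thread and reports results through a queue,
# so a UI event loop would stay responsive.
import queue
import threading


def arp_scan(hosts):
    """Stand-in for the real Scapy / C-extension ARP scan (fabricated data)."""
    # Pretend every host answers; a real scan would send ARP who-has requests.
    return {host: f"aa:bb:cc:00:00:{i:02x}" for i, host in enumerate(hosts)}


def scan_worker(hosts, results: queue.Queue):
    # Runs in a background thread, like a QThread worker's run() method.
    for host, mac in arp_scan(hosts).items():
        results.put((host, mac))   # one progress update per discovered host
    results.put(None)              # sentinel: scan finished


results = queue.Queue()
hosts = [f"192.168.1.{i}" for i in range(1, 4)]
worker = threading.Thread(target=scan_worker, args=(hosts, results), daemon=True)
worker.start()

found = []
while (item := results.get()) is not None:   # "main thread" drains the queue
    found.append(item)
worker.join()
print(found)
```

In the Qt version, the queue-draining loop would instead be signal/slot deliveries back to the main thread, but the division of labour is the same.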
```diff
@@ -134,14 +134,17 @@ Click **Save PCAP** to write the captured session to a `.pcap` file.
 
 
 ### 4. LLM packet analysis (Ollama)
 
-With [Ollama](https://ollama.com) running locally (`ollama serve`) and a model pulled (default: `deepseek-r1:1.5b`):
+With [Ollama](https://ollama.com) running locally (`ollama serve`) and at least one model pulled:
 
 1. Select a captured packet in the MITM window.
-2. Optionally add context in the **Context** field (e.g. `"this is a smart TV"`).
-3. Click **Analyse with LLM** — the analysis streams in token by token.
+2. Choose a model from the **Model** drop-down (populated automatically from the running Ollama instance). Click the refresh button to reload the list after pulling a new model.
+3. Optionally add context in the **Context** field (e.g. `"this is a smart TV"`).
+4. Click **Analyse with LLM** — the analysis opens in a dedicated window and streams in token by token. Use **Copy analysis** to copy the result to the clipboard.
 
 The LLM identifies protocol/service, describes what the endpoints are doing, flags security-relevant observations, and provides a risk rating.
 
+> **Tip:** Any model available via `ollama list` can be used. Smaller models (e.g. `llama3.2:1b`) respond faster; larger ones (e.g. `llama3.1:8b`) give more detailed analysis.
+
 ---
 
 ## Architecture
```

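The auto-populated **Model** drop-down mentioned in the diff can be fed from Ollama's `GET /api/tags` endpoint, which returns a JSON object whose `models` list has one entry per pulled model, each with a `name` tag. A small sketch of extracting those names (the payload below is a made-up sample, and `model_names` is a hypothetical helper, not the app's actual function):

```python
# Sketch: turning an Ollama /api/tags response into drop-down entries.
import json


def model_names(tags_json: str) -> list[str]:
    """Extract model tags from an /api/tags response body (hypothetical helper)."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]


# Fabricated sample response for illustration.
sample = '{"models":[{"name":"llama3.2:1b"},{"name":"deepseek-r1:1.5b"}]}'
print(model_names(sample))  # → ['llama3.2:1b', 'deepseek-r1:1.5b']
```

Re-running this request is all a refresh button needs to do after `ollama pull` adds a new model.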
0 commit comments
