
Commit 5d4eb49

Automated build from blog source
1 parent 52602c4 commit 5d4eb49

2 files changed: 148 additions & 0 deletions


tech_blog.html

Lines changed: 1 addition & 0 deletions
@@ -44,6 +44,7 @@ <h1 id="titl"></h1>
 
     <script>
       const blogs_map = [
+        [5, "What you dont know (in networking) can hurt you", "28 Aug 2025"],
         [4, "Be careful of Python for loops (or Python in general)", "29 Jul 2025"],
         [3, "Downsides of Alpine linux", "27 Jun 2025"],
         [2, "Create a custom ORM", "27 Jun 2025"],

tech_blog/5.html

Lines changed: 147 additions & 0 deletions
@@ -0,0 +1,147 @@
<p>I wanted to create my first WireGuard tunnel so I could learn more
about networking and tunneling, and host my own data center. It actually
worked pretty well, except for a few things. That was back in March; it
was not until now that I was able to fix those things.</p>
<p>So I resigned myself to keeping things as simple as possible, just so
I could get a proof of concept working. I wanted a cloud VPS with a
public IP to be the server/router, and to have all the client traffic go
through it.</p>
<h2 id="issues">Issues</h2>
<p>I got it set up and was able to do just about everything, except for
three things:</p>
<ul>
<li>ssh into servers not connected to the VPN, or into the VPN IP itself
(it just hangs)</li>
<li>push things to git servers</li>
<li>in an SSH session from client to client, typing is really jumpy and
laggy</li>
</ul>
<h2 id="configuration">Configuration</h2>
<p>Server</p>
<pre><code>[Interface]
PrivateKey = &lt;server_private_key&gt;
Address = 10.66.66.1/24
ListenPort = &lt;port&gt;
PostUp = iptables -I INPUT -p udp --dport &lt;port&gt; -j ACCEPT
PostUp = iptables -I FORWARD -i wg0 -j ACCEPT
PostUp = iptables -I FORWARD -i &lt;physical_interface_name&gt; -o wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o &lt;physical_interface_name&gt; -j MASQUERADE

PostDown = iptables -D INPUT -p udp --dport &lt;port&gt; -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i &lt;physical_interface_name&gt; -o wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o &lt;physical_interface_name&gt; -j MASQUERADE

[Peer]
PublicKey = &lt;client1_pub_key&gt;
AllowedIPs = 10.66.66.2/32

[Peer]
PublicKey = &lt;client2_pub_key&gt;
AllowedIPs = 10.66.66.3/32</code></pre>
<p>client1</p>
<pre><code>[Interface]
PrivateKey = &lt;client1_private_key&gt;
Address = 10.66.66.2/32

[Peer]
PublicKey = &lt;server_pub_key&gt;
AllowedIPs = 0.0.0.0/0
Endpoint = &lt;server_physical_ip&gt;:&lt;port&gt;</code></pre>
<p>client2</p>
<pre><code>[Interface]
PrivateKey = &lt;client2_private_key&gt;
Address = 10.66.66.3/32

[Peer]
PublicKey = &lt;server_pub_key&gt;
AllowedIPs = 0.0.0.0/0
Endpoint = &lt;server_physical_ip&gt;:&lt;port&gt;</code></pre>
<p>If you are experienced with networking and/or tunnels, you may
already know the issue. To investigate the SSH hangs, I tried
<code>ssh -v</code>:</p>
<pre><code>$ ssh -v &lt;username&gt;@&lt;vpn_server_vpn_ip&gt;
... connection established
... authenticating
... loading keys
... algorithm negotiation
expecting SSH2_MSG_KEX_ECDH_REPLY
&lt;hangs&gt;</code></pre>
<p>Hmm, am I breaking the SSH server somehow, so that it never sends
back the ECDH reply? The server is still up.</p>
<p>I guess this is OK for now, because I can still ssh when off the
VPN.</p>
<p>For the second issue, pushing with git, I had no idea what to do, so
I moved on to the third issue, which I thought would be more fun to
debug: the jumpiness.</p>
<h2 id="jumpyness">Jumpiness</h2>
<p>Doing a simple ping from client to client, I see weird things.</p>
<p>From client1</p>
<pre><code>$ ping client2
64 bytes from &lt;ip&gt;: icmp_seq=1 ttl=61 time=219 ms
64 bytes from &lt;ip&gt;: icmp_seq=2 ttl=61 time=136 ms
64 bytes from &lt;ip&gt;: icmp_seq=3 ttl=61 time=363 ms
64 bytes from &lt;ip&gt;: icmp_seq=4 ttl=61 time=107 ms
64 bytes from &lt;ip&gt;: icmp_seq=5 ttl=61 time=105 ms
64 bytes from &lt;ip&gt;: icmp_seq=6 ttl=61 time=125 ms
64 bytes from &lt;ip&gt;: icmp_seq=7 ttl=61 time=250 ms
64 bytes from &lt;ip&gt;: icmp_seq=9 ttl=61 time=496 ms</code></pre>
<p>Yeah, my eyes and fingers are not deceiving me, there is definitely
something wrong. A ping sent one second after another takes five times
as long, which is not normal.</p>
<p>One thing I thought of is that it is quite a circuitous route: the
traffic goes from the Pacific Northwest to the Bay Area and back, so
there could be a lot of network congestion compared to just hosting the
VPN server in Oregon or somewhere closer. This turned out not to be
true, because of this:</p>
<p>From client1</p>
<pre><code>$ ping vpn_server_private_ip
64 bytes from 10.66.67.1: icmp_seq=1 ttl=62 time=36.6 ms
64 bytes from 10.66.67.1: icmp_seq=2 ttl=62 time=33.8 ms
64 bytes from 10.66.67.1: icmp_seq=3 ttl=62 time=37.4 ms
64 bytes from 10.66.67.1: icmp_seq=4 ttl=62 time=34.7 ms
64 bytes from 10.66.67.1: icmp_seq=5 ttl=62 time=34.2 ms
64 bytes from 10.66.67.1: icmp_seq=6 ttl=62 time=34.2 ms</code></pre>
<p>Pinging the VPN server gives a steady round trip, and both clients
are in the Pacific Northwest, so why is client to client so much
jumpier?</p>
<p>Could it be routing rules? I don't have much experience with those,
so I wanted to first rule out everything I knew how to measure.</p>
<h2 id="speedtest">Speedtest</h2>
<p>If you have access to two servers, you can run a speed test between
them. I used iperf3. Here is a speed test from client to client:</p>
<pre><code>Interval        Transfer     Bitrate        Retr  Cwnd
0.00-1.00 sec   1.30 MBytes  10.9 Mbits/sec   0   146 KBytes
1.00-2.00 sec   2.58 MBytes  21.6 Mbits/sec   0   258 KBytes
2.00-3.00 sec   4.23 MBytes  35.5 Mbits/sec   0   440 KBytes
3.00-4.00 sec   5.52 MBytes  46.3 Mbits/sec   0   684 KBytes
4.00-5.00 sec   4.25 MBytes  35.6 Mbits/sec  22   395 KBytes
5.00-6.00 sec   2.74 MBytes  23.0 Mbits/sec   5   306 KBytes
6.00-7.00 sec   3.74 MBytes  31.4 Mbits/sec   0   325 KBytes
7.00-8.00 sec   3.68 MBytes  30.9 Mbits/sec   0   335 KBytes
8.00-9.00 sec   3.86 MBytes  32.4 Mbits/sec   3   236 KBytes
9.00-10.00 sec  2.70 MBytes  22.6 Mbits/sec   0   281 KBytes</code></pre>
<p>If I am interpreting the data right, I am seeing around 30 Mbit/s on
average? That is kind of slow, because on speedtest.net I get
300 Mbit/s. I don't know whether these tests are apples and oranges, but
I suspect this is still slow.</p>
<h2 id="packet-fragmentation">Packet Fragmentation</h2>
<p>Looking more into WireGuard and the options it has, I discovered the
MTU option. If a packet is too big to fit through a network hop's MTU,
it gets split (fragmented). One way to test this is to send pings with
the don't-fragment flag set and see whether you get responses. If you
get no responses, the packet was dropped because it could not fit
through and was not allowed to be fragmented.</p>
<p>Here is a ping test sending the do-not-fragment flag between a client
and the server:</p>
<pre><code>$ ping -M do -s 1393 10.66.67.1
PING 10.66.67.1 (10.66.67.1) 1393(1421) bytes of data.
ping: local error: message too long, mtu=1420
ping: local error: message too long, mtu=1420</code></pre>
<p>Looks like the max MTU of the path is 1420.</p>
<pre><code>PING 10.66.67.1 (10.66.67.1) 1392(1420) bytes of data.
1400 bytes from 10.66.67.1: icmp_seq=1 ttl=62 time=33.2 ms
1400 bytes from 10.66.67.1: icmp_seq=2 ttl=62 time=33.4 ms</code></pre>
<p>The default MTU WireGuard sets is 1420, which seems correct, since
packets are not fragmented when sent or received at an MTU of 1420.</p>
<p>Trying the client-to-client test, we get the same MTU discovery
result, but with the usual worse ping times.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Currently I don't know something about networking and/or tunnels, and
it is killing me.</p>
