WireGuard has a bit of a gotcha when running multiple independent tunnels, one of which has a default route associated with it.

Symptoms

I have two WireGuard tunnels on my system. One has a default route, and the other does not. Ping latency is surprisingly high on the tunnel without the default route.

Solution

Add a matching FwMark setting to your non-default-route tunnel. Typically this will be FwMark = 51820, but check the logs or the Table setting of your default-route tunnel to see which mark it's actually using.
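A quick way to find the mark in use is to ask wg directly (wg0 here is an assumed name for your default-route tunnel; wg prints the mark in hex, and 0xca6c is 51820 in decimal):

```shell
# Print the fwmark wg-quick assigned to the default-route tunnel.
wg show wg0 fwmark

# The policy rules it installed reference the same number.
ip rule show
```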

Technical deep dive

Let's start with some background. Imagine you have the following routing table on your system:

$ ip route
0.0.0.0/0 via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.2 metric 302
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.2 metric 302

Routing decisions are made by longest-prefix match: the most specific matching route wins. Reading this table, that means matches are effectively found from the bottom upwards.
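You can ask the kernel which route it would choose for a given destination with ip route get (the addresses here come from the example table above; real output will include extra fields like uid and cache):

```shell
# A LAN destination matches the more specific 192.168.1.0/24 route...
ip route get 192.168.1.5

# ...while anything else falls through to the 0.0.0.0/0 default route.
ip route get 8.8.8.8
```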

When OpenVPN is set up to route all traffic, it inserts a couple of extra, more specific routes to make sure no traffic exits via the raw interface (except the transport traffic itself).

At this point I'm going to be making these routes up - they should be very close, but the syntax may not be correct or complete. They'll work to illustrate this problem.

$ ip route
0.0.0.0/0 via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.2 metric 302
0.0.0.0/1 via <server tunnel IP> dev tun0
128.0.0.0/1 via <server tunnel IP> dev tun0
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.2 metric 302
<server internet IP> via 192.168.1.1

Note the addition of two routes that each cover half of the IPv4 space. This way, they're more specific than the default route (0.0.0.0/0) but together cover the same IP space. The last piece is the very specific route to just the VPN server, which matches the transport traffic.
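The half-space trick works because the very first bit of an address decides which /1 it falls under. A minimal sketch of that decision (the half_for helper is mine, purely for illustration):

```shell
# Which /1 half-space route would match this address?
# The first octet decides: 0-127 -> 0.0.0.0/1, 128-255 -> 128.0.0.0/1.
half_for() {
  first_octet=${1%%.*}                # e.g. "8" from "8.8.8.8"
  if [ "$first_octet" -lt 128 ]; then
    echo "0.0.0.0/1"
  else
    echo "128.0.0.0/1"
  fi
}

half_for 8.8.8.8        # -> 0.0.0.0/1
half_for 192.168.1.2    # -> 128.0.0.0/1
```

Either way, a /1 beats the /0 default route on specificity, so all traffic prefers the tunnel.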

Let's do the same with a WireGuard tunnel that will route all traffic.

First, the configuration we'll feed into wg-quick.

[Interface]
PrivateKey = <private key>
Address = <client tunnel IP>

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0
Endpoint = <server internet IP>

Then let's inspect our routing table:

$ ip route
0.0.0.0/0 via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.2 metric 302
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.2 metric 302

No routes to our WireGuard server! What happened?

The answer lies in how WireGuard solves the problem of routing the transport traffic. I would have expected it to do something similar to OpenVPN (creating routes specific enough that they end up in the correct order in the routing table), but it clearly doesn't. Let's check wg-quick's logs and see what it did.

# wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add <client tunnel IP> dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] wg set wg0 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -nV

Taking a look at the route add command specifically, you can see it's doing something a bit odd: it adds the default route to a separate routing table with the ID 51820. Let's print that table's contents.

$ ip route show table 51820
0.0.0.0/0 dev wg0

There it is! That's the default route we were missing before. The next piece is how traffic ends up routed with this table as opposed to the main table. You can see this being set up in the line that follows the default route creation in the log:

[#] ip -4 rule add not fwmark 51820 table 51820

Despite appearances, this is not iptables at work: it's a policy routing rule keyed on the packet's firewall mark (fwmark). It says that any packet not marked 51820 should use the alternate routing table (and thereby go out the VPN tunnel rather than any other interface on the box). The suppress_prefixlength 0 rule added right after it is consulted first and lets more specific routes in the main table (like the local 192.168.1.0/24) win, while suppressing the main table's default route. The last piece is making WireGuard's own transport traffic use the main routing table rather than going out its own interface, and WireGuard has the fwmark setting for exactly this purpose.

[#] wg set wg0 fwmark 51820

So this combination of settings causes your system to route all traffic except wg0's transport traffic out over wg0.
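Putting it together, the policy rule database ends up looking roughly like this (the priorities and the hex rendering 0xca6c of 51820 are my approximation of what wg-quick produces):

```shell
$ ip rule show
0:      from all lookup local
32764:  from all lookup main suppress_prefixlength 0
32765:  not from all fwmark 0xca6c lookup 51820
32766:  from all lookup main
32767:  from all lookup default
```

Rules are evaluated top-down: local delivery first, then the main table minus its default route (that's what suppress_prefixlength 0 does), then the catch-all rule that steers everything unmarked into table 51820.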

What happens if we add a second Wireguard tunnel with the following configuration?

[Interface]
PrivateKey = <private key>
Address = <client tunnel IP>

[Peer]
PublicKey = <server public key>
AllowedIPs = 192.168.1.0/24
Endpoint = <server internet IP>

It's kind of subtle, and you might not expect it if you haven't done policy-based routing before. The only difference between this tunnel and the first is the AllowedIPs setting: here we're only routing one specific remote network. That means there's no transport routing problem to solve; it just works, since the VPN server's address is not inside the AllowedIPs range. It also means wg-quick won't attempt to solve that problem. Let's take a look at the logs.

# wg-quick up wg1
[#] ip link add wg1 type wireguard
[#] wg setconf wg1 /dev/fd/63
[#] ip -4 address add <client tunnel IP> dev wg1
[#] ip link set mtu 1420 up dev wg1

It didn't do any of the alternate routing table setup it did before. In particular, it did not set WireGuard's fwmark.
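You can confirm this by querying the interface directly (in my experience wg reports an unset mark as "off", but treat the exact output as an assumption):

```shell
$ wg show wg1 fwmark
off
```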

See it now?

wg1 ends up nested inside of wg0. Since wg1's transport traffic carries no mark, the "not fwmark 51820" rule sends it to the alternate routing table, and it ships out via wg0. Every packet takes a detour through wg0's server, which is where the extra latency comes from. The solution is pretty simple: we only need to add FwMark to the configuration.

[Interface]
PrivateKey = <private key>
Address = <client tunnel IP>
FwMark = 51820

[Peer]
PublicKey = <server public key>
AllowedIPs = 192.168.1.0/24
Endpoint = <server internet IP>

This makes wg1's transport traffic use the main routing table: now that it is marked, it no longer matches the "not fwmark 51820" rule.
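With the fix in place, you can verify the change (interface names and the hex form 0xca6c of 51820 are assumptions based on wg-quick's defaults):

```shell
# wg1's transport packets now carry the mark...
$ wg show wg1 fwmark
0xca6c

# ...so a marked lookup should resolve via eth0, not dev wg0.
$ ip route get <server internet IP> mark 0xca6c
```

Ping latency over wg1 should drop back to normal once its transport traffic stops detouring through wg0.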