narrow-article-96388
02/13/2023, 3:26 PM
on master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.116.212.4 --node-external-ip 10.116.212.4 --flannel-iface eth1
- I installed K3s over wireguard following this guideline https://www.inovex.de/de/blog/how-to-set-up-a-k3s-cluster-on-wireguard/ and K3s is running well:
on master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --node-external-ip 10.10.1.60 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.116.212.4 --node-external-ip 10.116.212.4 --flannel-iface eth1
- Baremetal01 (10.116.1.2) can connect to VPC A & VPC B via an IPsec site-to-site tunnel
- All instances in VPC A & B can connect to baremetal01
But I have one problem: the pods on baremetal01 cannot connect to VPC A and VPC B; they can only reach baremetal01's internal IP.
Any advice would be appreciated. Thank you.
First:
on master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.222.0.2 --node-external-ip 10.116.1.2 --flannel-iface eth1
Second (with wireguard):
on master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --node-external-ip 10.10.1.60 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.222.0.2 --node-external-ip 10.116.1.2 --flannel-iface eth1
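(A quick way to reproduce the symptom from inside a pod; the pod name, image and node name below are only examples, and 10.10.1.60 is the master's VPC address from this thread:)
kubectl run nettest --rm -it --restart=Never --image=busybox \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"baremetal01"}}' \
  -- ping -c 3 10.10.1.60
# pinging baremetal01's own internal IP (10.116.1.2) from the same pod works,
# while addresses in VPC A / VPC B do not answer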
plain-byte-79620
02/13/2023, 3:55 PM
Can you reach 10.10.1.60 from a pod on baremetal01? Could it be related to the MTU? As I understand it, you are running one wireguard tunnel to connect the nodes and another wireguard tunnel used by K3s for the pods.
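(A minimal MTU probe along the lines of that question, run on baremetal01; the sizes assume the MTU = 1420 configured on wg0, and -M do forbids fragmentation:)
ping -c 3 -M do -s 1392 10.222.0.1   # 1392 + 28 bytes of headers = 1420, should just fit wg0
ping -c 3 -M do -s 1400 10.222.0.1   # expected to fail with "message too long" if 1420 is the limit
ip link show flannel-wg              # compare flannel's tunnel MTU with the path it runs over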
narrow-article-96388
02/13/2023, 5:00 PM
Regarding 10.10.1.60, this is my wireguard setting:
on master:
[Interface]
Address = 10.222.0.1
ListenPort = 51871
PrivateKey = xxxxxxxxx/Dq1wHNBrG7Efx3U=
MTU = 1420
[Peer]
PublicKey = xxxxxxxxxx/ukusTxPnKAq1mGyc5T8uDeWixw=
Endpoint = 10.116.1.2:51871
AllowedIPs = 10.222.0.2/32
PersistentKeepalive = 29
on node:
[Interface]
Address = 10.222.0.2/32
ListenPort = 51871
PrivateKey = xxxxxxxxxDCUjdpCiojksD9l8IbUyjw9IO34=
[Peer]
PublicKey = xxxxxxxxxxx+xwraRsunUXlz43m9HID4M7x3k=
Endpoint = 10.10.1.60:51871
AllowedIPs = 10.222.0.1/32
PersistentKeepalive = 29
plain-byte-79620
02/13/2023, 5:04 PM
narrow-article-96388
02/13/2023, 5:26 PM
plain-byte-79620
02/13/2023, 5:43 PM
narrow-article-96388
02/14/2023, 1:53 AM
plain-byte-79620
02/14/2023, 7:59 AM
If you run wg show, what do you get?
narrow-article-96388
02/14/2023, 8:02 AM
root@master-baremetal01:~# wg show
interface: wg0
public key: xxxxx+cJmA+xwraRsunUXlz43m9HID4M7x3k=
private key: (hidden)
listening port: 51871
peer: xxxxxD2kd10zmdI/ukusTxPnKAq1mGyc5T8uDeWixw=
endpoint: 10.116.1.2:51871
allowed ips: 10.222.0.2/32
latest handshake: 29 seconds ago
transfer: 40.92 MiB received, 150.46 MiB sent
persistent keepalive: every 29 seconds
interface: flannel-wg
public key: xxxxxf9hYbAMltlGWD9UlKR1bD2yvABrHnu964i6lM=
private key: (hidden)
listening port: 51820
peer: xxxxx+T2hwyaWfdlh8BJOeVqBJM0wNkN1S5A8MhHk=
endpoint: 10.116.1.2:51820
allowed ips: 10.42.1.0/24
latest handshake: 1 minute, 9 seconds ago
transfer: 201.19 KiB received, 116.75 KiB sent
persistent keepalive: every 25 seconds
on the worker:
interface: flannel-wg
public key: xxxxxx+T2hwyaWfdlh8BJOeVqBJM0wNkN1S5A8MhHk=
private key: (hidden)
listening port: 51820
peer: xxxxxxx9hYbAMltlGWD9UlKR1bD2yvABrHnu964i6lM=
endpoint: 10.10.1.60:51820
allowed ips: 10.42.0.0/24
latest handshake: 52 seconds ago
transfer: 22.51 KiB received, 46.35 KiB sent
persistent keepalive: every 25 seconds
interface: wg0
public key: xxxxx2kd10zmdI/ukusTxPnKAq1mGyc5T8uDeWixw=
private key: (hidden)
listening port: 51871
peer: xxxxxxx+cJmA+xwraRsunUXlz43m9HID4M7x3k=
endpoint: 10.10.1.60:51871
allowed ips: 10.222.0.1/32
latest handshake: 12 seconds ago
transfer: 150.32 MiB received, 40.93 MiB sent
persistent keepalive: every 29 seconds
plain-byte-79620
02/14/2023, 9:05 AM
You should use the 10.222.0 IPs as node-ip on K3s.
narrow-article-96388
02/14/2023, 9:59 AM
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --cluster-cidr 10.222.0.0/24 --node-external-ip 10.10.1.60 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
plain-byte-79620
02/14/2023, 10:00 AM
Add --node-ip 10.222.0.1
narrow-article-96388
02/14/2023, 10:01 AM
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --node-ip 10.222.0.1 --cluster-cidr 10.222.0.0/24 --node-external-ip 10.10.1.60 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
plain-byte-79620
02/14/2023, 10:02 AM
And 10.222.0.2 on the worker.
narrow-article-96388
02/14/2023, 10:02 AM
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --node-ip 10.222.0.1 --cluster-cidr 10.222.0.0/24 --node-external-ip 10.10.3.224 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
Error:
Feb 14 10:55:35 ip-10-10-3-224 k3s[32292]: I0214 10:55:35.351317 32292 reconciler.go:169] "Reconciler: start to sync state"
Feb 14 10:55:35 ip-10-10-3-224 k3s[32292]: time="2023-02-14T10:55:35Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Feb 14 10:55:35 ip-10-10-3-224 systemd-networkd[416]: flannel-wg: Link UP
Feb 14 10:55:35 ip-10-10-3-224 k3s[32292]: time="2023-02-14T10:55:35Z" level=fatal msg="flannel exited: failed to set up the route: failed to add route flannel-wg: file exists"
Feb 14 10:55:35 ip-10-10-3-224 systemd-networkd[416]: flannel-wg: Gained carrier
Feb 14 10:55:35 ip-10-10-3-224 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 10:55:35 ip-10-10-3-224 systemd[1]: k3s.service: Failed with result 'exit-code'.
Feb 14 10:55:35 ip-10-10-3-224 systemd[1]: k3s.service: Consumed 6.944s CPU time.
plain-byte-79620
02/14/2023, 11:07 AM
k3s instance
narrow-article-96388
02/14/2023, 11:14 AM
10.222.0.1
plain-byte-79620
02/14/2023, 11:15 AM
Did you run k3s-uninstall.sh? The error seems related to the flannel interface created by the previous setup.
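(If a leftover flannel interface from the earlier attempt is the culprit, a rough cleanup before reinstalling could look like this; the uninstall scripts are the ones the standard K3s installer drops in /usr/local/bin:)
/usr/local/bin/k3s-uninstall.sh                        # on the server (k3s-agent-uninstall.sh on agents)
ip link show flannel-wg && ip link delete flannel-wg   # remove a stale flannel-wg interface, if one survived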
narrow-article-96388
02/14/2023, 11:19 AM
plain-byte-79620
02/14/2023, 11:27 AM
flannel-wg has 10.222.0.0 as address? Why do you specify 10.222.0.0/24 as cluster-cidr? You are creating an IP overlap. Maybe I didn't understand correctly what you want to do.
narrow-article-96388
02/15/2023, 1:50 AM
plain-byte-79620
02/15/2023, 9:47 AM
flannel-wg is installed by K3s and gets an IP from the cluster-cidr, which overlaps with the IP that you are using on your wireguard tunnel. If you want the pods' traffic to be forwarded over the wireguard tunnel that you created, you should use --flannel-backend=host-gw and configure the routes manually; if you use --flannel-backend=wireguard-native, K3s will always create an additional wireguard tunnel.
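(A sketch of that host-gw variant, reusing the flags and subnets already seen in this thread; untested, so treat it as a starting point rather than a recipe:)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - \
  --docker --advertise-address 10.222.0.1 --node-ip 10.222.0.1 \
  --flannel-iface=wg0 --flannel-backend=host-gw
# for pod traffic to cross wg0, each [Peer] also needs the other side's pod subnet in AllowedIPs,
# e.g. on the master: AllowedIPs = 10.222.0.2/32, 10.42.1.0/24
# and, if flannel does not add them itself, the routes by hand:
ip route add 10.42.1.0/24 dev wg0   # on the master, towards the worker's pods
ip route add 10.42.0.0/24 dev wg0   # on the worker, towards the master's pods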
narrow-article-96388
02/15/2023, 10:02 AM
plain-byte-79620
02/15/2023, 10:05 AM
narrow-article-96388
02/15/2023, 10:07 AM
plain-byte-79620
02/15/2023, 10:18 AM
Use 10.222.0.1 as the default gateway for all the IPs that the pods need to contact. With that you'll force all that traffic onto the wireguard tunnel.
narrow-article-96388
02/15/2023, 12:12 PM
I tried ip route add 10.10.0.0/21 via 10.222.0.1 and it still doesn't work 😞
plain-byte-79620
02/15/2023, 2:03 PM
Did you remove the cluster-cidr from the K3s config?
narrow-article-96388
02/15/2023, 4:06 PM
--cluster-cidr 10.222.0.0/24, it will be overlapping.
plain-byte-79620
02/15/2023, 6:11 PM
Don't use --cluster-cidr 10.222.0.0/24; use another cidr and use the wireguard-native backend.
I think that if you create an additional routing table with 10.222.0.1 as default gateway and then an ip rule that matches the IPs of the pods as src and 10.10.0.0/21 as destination to use that table, it should work.
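(A concrete version of that suggestion for baremetal01, whose pod subnet is 10.42.1.0/24; the table number 100 is arbitrary, and the wg0 peer's AllowedIPs would also need to cover 10.10.0.0/21 or wireguard will drop the packets:)
ip route add default via 10.222.0.1 dev wg0 table 100
ip rule add from 10.42.1.0/24 to 10.10.0.0/21 lookup 100
# the master side also needs a return route to 10.42.1.0/24 over the tunnel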
narrow-article-96388
02/17/2023, 6:38 AM
root@baremetal01:~# ip route
default via 160.202.190.49 dev bond1 proto static
10.0.0.0/8 via 10.116.1.1 dev bond0 proto static
10.42.0.0/16 dev flannel-wg scope link
10.42.1.0/24 dev cni0 proto kernel scope link src 10.42.1.1
10.116.1.0/26 dev bond0 proto kernel scope link src 10.116.1.2
10.222.0.1 dev wg0 scope link
160.26.0.0/16 via 10.116.1.1 dev bond0 proto static
160.202.190.48/28 dev bond1 proto kernel scope link src 160.202.190.54
166.8.0.0/14 via 10.116.1.1 dev bond0 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
table 220:
root@baremetal01:~# ip route list table 220
10.10.0.0/21 via 160.202.190.49 dev bond1 proto static src 10.116.1.2
172.16.0.0/19 via 160.202.190.49 dev bond1 proto static src 10.116.1.2
plain-byte-79620
02/17/2023, 9:08 AM
ip rule add from 10.42.0.0/16 to 10.10.0.0/21 table new_table
narrow-article-96388
02/20/2023, 6:19 AM
plain-byte-79620
02/20/2023, 9:22 AM
Run tcpdump on the node with the 10.10.0.83 address, filtering the ICMP packets.
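(For example, assuming the pings come from baremetal01 or its pods; both commands are standard tcpdump usage:)
tcpdump -ni any icmp
# or narrowed to the likely source addresses:
tcpdump -ni any 'icmp and (net 10.42.0.0/16 or host 10.116.1.2)'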
narrow-article-96388
02/20/2023, 3:21 PM