# k3s
b
The context of this question: I'm trying to configure k3s to use Tailscale for node-to-server communication.
I just found out about https://github.com/k3s-io/k3s/pull/7352, so for now I need to figure out how to set this up manually. It sounds like I need to set all the node-*-ip flags and the advertise address to the Tailscale IP, but I'd rather keep etcd on the local physical IP.
Also not sure which flannel-backend I should set: vxlan or host-gw? (Considering that Tailscale needs to accept and advertise the pod (and service?) CIDRs as routes.)
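(For reference, the subnets in question, assuming the k3s defaults; both are configurable at server start:)
```bash
# k3s network defaults (assumed here, not stated in the thread); flannel then
# carves a per-node /24 out of the cluster (pod) CIDR.
k3s server \
  --cluster-cidr=10.42.0.0/16 \
  --service-cidr=10.43.0.0/16
```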
b
You'll need a Tailscale flannel backend.
That's what I'm doing in the PR, by using the flannel extension backend.
b
Hi! Yes, thanks for that PR. In the meantime, though, I need to set up the config manually.
b
I hope I can merge the PR today, so that it is part of the next k3s release (in ~1 week)
You should start tailscale, log in and so on
and then use the tailscale IP as the node-ip
and also as the advertise address for kube-api
that will make all your control-plane traffic go through tailscale.
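(A minimal sketch of that advice, assuming the node's Tailscale IP is 100.64.0.10, which is a placeholder; `--advertise-address` is the k3s flag for the apiserver's advertised IP:)
```bash
# All control-plane traffic over Tailscale: register the node and advertise
# the apiserver on the Tailscale IP (100.64.0.10 is a placeholder).
sudo tailscale up
k3s server \
  --node-ip=100.64.0.10 \
  --advertise-address=100.64.0.10
```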
b
I actually used the tailscale IP as the node-external-ip (on the server nodes) and also as the advertise address, but kept the node-ip as the local IP. That seemed to make etcd work via that local IP.
b
For the data-plane traffic, maybe for the time being you are happy enough with selecting the tailscale interface as the flannel interface. Performance won't be awesome because your traffic will get double encapsulation (vxlan + tailscale), but it should work.
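(For illustration, that interim setup would look roughly like this; `tailscale0` is Tailscale's default interface name on Linux:)
```bash
# Keep the default vxlan backend but pin flannel to the Tailscale interface:
# pod traffic gets vxlan-encapsulated and then carried inside the tailnet.
k3s server --flannel-iface=tailscale0
```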
b
Yes. I was wondering if that could work with flannel's host-gw backend, if I add the k3s subnets to tailscale (as you do in your PR).
b
HA mode is tricky; etcd traffic will not work well, because raft needs very low latency to work correctly.
b
my servers will be in the same cloud/DC, so they can communicate directly, without tailscale
b
perfect
The `host-gw` backend might work if you set `tailscale0` as your flannel interface, but I am not sure; can you give it a try?
IIRC, if you keep `node-ip` as the local IP, your kube-api won't be able to reach the nodes, and thus you can't run things like `kubectl logs` or `kubectl exec`, right?
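(The symptom of that would be the apiserver failing to dial the kubelet on the registered node IP, so for example:)
```bash
# Both of these go apiserver -> kubelet (port 10250) on the node-ip the node
# registered with, and hang or time out if that IP is unreachable from the
# server. "mypod" is a placeholder name.
kubectl logs mypod
kubectl exec -it mypod -- sh
```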
b
hm, I'm getting
```
Jun 09 16:01:16 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"info","ts":"2023-06-09T16:01:16.662Z","caller":"etcdserver/server.go:845","msg":"starting etcd server","local-member-id":"261f27b021c9631d","local-server-version":"3.5.7","cluster-id":"df7481bb94076a93","cluster-version":"3.5"}
Jun 09 16:01:16 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"info","ts":"2023-06-09T16:01:16.664Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"261f27b021c9631d","initial-advertise-peer-urls":["http://127.0.0.1:2400"],"listen-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[]}
Jun 09 16:01:16 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"info","ts":"2023-06-09T16:01:16.664Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"261f27b021c9631d","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Jun 09 16:01:16 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"info","ts":"2023-06-09T16:01:16.664Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"127.0.0.1:2400"}
Jun 09 16:01:16 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"info","ts":"2023-06-09T16:01:16.664Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"127.0.0.1:2400"}
Jun 09 16:01:19 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: {"level":"warn","ts":"2023-06-09T16:01:19.315Z","logger":"etcd-client","caller":"v3@v3.5.7-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0004a2540/127.0.0.1:2399","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
Jun 09 16:01:19 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: time="2023-06-09T16:01:19Z" level=info msg="Failed to test temporary data store connection: context canceled"
Jun 09 16:01:19 rnwtrk-k3s-m0.drum-map.ts.net k3s[41206]: time="2023-06-09T16:01:19Z" level=info msg="Failed to test temporary data store connection: failed to dial endpoint http://127.0.0.1:2399 with maintenance client: context canceled"
```
currently I have only this single master
b
k3s fails to start then?
b
hold on; it kept repeating `Failed to test temporary data store connection`. After a while systemd kills it, and when it then restarts it seems to not have that issue.
ok, and then it starts up without any issue, it seems
now I need to restart a worker node and check tailscale
the worker complains about setting the kubernetes subnets as routes
I couldn't understand it from your patch; I assume I should publish and accept both the pod CIDR and the service CIDR via tailscale?
hm, on the agent I keep having an issue when it wants to add a route for its pod CIDR:
```
19818 route_network.go:92] Subnet added: 10.32.0.0/24 via 100.65.172.11
19818 route_network.go:167] Error adding route to {Ifindex: 9 Dst: 10.32.0.0/24 Src: <nil> Gw: 100.65.172.11 Flags: [] Table: 0 Realm: 0}: network is unreachable
```
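(One way to see why the kernel rejects that route, as a sketch: the gateway is a Tailscale peer rather than an on-link next hop:)
```bash
# host-gw assumes the next hop shares an L2 segment with this host. With
# Tailscale, the peer 100.65.172.11 sits behind the tailscale0 interface
# instead, so "10.32.0.0/24 via 100.65.172.11" has no usable output interface.
ip route show                 # no on-link route covers 100.65.172.11
ip route get 100.65.172.11    # resolves via tailscale0, not a local subnet
```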
b
oh
Yeah, that makes sense. I don't think you can really use host-gw.
b
seems like an issue with how tailscale implements routing
how is this solved in your PR?
b
you can't have an `xxx via xxx` type of route, because you are not L2-connected to it
You need to do everything at the L3 layer
you must use the subnet router feature of tailscale
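(A sketch of the subnet router setup, run on each node; the CIDR is a placeholder for that node's flannel-assigned pod subnet, and the advertised routes still have to be approved in the Tailscale admin console or via autoApprovers:)
```bash
# Advertise this node's pod subnet into the tailnet and accept the subnets
# advertised by the other nodes (10.42.0.0/24 is a placeholder).
sudo tailscale up --advertise-routes=10.42.0.0/24 --accept-routes
```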
b
ok, I'll need to dive a bit more into that. thanks for the heads-up
`xx via yy` is L3 btw; L2 would mean the local subnet
ok, so the tailscale subnet router means using the `--advertise-routes=` parameter, which I did.
might be an ACL issue
I think I managed to get it working with `host-gw`.
Now to let it run a bit and do some more checks, to be sure.
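(Some sanity checks one might run at this point, as a sketch; the pod IP and image are placeholders:)
```bash
kubectl get nodes -o wide    # confirm the INTERNAL-IP / EXTERNAL-IP columns look right
ip route | grep 10.32        # host-gw routes toward the other nodes' pod subnets
# cross-node pod-to-pod reachability (busybox image and target pod IP are placeholders)
kubectl run nettest --rm -it --image=busybox --restart=Never -- ping -c 3 10.32.1.5
```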
b
๐Ÿ‘ great, congratulations ๐Ÿ™‚
My PR got merged
๐Ÿ‘๐Ÿป 1
b
I think one issue was setting flannel-iface to tailscale0; instead, let the host networking do its thing (combined with advertising the right routes on each node on the tailscale network).
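(Pulling the thread together, the working manual setup described above looks roughly like this per server node; all IPs are placeholders, with 100.65.172.10 standing in for the node's Tailscale IP and 10.0.0.10 for its local IP:)
```bash
# Advertise this node's pod subnet and accept the other nodes' subnets
sudo tailscale up --advertise-routes=10.32.0.0/24 --accept-routes

# Keep etcd/raft on the low-latency local IP (--node-ip), expose the node
# over Tailscale (--node-external-ip, --advertise-address), and use host-gw
# with no --flannel-iface, so host routing plus the subnet routes do the rest.
k3s server \
  --node-ip=10.0.0.10 \
  --node-external-ip=100.65.172.10 \
  --advertise-address=100.65.172.10 \
  --flannel-backend=host-gw
```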