# general
a
I'm having trouble with IPv6. I've got one cluster configured dual-stack and another configured IPv6-only. The dual-stack cluster has the following in /etc/rancher/rke2/config.yaml: cluster-cidr: "10.42.0.0/16,fd90:ee2c:c536::/56" and service-cidr: "10.43.0.0/16,fd18:84bc:7bd3::/112", and the IPv6-only cluster has cluster-cidr: "fd90:ee2c:c536::/56" and service-cidr: "fd18:84bc:7bd3::/112". On the dual-stack cluster, all pod and service IPs are IPv4 only, and as soon as the server node is initialized I lose the ability to make any IPv6 connections to the node. On the IPv6-only cluster I do get IPv6 IPs on pods and services, but as soon as the server is initialized I also lose the ability to make new IPv6 connections to the node. This happens with firewalld disabled.
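For readability, here's that dual-stack /etc/rancher/rke2/config.yaml written out (the IPv6-only cluster uses just the ULA CIDRs):

```yaml
# /etc/rancher/rke2/config.yaml on the dual-stack server node
# IPv4 CIDR first, then the randomly generated ULA prefix
cluster-cidr: "10.42.0.0/16,fd90:ee2c:c536::/56"
service-cidr: "10.43.0.0/16,fd18:84bc:7bd3::/112"

# and on the IPv6-only cluster:
# cluster-cidr: "fd90:ee2c:c536::/56"
# service-cidr: "fd18:84bc:7bd3::/112"
```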
c
how did you pick those ipv6 CIDRs? Do those CIDRs overlap with actual IPv6 subnets in your environment?
I suspect that you’re breaking IP routing on your nodes by having the CNI add routes to cluster or service CIDRs that actually exist on your network
a
They were generated randomly in the RFC approved manner, and my nodes are on an isolated subnet w/o a default route.
yeah, that's a fair assumption, but in this case I have discrete subnets for the host and the pod/service CIDRs. Here's the host CIDR: fd31:471e:fcfb::/64
c
I’d probably still check the route tables on the node after you bring it up
a
I checked the IPv6 routes on the dual-stack host after rke2-server init, and here's what I have:
- the unique global address I statically assigned on eth0
- an fd90:: route on flannel-wg-v6, twice, once with and once without the /56 mask
- finally, the link-local fe80:: dynamically generated for eth0, and this is set as the default route
I'm a relative IPv6 newb, but none of this seems problematic or a reason for rke2 not to assign both IPv4 and IPv6 to services and pods, yet I have only IPv4 on pods and services. They are, however, from the CIDRs I assigned in config.yaml.
c
There’s a note in the K3s docs that might also be relevant for RKE2, regarding IPV6 RAs: https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking
a
yeah, the RA issue doesn't apply here because I don't have a router assigning the default route, just a local network w/o a default route.
c
Just to confirm, you’re setting it via static ipv6 interface config and routes?
I’m not sure what you mean by you “don’t have a router assigning the default route”. Unless you literally have a static ipv6 IP and default route set in your network config you are most likely using RAs.
or are you saying that you literally do not have a default route in this environment? Kubernetes will not like that, you really need a default route for things to work properly. Set up a blackhole route if you do not actually have one.
see https://docs.k3s.io/installation/airgap#default-network-route - this is for ipv4 but as far as I know Kubernetes needs the same for ipv6
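For example, on a node using netplan it could look something like this (a rough sketch only; the interface name, addresses, and gateway are placeholders, not your actual values):

```yaml
# Hypothetical netplan config for a node on an isolated ULA subnet,
# with a static address and an explicit IPv6 default route.
network:
  version: 2
  ethernets:
    eth0:
      accept-ra: false                 # no router advertisements on this isolated subnet
      addresses:
        - "fd31:471e:fcfb::1/64"       # static node address
      routes:
        - to: "::/0"                   # the default route Kubernetes expects to find
          via: "fd31:471e:fcfb::100"   # whichever host on the subnet you route through
```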
a
OMG, thank you so much. I literally just came to that same idea as something to try, but from a point of pure ignorance. Thanks for confirming! So, yes, you are correct that I have a purely static and isolated IPv6 subnet, configured such that there is no default route other than the one automatically generated for the link-local address. I'm going to set up one of my other hosts as the default route and see how that goes. I appreciate the help so much.
Unfortunately that's not it. I set the default route to the host from which I SSH to my rke2 nodes (via IPv6 as well as IPv4), and connectivity is fine until I initialize rke2-server. My dual-stack VMs initialize fine and IPv4 connectivity is not disrupted, but IPv6 connectivity is disrupted and pods and services have only IPv4 addresses.
I had been using this address for my jumpbox: fd31:471e:fcfb::/64, and then fd31:471e:fcfb::1/64, fd31:471e:fcfb::2/64, etc. for my VMs, and setting the default route to fd31:471e:fcfb:: did not help. However, I decided on a whim to assign fd31:471e:fcfb::20/64 to my jumpbox and use that as the default route, and now I don't lose any IPv6 connectivity after initialization. However, I still don't see IPv6 IPs assigned to services and pods on the dual-stack VM. Just want to confirm that I should see them there, and that a config like this would do the trick: cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56" service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
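Written out as a file, that proposed dual-stack config would be:

```yaml
# /etc/rancher/rke2/config.yaml on the dual-stack server, IPv4 CIDR listed first
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
```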