adamant-kite-43734
04/10/2024, 11:07 AM

future-chef-71223
04/10/2024, 3:22 PM
Labels:      beta.kubernetes.io/arch=amd64
             beta.kubernetes.io/instance-type=k3s
             beta.kubernetes.io/os=linux
             kubernetes.io/arch=amd64
             kubernetes.io/hostname=phchbs-st32018
             kubernetes.io/os=linux
             node-role.kubernetes.io/control-plane=true
             node-role.kubernetes.io/master=true
             node.kubernetes.io/instance-type=k3s
Annotations: alpha.kubernetes.io/provided-node-ip: <MY-IPV4>
             csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"phchbs-st32018"}
             flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"7e:65:80:f9:9f:cb"}
             flannel.alpha.coreos.com/backend-type: vxlan
             flannel.alpha.coreos.com/kube-subnet-manager: true
             flannel.alpha.coreos.com/public-ip: <MY-IPV4>
             k3s.io/hostname: phchbs-st32018
             k3s.io/internal-ip: <MY-IPV4>
             k3s.io/node-args:
               ["server","--disable","traefik","--flannel-iface","ens160","--data-dir","/opt/k3s","--prefer-bundled-bin","--resolv-conf","/etc...
             k3s.io/node-config-hash: VWJ7GMRNIEVMZ5NDM7NJVQP7REBOJBVASIV4F273RFU4UY3GQXIA====
             k3s.io/node-env:
               {"K3S_DATA_DIR":"/opt/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66","K3S_KUBECONFIG_MODE":"644"}
             node.alpha.kubernetes.io/ttl: 0
             volumes.kubernetes.io/controller-managed-attach-detach: true
creamy-pencil-82913
04/10/2024, 4:50 PM

creamy-pencil-82913
04/10/2024, 4:51 PM

future-chef-71223
04/10/2024, 4:55 PM
k3s-killall.sh first, and then re-run the installer with the new k3s version desired. Should I go through each version instead?
The cluster is ipv4-only, and I checked from the k3s service logs (systemd) that only ipv4 CIDR ranges are passed as --service-cluster-ip-range or --cluster-cidr
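For reference, the CIDR flags the running k3s server was started with can be checked from systemd directly; a minimal sketch, assuming a standard k3s server install where the unit is named `k3s`:

```shell
# Show the flags in the unit file itself
systemctl cat k3s | grep -E 'cluster-cidr|service-cluster-ip-range'

# Or search the journal for the CIDR ranges logged at startup
journalctl -u k3s --no-pager | grep -E 'cluster-cidr|service-cluster-ip-range'
```

If neither flag appears, k3s falls back to its defaults rather than running dual-stack, which matches what the service logs showed here.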
creamy-pencil-82913
04/10/2024, 4:56 PM

creamy-pencil-82913
04/10/2024, 4:56 PM
When attempting to upgrade to a new version of K3s, the Kubernetes version skew policy applies. Ensure that your plan does not skip intermediate minor versions when upgrading.
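The stepwise upgrade the skew policy requires can be sketched with the k3s install script; the patch releases below are illustrative placeholders, not the versions from this thread — pick the latest patch of each intermediate minor:

```shell
# Upgrade one minor version at a time, e.g. v1.26 -> v1.27 -> v1.28.
# The exact versions below are placeholders.
for ver in v1.27.13+k3s1 v1.28.9+k3s1; do
  k3s-killall.sh                          # stop k3s and all its child processes/containers
  curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="$ver" sh -
  kubectl get nodes                       # confirm the node reports the new version before continuing
done
```

`INSTALL_K3S_VERSION` pins the installer to a specific release instead of the latest stable channel.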
future-chef-71223
04/10/2024, 4:58 PM
http_proxy/https_proxy variables. Since the upgrade, it seems that all the networking operations are trying to resolve the proxy domain using ipv6, and that is failing. The workaround I found so far is to use the proxy IP directly instead of the domain, but I'm not sure it won't change in the future, and I didn't have this issue before the upgrade, although I was using the same proxy.

creamy-pencil-82913
04/10/2024, 4:58 PM

creamy-pencil-82913
04/10/2024, 5:00 PM
ip addr in the pods? Things running in pods should be smart enough to not try to connect to ipv6 addresses if they don’t have that address family available. However, you said you’re getting “could not resolve host” errors, not “could not connect to host”… which suggests that DNS lookups are failing?

future-chef-71223
04/10/2024, 5:02 PM
ip addr from a random nginx pod in the cluster (sorry for the screenshot but I can't copy/paste on this node)

future-chef-71223
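When copy/paste from the node isn't possible, the same check can be run through the API server instead; a sketch where the pod name and namespace are placeholders:

```shell
# Pod name/namespace are placeholders; 'ip -brief addr' gives a compact view
kubectl exec -n default nginx-pod -- ip -brief addr

# An fe80:: address is only link-local (present on almost any interface);
# a global ipv6 address on eth0 would indicate the pod actually got a
# dual-stack allocation.
```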
04/10/2024, 5:04 PM
> However, you said you’re getting “could not resolve host” errors, not “could not connect to host”… which suggests that DNS lookups are failing?
Yes, it's failing to resolve the domain of the HTTP proxy, but that doesn't happen if I force ipv4, for example by running curl -4 http://myproxy.com. It seems that even for DNS lookups, it's trying to resolve AAAA instead of A, and I don't have any AAAA entry for the proxy

future-chef-71223
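The A-vs-AAAA behaviour described above can be checked directly from the node or a pod; a sketch, keeping `myproxy.com` as the placeholder proxy domain from the thread:

```shell
# Compare the two record types for the proxy host
dig +short A myproxy.com       # expected: the proxy's ipv4 address
dig +short AAAA myproxy.com    # expected: empty, since no AAAA record exists

# getent shows what the system resolver actually hands to applications
getent ahostsv4 myproxy.com    # ipv4-only view
getent ahosts myproxy.com      # both families, in the order glibc would try them
```

If the AAAA query itself errors out (rather than returning an empty answer), the problem is in the resolver path, not in the record set.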
04/10/2024, 5:13 PM
From ip addr it seems like the pod has an ipv6 address as well, but why is that the case? I never configured dual-stack here

creamy-pencil-82913
04/10/2024, 5:39 PM

creamy-pencil-82913
04/10/2024, 5:40 PM
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
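A sketch of how those sysctls could be applied and persisted on the node (the drop-in file name is an arbitrary choice, not anything k3s mandates):

```shell
# Apply immediately on the host
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Persist across reboots via a sysctl drop-in (file name is arbitrary)
sudo tee /etc/sysctl.d/90-disable-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sudo sysctl --system
```

Existing pods keep the network namespaces they were created with, so they may need to be recreated before the change is visible inside them.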
future-chef-71223
04/10/2024, 5:40 PM

creamy-pencil-82913
04/10/2024, 5:44 PM

creamy-pencil-82913
04/10/2024, 5:45 PM

creamy-pencil-82913
04/10/2024, 5:45 PM

creamy-pencil-82913
04/10/2024, 5:45 PM

future-chef-71223
04/10/2024, 5:47 PM
curl or ping would fail in the pods but not on the node itself, and I get the same DNS lookup errors in containerd when pulling images

future-chef-71223
04/10/2024, 5:48 PM
> try those sysctls and see what happens
I'll give it a try tomorrow and let you know. Thanks a lot for your help here, really appreciated!
future-chef-71223
04/17/2024, 12:50 PM
sysctl ipv6 configs. Still not sure what caused it in the first place, but it wasn't related to K3s in the end.
Thanks again for your help 👍