# rke2
c
There is a fair bit of Rancher stuff that still requires ipv4, IIRC. The public charts repos and such.
s
Ah, I haven’t considered that (yet). (I’ve gotten by with a local registry for images to bootstrap the rke2 install process)
Sounds like dual stack would be warranted
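For reference on the local-registry bootstrap mentioned above, this is roughly what that looks like via RKE2's registries.yaml. A sketch only: registry.internal.example stands in for an on-prem registry, and which upstreams you mirror depends on what your images actually reference.
```
# Sketch: mirror docker.io and registry.k8s.io to a local registry so the
# rke2 install/bootstrap never has to reach the public (ipv4-only) internet.
# "registry.internal.example" is a placeholder hostname.
cat <<'EOF' > /etc/rancher/rke2/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example"
  registry.k8s.io:
    endpoint:
      - "https://registry.internal.example"
EOF

# rke2 only reads registries.yaml at startup, so restart after changes.
systemctl restart rke2-server
```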
c
I would recommend sticking with ipv4 only or dual stack for now. There are too many things still not quite ready for ipv6 only.
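For context, a minimal dual-stack RKE2 server config looks roughly like the sketch below; the CIDRs are placeholders (2001:db8::/32 is the IPv6 documentation prefix), and the address family listed first becomes the cluster's primary family.
```
# Sketch of a dual-stack server config; adjust CIDRs to your environment.
cat <<'EOF' > /etc/rancher/rke2/config.yaml
cluster-cidr: "10.42.0.0/16,2001:db8:42::/56"
service-cidr: "10.43.0.0/16,2001:db8:43::/112"
EOF
```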
s
That being said, based on the documentation it seems that one should be able to deploy a downstream cluster with ipv6 in Rancher today
a
@creamy-pencil-82913 - do you have specific examples of key components of k8s (and rancher) that are not ipv6 ready?
c
mostly just external infra I believe. cloudflare, github, things that are hosted on s3 buckets.
a
oh you mean external services that k8s might depend on if say the org is cloud native?
We're heavily on-prem (virtualization, version control, image repository, etc.), and those services are largely ipv6 native
c
I am mostly thinking about stuff that Rancher uses to install or update components over the public internet. If you are airgapped and everything is ipv6-only it should be fine.
s
I would agree. I found quite a bit of success using a local container registry instead of docker.io, for instance. I think it could be an issue for specific applications that are deployed on the downstream clusters
However, I am in a spot where the following checks never clear during the "RKE2 cluster create" process:
```
waiting for probes: etcd, kube-controller-manager, kube-scheduler, kubelet
```
Outside of that, the init node does seem to have a 'functioning' set of kubernetes components
c
Are pods for those components running and healthy on the nodes in question?
s
Hmm, this is the current set of all of the pods:
```
NAMESPACE             NAME                                                              READY   STATUS      RESTARTS   AGE
cattle-fleet-system   fleet-agent-76f98f6d4c-rd89d                                      1/1     Running     0          2m27s
cattle-system         apply-system-agent-upgrader-on-wagon-pool1-10bea2e6-68c4z-hh8hs   0/1     Completed   0          2m20s
cattle-system         cattle-cluster-agent-77c959c4cd-z59nj                             1/1     Running     0          3m4s
cattle-system         helm-operation-k476k                                              0/2     Completed   0          2m1s
cattle-system         rancher-webhook-5559ff9b56-bbrzb                                  1/1     Running     0          110s
cattle-system         system-upgrade-controller-7b646bf548-fjrpw                        1/1     Running     0          2m27s
kube-system           cilium-f5whf                                                      1/1     Running     0          4m22s
kube-system           cilium-operator-f8749ccf4-dgsfz                                   0/1     Pending     0          4m22s
kube-system           cilium-operator-f8749ccf4-gb49d                                   1/1     Running     0          4m22s
kube-system           cloud-controller-manager-wagon-pool1-10bea2e6-68c4z               1/1     Running     0          4m42s
kube-system           etcd-wagon-pool1-10bea2e6-68c4z                                   1/1     Running     0          4m5s
kube-system           helm-install-rke2-cilium-9grq5                                    0/1     Completed   0          4m30s
kube-system           helm-install-rke2-coredns-gjfsm                                   0/1     Completed   0          4m30s
kube-system           helm-install-rke2-ingress-nginx-rjfh9                             0/1     Completed   0          4m30s
kube-system           helm-install-rke2-metrics-server-vkdgv                            0/1     Completed   0          4m30s
kube-system           helm-install-rke2-snapshot-controller-cjhct                       0/1     Completed   1          4m30s
kube-system           helm-install-rke2-snapshot-controller-crd-vg7tn                   0/1     Completed   0          4m30s
kube-system           helm-install-rke2-snapshot-validation-webhook-ljscx               0/1     Completed   0          4m30s
kube-system           kube-apiserver-wagon-pool1-10bea2e6-68c4z                         1/1     Running     0          4m39s
kube-system           kube-controller-manager-wagon-pool1-10bea2e6-68c4z                1/1     Running     0          4m43s
kube-system           kube-proxy-wagon-pool1-10bea2e6-68c4z                             1/1     Running     0          4m38s
kube-system           kube-scheduler-wagon-pool1-10bea2e6-68c4z                         1/1     Running     0          4m43s
kube-system           rke2-coredns-rke2-coredns-547c6db96c-dldd2                        1/1     Running     0          4m23s
kube-system           rke2-coredns-rke2-coredns-autoscaler-78f94c7d76-lqdb7             1/1     Running     0          4m23s
kube-system           rke2-ingress-nginx-controller-dw8lr                               1/1     Running     0          3m34s
kube-system           rke2-metrics-server-6c859d8846-jhgqh                              1/1     Running     0          3m50s
kube-system           rke2-snapshot-controller-7bc6cbb866-7cwhc                         1/1     Running     0          3m44s
kube-system           rke2-snapshot-validation-webhook-58f4f66646-zk6d7                 1/1     Running     0          3m50s
```
Do you know if the "waiting for probes" is a 'callback' from the downstream node -> rancher MCM?
c
no. well not quite.
The rancher-system-agent running on each node probes the components locally and sends the collected results back to the Rancher server. It's not quite a callback; it's part of a control loop. The Rancher server gives the agent a plan to execute, which may include success probes, and the agent executes it and sends the results of the commands and probes back to the server.
You might check the rancher-system-agent logs in journald to see if there are any clues as to why the probes are failing.
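For example, assuming the standard install where the agent runs as the rancher-system-agent systemd unit:
```
# On the node that is stuck, tail the system-agent logs to see the plan
# and probe results it is reporting back to Rancher.
journalctl -u rancher-system-agent -f

# The rke2-server logs on the same node are often useful alongside them.
journalctl -u rke2-server --since "1 hour ago"
```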