[Transcript note: messages exchanged between damp-vegetable-48645 and great-bear-19718 from 07/07/2022, 4:43 PM through 07/08/2022, 2:52 AM were not captured in this export.]
damp-vegetable-48645  07/08/2022, 2:56 AM
I found the custom-*-machine-plan secrets, which contain an applied-plan field that references the old node's IP in the server field. I updated those secrets with an IP from one of the remaining nodes and rebuilt Node 1 again; unfortunately, it seems a new plan secret was created when the new node tried coming online, though (currently) it's empty.

The volumes were still set to replica 3 after removing the node. I changed that to 2 to take the volumes out of the degraded state. Unfortunately, so far, nothing has been able to bring the rebuilt Node 1 back into the cluster.

Grasping at straws, I'm wondering if there's a configuration item, either on the remaining servers or within the internal K8s constructs, that still references the original Node 1's IP address and passes it into the node bootstrap config instead of using an IP from one of the remaining nodes.
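For context, a minimal sketch of how one might inspect those plan secrets, assuming they sit in the fleet-local namespace alongside the machine objects and that the applied-plan key decodes to JSON containing the server field described above. The secret name below is hypothetical; substitute the real custom-<hash>-machine-plan name.

# List the per-machine plan secrets (name pattern assumed from the message above)
kubectl get secrets -n fleet-local | grep machine-plan

# Decode one applied plan and pull out the server URL it points at
kubectl get secret custom-4287b915efeb-machine-plan -n fleet-local \
  -o jsonpath='{.data.applied-plan}' | base64 -d | grep -o '"server":"[^"]*"'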
[Transcript note: a further exchange between great-bear-19718 and damp-vegetable-48645 (07/08/2022, 3:00 AM through 3:05 AM) was not captured in this export.]
great-bear-19718  07/08/2022, 3:11 AM
kubectl get clusters.cluster -A
[…] resource? tempharv1 is not in the cluster; there is little info about it in the 2nd bundle.

damp-vegetable-48645
(⎈ |default:default)➜ nodes k get machine -n fleet-local
NAME                  CLUSTER   NODENAME    PROVIDERID         PHASE          AGE    VERSION
custom-4287b915efeb   local     tempharv3   rke2://tempharv3   Running        123m
custom-b49899265d4e   local     tempharv2   rke2://tempharv2   Running        103m
custom-e6d3236f2c36   local                                    Provisioning   79m
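The third machine above has no NODENAME or PROVIDERID and is stuck in Provisioning. A sketch of how one might dig into why, using the machine name from the output and the usual Cluster API status layout:

# Check the stuck machine's status conditions and recent events
kubectl describe machine custom-e6d3236f2c36 -n fleet-local

# Or list just the condition types, statuses, and messages
kubectl get machine custom-e6d3236f2c36 -n fleet-local \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'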
damp-vegetable-48645  07/08/2022, 1:16 PM
kubectl get clusters.cluster -A
NAMESPACE     NAME    PHASE          AGE   VERSION
fleet-local   local   Provisioning   12h

kubectl get machines -n fleet-local
NAME                  CLUSTER   NODENAME    PROVIDERID         PHASE          AGE   VERSION
custom-4287b915efeb   local     tempharv3   rke2://tempharv3   Running        11h
custom-8154486041ba   local                                    Provisioning   10h
custom-b49899265d4e   local     tempharv2   rke2://tempharv2   Running        11h
ping registry-1.docker.io
PING registry-1.docker.io (44.207.51.64) 56(84) bytes of data.
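Since the cluster object itself is also stuck in Provisioning, its conditions are worth a look, as are the agent logs on the rebuilt node. A sketch, assuming the same clusters.cluster resource queried above and a rancher-system-agent systemd unit on the node (the unit name is an assumption):

# The cluster object's conditions may say what provisioning is waiting on
kubectl describe clusters.cluster local -n fleet-local

# On the rebuilt node itself (if reachable), the agent applying the bootstrap
# plan usually logs the server URL it is trying to join
journalctl -u rancher-system-agent -f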
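The ping only shows DNS resolution and ICMP reachability. A quick additional check that exercises the registry's HTTPS endpoint as well; a Docker-compatible registry normally answers an unauthenticated request to /v2/ with 401, which still confirms connectivity:

# Expect HTTP 401 Unauthorized if the registry endpoint is reachable over HTTPS
curl -sI https://registry-1.docker.io/v2/ | head -n 1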