adamant-kite-43734
12/13/2023, 12:00 PM

adamant-kite-43734
12/13/2023, 3:48 PM

worried-electrician-89379
01/12/2024, 2:21 PM

worried-electrician-89379
01/12/2024, 2:21 PM

best-microphone-20624
01/16/2024, 9:06 PM
The output of kubectl apply -f rke2-docker-example.yaml
is included below. I believe I fixed this problem by changing the RKE2ControlPlane and RKE2ConfigTemplate CRD versions in the rke2-sample.yaml
manifest from v1beta1 to v1alpha1. Should I open an issue for this problem and submit a PR?
$ kubectl apply -f rke2-docker-example.yaml
namespace/example created
cluster.cluster.x-k8s.io/capd-rke2-test created
dockercluster.infrastructure.cluster.x-k8s.io/capd-rke2-test created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/controlplane created
machinedeployment.cluster.x-k8s.io/worker-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/worker created
configmap/capd-rke2-test-lb-config created
resource mapping not found for name: "capd-rke2-test-control-plane" namespace: "example" from "rke2-docker-example.yaml": no matches for kind "RKE2ControlPlane" in version "controlplane.cluster.x-k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "capd-rke2-test-agent" namespace: "example" from "rke2-docker-example.yaml": no matches for kind "RKE2ConfigTemplate" in version "bootstrap.cluster.x-k8s.io/v1beta1"
ensure CRDs are installed first
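The version change described above (v1beta1 to v1alpha1) can be sketched as a minimal manifest excerpt; the resource names are taken from the error output, and the rest of each resource is assumed unchanged:

```yaml
# Minimal excerpt of the fix described above: only the apiVersion lines
# change, matching the CRD versions actually installed by the provider.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1   # was v1beta1
kind: RKE2ControlPlane
metadata:
  name: capd-rke2-test-control-plane
  namespace: example
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1      # was v1beta1
kind: RKE2ConfigTemplate
metadata:
  name: capd-rke2-test-agent
  namespace: example
```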
Furthermore, with the above fix applied, the capd-rke2-test cluster's control-plane machine gets stuck in the Provisioning phase, with the CAPD infrastructure controller continuously reporting the following two lines:
I0117 04:05:20.817749 1 machine.go:476] "Setting Kubernetes node providerID" controller="dockermachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="DockerMachine" DockerMachine="example/controlplane-jw2rq" namespace="example" name="controlplane-jw2rq" reconcileID="3cbea1e6-98c0-4606-9042-2f2a9db99db3" Machine="example/capd-rke2-test-control-plane-cpmdm" Machine="example/capd-rke2-test-control-plane-cpmdm" Cluster="example/capd-rke2-test"
E0117 04:05:20.822936 1 dockermachine_controller.go:426] "failed to patch the Kubernetes node with the machine providerID" err="failed to patch node: Node \"capd-rke2-test-control-plane-cpmdm\" is invalid: spec.providerID: Forbidden: node updates may not change providerID except from \"\" to valid" controller="dockermachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="DockerMachine" DockerMachine="example/controlplane-jw2rq" namespace="example" name="controlplane-jw2rq" reconcileID="3cbea1e6-98c0-4606-9042-2f2a9db99db3" Machine="example/capd-rke2-test-control-plane-cpmdm" Machine="example/capd-rke2-test-control-plane-cpmdm" Cluster="example/capd-rke2-test"
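For context on the Forbidden error above: the Kubernetes API server treats a Node's spec.providerID as write-once, so it may only change from empty to a valid value. A minimal sketch of the field in question (node name taken from the log above; the providerID value is a placeholder in CAPD's typical format, not confirmed by this thread):

```yaml
# spec.providerID can be set once (from "" to a valid value) and never
# changed afterwards; the controller's patch is rejected because the node
# already carries a different providerID.
apiVersion: v1
kind: Node
metadata:
  name: capd-rke2-test-control-plane-cpmdm
spec:
  providerID: docker:////capd-rke2-test-control-plane-cpmdm  # placeholder
```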
Do you have any suggestions for moving the control-plane machine beyond the Provisioning phase?

rapid-van-91305
01/18/2024, 12:15 PM

rapid-van-91305
01/18/2024, 12:18 PM

creamy-wolf-46823
01/19/2024, 4:19 AM

some-addition-13540
01/22/2024, 8:36 PM
kubeconfig secret? My cluster, using manifests slightly modified from your samples folder, is "stuck" waiting for the kubeconfig, and capi-core will no longer create this for you.

some-addition-13540
01/22/2024, 8:43 PM
https://github.com/cluster-api-provider-k3s/cluster-api-k3s/blob/dadf3a5648f5eae616882f33b8b4482395a10a6d/pkg/kubeconfig/kubeconfig.go
worried-electrician-89379
01/24/2024, 8:12 AM

worried-electrician-89379
01/24/2024, 4:11 PM

rapid-van-91305
01/24/2024, 6:36 PM

rapid-van-91305
01/24/2024, 6:36 PM

worried-electrician-89379
01/24/2024, 7:06 PM

some-addition-13540
01/24/2024, 9:27 PM

some-addition-13540
01/24/2024, 9:38 PM
configs:
  "registry.example.com:5000":
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
vs. the one below, which yields an error that you are missing the TLS part of the secret:
privateRegistriesConfig:
  configs:
    registry.bar.com:
      authSecret:
        apiVersion: v1
        kind: Secret
        namespace: kube-system
        name: registry-regcred
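For comparison, a sketch of what the referenced Secret might contain. The key names (username/password) are an assumption based on typical registry auth secrets and are not confirmed by this thread; the error above suggests the provider may also expect TLS material alongside them:

```yaml
# Hypothetical layout of the registry-regcred Secret referenced above.
# Key names are an assumption; check the provider's docs for the exact
# keys expected (the error hints that TLS data may also be required).
apiVersion: v1
kind: Secret
metadata:
  name: registry-regcred
  namespace: kube-system
type: Opaque
stringData:
  username: xxxxxx   # registry username
  password: xxxxxx   # registry password
```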
some-addition-13540
01/24/2024, 10:39 PM

some-addition-13540
01/25/2024, 3:35 PM

worried-electrician-89379
01/30/2024, 1:25 PM

worried-electrician-89379
02/14/2024, 1:25 PM

worried-electrician-89379
02/16/2024, 11:45 AM

rapid-van-91305
02/16/2024, 12:05 PM
We'd rather not cut the next release (v0.3.0) just yet, as it contains an API version bump and there are a few open PRs we'd like to get in (i.e. ClusterClass support). However, we can backport this fix and do a v0.2.5 release today or Monday. Would that work?
worried-electrician-89379
02/16/2024, 12:05 PM

worried-electrician-89379
02/16/2024, 12:07 PM

worried-electrician-89379
02/16/2024, 12:07 PM

limited-football-68766
02/22/2024, 11:07 AM

worried-electrician-89379
02/22/2024, 3:39 PM

best-microphone-20624
02/23/2024, 5:30 PM
clusterctl describe cluster -n example-aws rke2-aws
reports that all components are READY except Cluster/rke2-aws -> Workers -> MachineDeployment/rke2-aws-md-0, which reports "Warning WaitingForAvailableMachines...current 0 available".
When I review the /var/log/cloud-init*.log files on the control-plane EC2 instance, I see no indication that the remote RKE2 installer was invoked via runcmd. I do see some presumably cloud-init-related entries being created and lingering in Secrets Manager, but it is unclear to me how to make sense of them. Any suggestions for troubleshooting this problem are much appreciated.

stale-orange-31544
03/05/2024, 12:36 AM
The ControlPlaneEndpoint (which in our case is a NodeBalancer) is on the SAN of the server cert, but the public/private IPs of the Linode instance are not. This works fine for normal API commands, but it throws a cert error when trying to get logs, because neither the internal nor the external IPs of the nodes are on the certs.
We were able to get around this by adding the internal IP as the node IP with a shell command, but I noticed the AWS implementation does not need to do this, so I'm wondering if anyone knows what might be off here that would force us to do this?
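The shell-command workaround described above can also be expressed declaratively in RKE2's server configuration; a minimal sketch, assuming placeholder IPs (tls-san and node-ip are standard RKE2 config keys):

```yaml
# /etc/rancher/rke2/config.yaml on the server node (sketch; IPs are
# placeholders). tls-san adds extra names/IPs to the serving cert's
# SANs; node-ip pins the address the kubelet advertises for the node.
tls-san:
  - 192.0.2.10       # instance public IP (placeholder)
  - 10.0.0.10        # instance private IP (placeholder)
node-ip: 10.0.0.10   # internal IP used as the node IP
```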