thousands-breakfast-69764
06/30/2025, 2:48 PM
able-printer-16910
06/30/2025, 2:56 PM
square-rose-18388
06/30/2025, 4:27 PM
purple-rose-91888
07/01/2025, 8:02 AM
creamy-tailor-26997
07/01/2025, 7:24 PM
faint-soccer-70506
07/01/2025, 7:31 PM
red-intern-36131
07/02/2025, 7:10 AM
boundless-scientist-9417
07/02/2025, 5:04 PM
flat-spring-34799
07/02/2025, 8:52 PM
bland-easter-37603
07/03/2025, 7:19 AM
I0703 07:15:09.113085 17 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
I0703 07:15:09.124398 17 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
I0703 07:15:09.127578 17 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
I0703 07:15:09.127611 17 status_manager.go:230] "Starting to sync pod status with apiserver"
I0703 07:15:09.127634 17 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
I0703 07:15:09.127641 17 kubelet.go:2436] "Starting kubelet main sync loop"
E0703 07:15:09.127698 17 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
time="2025-07-03T07:15:09Z" level=info msg="Applying CRD addons.k3s.cattle.io"
I0703 07:15:09.149273 17 factory.go:223] Registration of the containerd container factory successfully
time="2025-07-03T07:15:09Z" level=info msg="Flannel found PodCIDR assigned for node 7343353f304f"
time="2025-07-03T07:15:09Z" level=info msg="The interface eth0 with ipv4 address 172.24.0.4 will be used by flannel"
I0703 07:15:09.210568 17 kube.go:139] Waiting 10m0s for node controller to sync
I0703 07:15:09.210689 17 kube.go:469] Starting kube subnet manager
I0703 07:15:09.216454 17 cpu_manager.go:221] "Starting CPU manager" policy="none"
I0703 07:15:09.216571 17 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
I0703 07:15:09.216597 17 state_mem.go:36] "Initialized new in-memory state store"
I0703 07:15:09.216762 17 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0703 07:15:09.216967 17 state_mem.go:96] "Updated CPUSet assignments" assignments={}
I0703 07:15:09.216984 17 policy_none.go:49] "None policy: Start"
I0703 07:15:09.216996 17 memory_manager.go:186] "Starting memorymanager" policy="None"
I0703 07:15:09.217005 17 state_mem.go:35] "Initializing new in-memory state store"
I0703 07:15:09.217099 17 state_mem.go:75] "Updated machine memory state"
E0703 07:15:09.219087 17 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
I0703 07:15:09.219438 17 eviction_manager.go:189] "Eviction manager: starting control loop"
I0703 07:15:09.219559 17 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
I0703 07:15:09.222575 17 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
I0703 07:15:09.227004 17 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
I0703 07:15:09.244032 17 server.go:715] "Successfully retrieved node IP(s)" IPs=["172.24.0.2"]
E0703 07:15:09.244440 17 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0703 07:15:09.247644 17 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0703 07:15:09.247856 17 server_linux.go:145] "Using iptables Proxier"
I0703 07:15:09.270383 17 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0703 07:15:09.270940 17 server.go:516] "Version info" version="v1.33.2+k3s1"
I0703 07:15:09.271050 17 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 07:15:09.277080 17 config.go:199] "Starting service config controller"
I0703 07:15:09.277265 17 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
I0703 07:15:09.277368 17 config.go:105] "Starting endpoint slice config controller"
I0703 07:15:09.277453 17 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
I0703 07:15:09.277525 17 config.go:440] "Starting serviceCIDR config controller"
I0703 07:15:09.277597 17 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
I0703 07:15:09.278390 17 config.go:329] "Starting node config controller"
I0703 07:15:09.278525 17 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
E0703 07:15:09.292754 17 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
I0703 07:15:09.318748 17 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b216c0f025b0efd2879ada393fc77bc808594146a1aa3759b37063c0103031c"
I0703 07:15:09.353082 17 kubelet_node_status.go:75] "Attempting to register node" node="7343353f304f"
I0703 07:15:09.391194 17 shared_informer.go:357] "Caches are synced" controller="node config"
I0703 07:15:09.391310 17 shared_informer.go:357] "Caches are synced" controller="service config"
I0703 07:15:09.391377 17 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
I0703 07:15:09.394486 17 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
I0703 07:15:09.427079 17 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3534178e457d4f53f49bd0e60b9b322987b1ac7c3781a198dd067d12fc036a4"
time="2025-07-03T07:15:09Z" level=info msg="Applying CRD etcdsnapshotfiles.k3s.cattle.io"
time="2025-07-03T07:15:09Z" level=fatal msg="Failed to start networking: unable to initialize network policy controller: error getting node subnet: failed to find interface with specified node ip"
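The fatal error above means k3s's network policy controller could not find a local interface carrying the node IP it was given. Notably, the log shows kube-proxy retrieving node IP 172.24.0.2 while flannel chose eth0 at 172.24.0.4, which is exactly the kind of mismatch that produces this failure. A hedged fix, assuming the node really should use eth0/172.24.0.4, is to pin both values explicitly; this is a sketch of the k3s config, and the values must match the node's actual addressing:

```yaml
# /etc/rancher/k3s/config.yaml — sketch; adjust to the node's real interface/IP
node-ip: 172.24.0.4      # the address flannel reported on eth0 in the log
flannel-iface: eth0      # force flannel onto the matching interface
```

After changing the config, the k3s service must be restarted for the new interface selection to take effect.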
most-memory-45748
07/03/2025, 9:54 AM
/usr/local/bin/kubectl get pods -n cattle-monitoring-system
##[error]E0703 11:02:37.026548 3323374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: {\"Code\":{\"Code\":\"Forbidden\",\"Status\":403},\"Message\":\"clusters.management.cattle.io \\\"c-m-7m9crrsz\\\" is forbidden: User \\\"system:unauthenticated\\\" cannot get resource \\\"clusters\\\" in API group \\\"management.cattle.io\\\" at the cluster scope\",\"Cause\":null,\"FieldName\":\"\"}"
steep-petabyte-14152
07/03/2025, 4:45 PM
bright-address-70198
07/04/2025, 10:18 AM
wide-alligator-26985
07/08/2025, 12:43 PM
billions-kilobyte-26686
07/08/2025, 3:15 PM
billions-kilobyte-26686
07/08/2025, 3:32 PM
refined-application-74576
07/08/2025, 6:22 PM
acoustic-alligator-67807
07/09/2025, 3:13 PM
acoustic-alligator-67807
07/09/2025, 3:15 PM
billions-kilobyte-26686
07/09/2025, 3:44 PM
abundant-hair-58573
07/09/2025, 8:30 PM
Waiting for control-plane node <ip> startup: nodes <ip> not found
with the IP being that of the control-plane node I'm on. In Rancher I see errors under "Recent Events" for the rke2-canal pod running on that node. It's a lot to transcribe from an air-gapped network, so I'm summarizing:
MountVolume.SetUp failed for volume "flannel-cfg" : failed to sync configmap cache: timeout waiting for condition
MountVolume.SetUp failed for volume "kube-api-access-..." : [failed to fetch token: serviceaccounts "canal" is forbidden: User "system:node:<node-hostname>" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node '<node-fqdn>' and this object, failed to sync configmap cache: timed out waiting for the condition]
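The "no relationship found between node '<node-fqdn>' and this object" part of that error typically comes from the node authorizer: a kubelet may only request service-account tokens for pods bound to its own Node object, so the name the kubelet registered under must match exactly what the API server expects. Here the user appears as system:node:<node-hostname> while the pod binding references <node-fqdn>. A hedged consistency check, assuming cluster access and with the node names hypothetical:

```shell
# Sketch (requires a working kubeconfig; names are placeholders).
# Compare what the API server calls the node...
kubectl get nodes -o name
# ...with what the node calls itself (short name vs FQDN):
hostname
hostname -f
# If these disagree, align them, e.g. by pinning the name in the RKE2 config:
#   /etc/rancher/rke2/config.yaml  ->  node-name: <node-fqdn>
# and restarting the rke2 agent/server service on that node.
```

This is only a diagnostic sketch; which name is "correct" depends on how the other control-plane nodes were registered.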
narrow-twilight-31778
07/09/2025, 9:07 PM
narrow-twilight-31778
07/09/2025, 9:12 PM
agreeable-planet-37457
07/10/2025, 9:21 AM
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
taint is being applied to the node, which prevents the master node from reaching a Ready state.
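That taint is set by the kubelet when it starts with a cloud provider configured, and it is normally removed by the matching cloud controller manager once it initializes the node; if no such controller ever runs, the taint never clears. A hedged sketch of checking it and, as a stopgap only, clearing it by hand, with <node> hypothetical:

```shell
# Sketch (requires cluster access; <node> is a placeholder).
# Confirm the taint is still present:
kubectl describe node <node> | grep -A2 Taints
# If no cloud controller manager will ever run, remove the cloud-provider
# setting from the kubelet/RKE2 config instead. Deleting the taint manually
# is only a stopgap, since the kubelet re-adds it on restart:
kubectl taint nodes <node> node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-
```

The durable fix is making the cloud-provider configuration consistent (either deploy the controller or stop declaring a cloud provider), not repeated taint removal.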
How can I resolve this please?
better-garden-71117
07/10/2025, 3:54 PM
The kube-controller-manager and kube-scheduler certificates on my RKE2 server have expired. I attempted to rotate them using rke2 certificate rotate, but the renewal didn't go through as expected. Let me know if anyone has encountered this before or has suggestions to resolve it.
colossal-forest-90809
07/11/2025, 4:57 PM
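On the expired-certificate question above: a common reason rke2 certificate rotate appears not to work is that the rotated certificates are only picked up when the service restarts. A hedged sketch of the usual flow, run on the RKE2 server node, with paths being the RKE2 defaults:

```shell
# Sketch (run as root on the RKE2 server node; paths are RKE2 defaults).
# First see which certificates are actually expired:
for crt in /var/lib/rancher/rke2/server/tls/*.crt; do
    echo "$crt: $(openssl x509 -enddate -noout -in "$crt")"
done
# Rotate with the service stopped, then restart so components reload certs:
systemctl stop rke2-server
rke2 certificate rotate
systemctl start rke2-server
```

If the dates still show as expired after a restart, checking journalctl -u rke2-server for which component rejects its certificate narrows down whether a specific certificate (e.g. for kube-controller-manager or kube-scheduler) failed to rotate.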