ancient-mechanic-72639
12/10/2022, 4:03 PM
ancient-mechanic-72639
12/10/2022, 4:04 PM
billions-truck-68762
12/11/2022, 5:42 AM
billions-truck-68762
12/11/2022, 5:42 AM
billions-truck-68762
12/11/2022, 5:44 AM
conditions:
- lastHeartbeatTime: "2022-12-11T05:38:06Z"
  lastTransitionTime: "2022-12-11T05:38:06Z"
  message: Flannel is running on this node
  reason: FlannelIsUp
  status: "False"
  type: NetworkUnavailable
- lastHeartbeatTime: "2022-12-11T05:40:15Z"
  lastTransitionTime: "2022-12-11T04:30:51Z"
  message: kubelet has sufficient memory available
  reason: KubeletHasSufficientMemory
  status: "False"
  type: MemoryPressure
- lastHeartbeatTime: "2022-12-11T05:40:15Z"
  lastTransitionTime: "2022-12-11T04:30:51Z"
  message: kubelet has no disk pressure
  reason: KubeletHasNoDiskPressure
  status: "False"
  type: DiskPressure
- lastHeartbeatTime: "2022-12-11T05:40:15Z"
  lastTransitionTime: "2022-12-11T04:30:51Z"
  message: kubelet has sufficient PID available
  reason: KubeletHasSufficientPID
  status: "False"
  type: PIDPressure
- lastHeartbeatTime: "2022-12-11T05:40:15Z"
  lastTransitionTime: "2022-12-11T04:30:51Z"
  message: 'container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
    message:docker: network plugin is not ready: cni config uninitialized'
  reason: KubeletNotReady
  status: "False"
  type: Ready
billions-truck-68762
12/11/2022, 5:45 AM
billions-truck-68762
12/11/2022, 5:45 AM
[root@VM-8-2-centos ~]# kubectl -n kube-flannel logs kube-flannel-ds-hrlp9
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I1211 05:38:04.531603 1 main.go:204] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W1211 05:38:04.531676 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1211 05:38:04.540479 1 kube.go:126] Waiting 10m0s for node controller to sync
I1211 05:38:04.631479 1 kube.go:431] Starting kube subnet manager
I1211 05:38:04.633428 1 kube.go:452] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.244.0.0/24]
I1211 05:38:05.631564 1 kube.go:133] Node controller sync successful
I1211 05:38:05.631583 1 main.go:224] Created subnet manager: Kubernetes Subnet Manager - vm-8-2-centos
I1211 05:38:05.631587 1 main.go:227] Installing signal handlers
I1211 05:38:05.631659 1 main.go:467] Found network config - Backend type: vxlan
I1211 05:38:05.631696 1 match.go:206] Determining IP address of default interface
I1211 05:38:05.632363 1 match.go:259] Using interface with name eth0 and address 10.0.8.2
I1211 05:38:05.632394 1 match.go:281] Defaulting external address to interface address (10.0.8.2)
I1211 05:38:05.632466 1 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I1211 05:38:05.639390 1 main.go:416] Current network or subnet (10.244.0.0/16, 10.244.0.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules
I1211 05:38:05.640425 1 kube.go:452] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.244.0.0/24]
I1211 05:38:05.935150 1 main.go:342] Setting up masking rules
I1211 05:38:05.937659 1 main.go:364] Changing default FORWARD chain policy to ACCEPT
I1211 05:38:06.031834 1 main.go:379] Wrote subnet file to /run/flannel/subnet.env
I1211 05:38:06.031856 1 main.go:383] Running backend.
I1211 05:38:06.032031 1 vxlan_network.go:61] watching for new subnet leases
I1211 05:38:06.132099 1 main.go:404] Waiting for all goroutines to exit
I1211 05:38:06.333454 1 iptables.go:270] bootstrap done
I1211 05:38:06.335965 1 iptables.go:270] bootstrap done
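A hedged way to narrow this down (flannel reports it is up, yet the kubelet above still says "cni config uninitialized"), assuming the stock kube-flannel manifests and their conventional default paths, which are not confirmed by the logs:

ls /etc/cni/net.d/           # the install-cni init container normally writes 10-flannel.conflist (or similar) here
cat /run/flannel/subnet.env  # flannel's log above says it wrote this subnet file
systemctl restart kubelet    # have kubelet re-check CNI readiness once the config file is present
kubectl get nodes            # the Ready condition should turn True once the CNI config is detected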
straight-actor-37028
12/11/2022, 7:55 AM
{
"node-label": [
"cattle.io/os=linux",
"rke.cattle.io/machine=35570e9e-4d18-4f14-b6f2-5e3dee9dec51"
],
"private-registry": "/etc/rancher/rke2/registries.yaml",
"profile": "cis-1.6",
"protect-kernel-defaults": true,
"server": "https://10.200.9.4:9345",
"token": ""cuddly-breakfast-76667
12/11/2022, 8:36 AMlively-gpu-91507
12/11/2022, 12:05 PMbreezy-ram-80329
12/12/2022, 7:49 AMbreezy-ram-80329
12/12/2022, 8:27 AMmagnificent-balloon-44813
12/12/2022, 8:28 AMastonishing-rain-93930
12/12/2022, 9:26 AM
[FATAL] clusters.management.cattle.io is forbidden: User "system:serviceaccount:cattle-system:rancher" cannot list resource "clusters" in API group "management.cattle.io" at the cluster scope
Any idea what this might be about?
Thanks!
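One hedged way to dig into an error like this, using only the names that appear in the message itself (the cattle-system:rancher service account and the clusters.management.cattle.io resource):

# Ask the API server whether the rancher service account may list the resource:
kubectl auth can-i list clusters.management.cattle.io \
  --as=system:serviceaccount:cattle-system:rancher
# Then check which ClusterRoleBinding, if any, grants that service account its role:
kubectl get clusterrolebindings -o wide | grep -i cattle-system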
swift-wall-53633
12/12/2022, 12:51 PM
Expose Rancher Desktop's Kubernetes configuration and Docker socket to Windows Subsystem for Linux (WSL) distros
But I have to run as root, or else I get:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
Because
$ ls -l /var/run/docker.sock
srwxr-xr-x 1 root root 0 Dec 12 13:46 /var/run/docker.sock
I'd like to run with some group membership, so I can run the docker CLI as a normal user.
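A minimal sketch of the usual "docker group" approach, assuming a plain dockerd-managed socket; with Rancher Desktop's WSL integration the socket may be recreated on restart, so this might need to be reapplied or handled on the Rancher Desktop side instead:

sudo groupadd -f docker                 # create the group if it does not already exist
sudo usermod -aG docker "$USER"         # add the current user to it (re-login to pick it up)
sudo chgrp docker /var/run/docker.sock  # give group ownership of the socket to that group
sudo chmod g+rw /var/run/docker.sock    # srwxr-xr-x -> srwxrwxr-x
docker ps                               # should now work without sudo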
square-alarm-76296
12/12/2022, 12:51 PM
dazzling-jewelry-63449
12/12/2022, 1:29 PM
damp-magician-5939
12/12/2022, 1:45 PM
aloof-glass-99040
12/12/2022, 4:02 PM
Ignore, finish the upgrade. At the end of it, the webhook configuration is reset back to the original failurePolicy of Fail. We usually advise users to exclude the managed components in the kube-system ns from the webhook configuration, but I suspect that they probably use Helm or something provided by Rancher and just apply it. I was wondering if there are any mitigation strategies you would recommend? Thank you! 🙂
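For the "exclude kube-system" advice mentioned above, a hedged sketch of what that exclusion can look like as a JSON patch; YOUR_WEBHOOK_CONFIG and the webhook index 0 are placeholders, and the kubernetes.io/metadata.name label assumes Kubernetes 1.21+ where it is set automatically:

kubectl patch validatingwebhookconfiguration YOUR_WEBHOOK_CONFIG --type=json -p='[
  {"op": "add", "path": "/webhooks/0/namespaceSelector", "value": {
    "matchExpressions": [
      {"key": "kubernetes.io/metadata.name", "operator": "NotIn", "values": ["kube-system"]}
    ]
  }}
]'
# Note: if the configuration is managed by a Helm chart, an upgrade can overwrite this patch,
# so the selector would ideally be set in the chart values rather than patched afterwards.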
blue-jewelry-65830
12/12/2022, 7:27 PM
blue-jewelry-65830
12/12/2022, 7:28 PM
creamy-accountant-88363
12/12/2022, 7:46 PM
tls-rancher-internal-ca secret and the /cacerts endpoint? I have noticed when using Rancher with a private CA, this secret contains a "dynamiclistener" Rancher-generated certificate instead of the user-provided TLS CA + key. This can cause an issue if you're using RKE2, since the RKE2 provisioning jobs will check the /cacerts endpoint, get the invalid CA, and fail. As a workaround, manually updating the tls-rancher-internal-ca secret will fix this issue if you're using a private CA.
Any thoughts? Thanks. FYI I am using Rancher 2.6.8 with the Rancher Helm chart.
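A hedged sketch of the workaround described above; the Rancher hostname is a placeholder, and the assumption that tls-rancher-internal-ca uses the standard tls.crt/tls.key keys is mine, not something stated in the thread:

# Compare what Rancher serves at /cacerts with the private CA you expect:
curl -sk https://rancher.example.com/cacerts
# If it is the auto-generated dynamiclistener CA, overwrite the secret with the real CA material:
kubectl -n cattle-system create secret generic tls-rancher-internal-ca \
  --from-file=tls.crt=./my-private-ca.pem \
  --from-file=tls.key=./my-private-ca-key.pem \
  --dry-run=client -o yaml | kubectl apply -f -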
white-yacht-56857
12/12/2022, 7:51 PM
quiet-house-51252
12/12/2022, 8:02 PM
stocky-vr-48505
12/12/2022, 9:22 PM
most-crowd-3167
12/12/2022, 9:50 PM
early-solstice-46134
12/13/2022, 7:40 AM
201 response code, but never actually starts rolling out the monitoring stack to the managed cluster.
steep-window-46329
12/13/2022, 10:39 AM
kube-reserved, system-reserved) set per default on Kubernetes and Rancher. So there are evictions happening, and in most cases it is just the systemd OOM killer running wild.
We've set custom values for eviction-hard and system-reserved, and this seems to improve the stability of the worker nodes; in the case of low memory, the correct pods get killed instead of random system processes.
Is there a general recommendation from Rancher for how much memory and CPU should be reserved on downstream clusters, based on the size of the worker node? It would help a lot to get some general guidelines. Thanks!
Reserve Compute Resources for System Daemons
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
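For reference, a hedged sketch of how such reservations can be passed to the kubelet on an RKE2 node via /etc/rancher/rke2/config.yaml; the kubelet flags are the standard upstream ones, but the sizes are purely illustrative assumptions, not a Rancher recommendation:

# run as root; appending assumes kubelet-arg is not already set in the file
cat <<'EOF' >> /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "kube-reserved=cpu=250m,memory=512Mi"
  - "system-reserved=cpu=250m,memory=512Mi"
  - "eviction-hard=memory.available<500Mi,nodefs.available<10%"
EOF
systemctl restart rke2-agent   # use rke2-server instead on server nodes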
boundless-country-59971
12/13/2022, 1:56 PM
crooked-cat-21365
12/13/2022, 4:03 PM
# ~/rke.v1.3.15 etcd snapshot-restore --config ~/kube002.config.yaml --name c-blwbg-rl-mxhdw_2022-12-13T05:18:35Z
INFO[0000] Running RKE version: v1.3.15
INFO[0000] Checking if state file is included in snapshot file for [c-blwbg-rl-mxhdw_2022-12-13T05:18:35Z]
FATA[0000] Cluster must have at least one etcd plane host: please specify one or more etcd in cluster config
The problem is, where do I get a working cluster.yaml from? I had expected that all necessary information would be stored in the snapshot (next to rke.state and the etcd database). The cluster.yaml I saved via the GUI appears to be incomplete.
Every helpful comment is highly appreciated.
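On the "at least one etcd plane host" error, a hedged sketch of the minimal cluster.yml RKE needs before it will accept the restore; the address, SSH user, and key path are placeholders that would have to match one of the original nodes, and for a Rancher-provisioned cluster all of the original etcd nodes would ideally be listed:

cat <<'EOF' > cluster.yml
nodes:
  - address: 10.0.0.11            # placeholder: an original etcd/controlplane node
    user: rancher                 # placeholder SSH user
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa   # placeholder key path
EOF
~/rke.v1.3.15 etcd snapshot-restore --config cluster.yml --name c-blwbg-rl-mxhdw_2022-12-13T05:18:35Z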