bulky-sundown-31714
06/19/2023, 8:27 AM
bored-nest-98612
06/19/2023, 1:58 PM
wonderful-terabyte-28236
06/22/2023, 1:17 PM
Failed Mount
Unable to attach or mount volumes: unmounted volumes=[data-0], unattached volumes=[ready-files kube-api-access-8jhkt data-0 app-name-tmp cluster-ca broker-certs client-ca-cert kafka-metrics-and-logging]: timed out waiting for the condition
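Not from the thread, but a rough first pass at digging into a mount timeout like this, assuming kubectl access to the cluster (pod, namespace, and PV names below are placeholders):
$ kubectl describe pod <pod-name> -n <namespace>        # Events section shows which volume timed out and why
$ kubectl get pvc -n <namespace>                        # confirm the claim backing data-0 is Bound
$ kubectl get volumeattachment | grep <pv-name>         # check whether the CSI attach ever succeeded
$ kubectl -n kube-system get pods | grep csi            # make sure the CSI driver pods are healthy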
limited-eye-27484
06/28/2023, 8:28 PM
bitter-restaurant-32653
07/05/2023, 5:17 AM
calm-twilight-27465
07/10/2023, 5:39 PM
wonderful-terabyte-28236
07/12/2023, 4:14 PM
AttachVolume.Attach failed for volume "pvc-examplename" : timed out waiting for external-attacher of file.csi.azure.com CSI driver to attach volume resource-group-name#rkeinfra43b5b092#rke-infra-dynamic-pvc-examplename#pvc-examplename#namespace-example
We reset the service principal password but it still doesn't seem to be connecting. We opened a case with Azure but aren't getting much help there either, and we are not using AKS. Does anyone have any ideas about this issue, or has anyone dealt with anything like this in the past? Thanks in advance.
wonderful-terabyte-28236
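Not a definitive answer, but on a non-AKS cluster the azurefile CSI driver and the Azure cloud provider usually read the service principal from a cloud config file on the nodes (commonly /etc/kubernetes/azure.json) or from a secret, so a rotated password has to be propagated there and the driver pods restarted. A rough check, assuming the upstream csi-azurefile-driver manifests (label and container names may differ depending on how the driver was installed):
$ kubectl -n kube-system get pods -l app=csi-azurefile-controller
$ kubectl -n kube-system logs deploy/csi-azurefile-controller -c csi-attacher --tail=100   # attach errors from the external-attacher
$ kubectl -n kube-system logs deploy/csi-azurefile-controller -c azurefile --tail=100      # auth errors against the Azure API
# on a node: aadClientId / aadClientSecret here must match the rotated service principal
$ sudo cat /etc/kubernetes/azure.json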
07/12/2023, 4:27 PM
dry-businessperson-9236
07/13/2023, 8:09 PM
echoing-address-20868
07/14/2023, 4:41 PM
aloof-pencil-74759
07/27/2023, 10:17 AM
faint-shampoo-17603
07/28/2023, 5:54 AM
late-lunch-73194
07/30/2023, 2:23 AM
bored-nest-98612
08/01/2023, 12:34 PM
bored-nest-98612
08/01/2023, 12:50 PM
hundreds-evening-84071
08/03/2023, 3:23 PM
wonderful-terabyte-28236
08/04/2023, 7:44 PM
wonderful-terabyte-28236
08/07/2023, 2:34 PM
most-kite-870
08/12/2023, 1:23 AM
Validating webhook is configured in such a way that it may be problematic during upgrades.
Resources
validating webhook configuration: rancher.cattle.io
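If it helps, that warning is usually explained by the webhook's failurePolicy and timeoutSeconds: a webhook set to failurePolicy: Fail blocks the resources it matches whenever the rancher-webhook pod is briefly unreachable, which is what tends to bite during upgrades. A rough way to inspect it (configuration name taken from the warning above):
$ kubectl get validatingwebhookconfiguration rancher.cattle.io -o yaml
$ kubectl get validatingwebhookconfiguration rancher.cattle.io \
    -o jsonpath='{range .webhooks[*]}{.name}{"\t"}{.failurePolicy}{"\t"}{.timeoutSeconds}{"\n"}{end}'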
most-kite-870
08/12/2023, 2:01 AM
wonderful-terabyte-28236
08/15/2023, 7:35 PM
kube-controller-manager is set to that node, which is currently deleted. Is there any way to change the kube-controller-manager leader or fix the etcd issue so we can redeploy this node? Basically our cluster is stuck waiting to provision a node that doesn't exist, and when we try to create it, it fails.
$ kubectl describe endpoints kube-controller-manager -n kube-system
Name: kube-controller-manager
Namespace: kube-system
Labels: <none>
Annotations: control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"control-1","leaseDurationSeconds":15,"acquireTime":"2022-01-11T16:00:56Z...
Subsets:
Events: <none>
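Not from the thread, but when the recorded leader is a node that no longer exists, one common workaround is to delete the stale leader-election object so a surviving kube-controller-manager instance can take the lock; newer clusters record it as a Lease rather than (or in addition to) the Endpoints annotation. A sketch, only worth doing after confirming the holder node really is gone:
$ kubectl -n kube-system get lease kube-controller-manager -o yaml     # check holderIdentity / renewTime here too
$ kubectl -n kube-system delete endpoints kube-controller-manager      # a live instance recreates it when it acquires the lock
$ kubectl -n kube-system delete lease kube-controller-manager          # same idea for the Lease-based lock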
wonderful-terabyte-28236
08/16/2023, 8:33 PM
hallowed-carpet-38238
08/17/2023, 1:25 PM
hallowed-carpet-38238
08/17/2023, 1:25 PM
gorgeous-pizza-36569
08/17/2023, 4:02 PM
wonderful-terabyte-28236
08/17/2023, 4:04 PM
INFO: Arguments: --server https://rancher.nameofplatform.com --token REDACTED -r -n m-ccnpb
INFO: Environment: CATTLE_ADDRESS=10.217.43.18 CATTLE_AGENT_CONNECT=true CATTLE_INTERNAL_ADDRESS= CATTLE_NODE_NAME=m-ccnpb CATTLE_SERVER=https://rancher.nameofplatform.com CATTLE_TOKEN=REDACTED
INFO: Using resolv.conf: nameserver 127.0.0.53 options edns0 trust-ad search 3xjoa1i2lshejpt0ninj35opia.bx.internal.cloudapp.net
WARN: Loopback address found in /etc/resolv.conf, please refer to the documentation how to configure your cluster to resolve DNS properly
INFO: https://rancher.nameofplatform.com/ping is accessible
INFO: rancher.nameofplatform.com resolves to 10.217.42.5
time="2023-08-17T15:57:24Z" level=info msg="Listening on /tmp/log.sock"
time="2023-08-17T15:57:24Z" level=info msg="Rancher agent version v2.6.13 is starting"
time="2023-08-17T15:57:24Z" level=info msg="Option etcd=false"
time="2023-08-17T15:57:24Z" level=info msg="Option controlPlane=false"
time="2023-08-17T15:57:24Z" level=info msg="Option worker=false"
time="2023-08-17T15:57:24Z" level=info msg="Option requestedHostname=m-ccnpb"
time="2023-08-17T15:57:24Z" level=info msg="Option dockerInfo={FUN4:2SZJ:AXCV:IW6F:JUDO:MHCD:WIJT:QGAB:PAE4:2VWN:GZCI:SGDX 1 1 0 0 1 overlay2 [[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] [] {[local] [bridge host ipvlan macvlan null overlay] [] [awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} true true false false true true true true true true true true false 32 false 40 2023-08-17T15:57:24.266091701Z json-file systemd 2 0 5.15.0-1041-azure Ubuntu 22.04.3 LTS 22.04 linux x86_64 <https://index.docker.io/v1/> 0xc001296150 2 8324927488 [] /var/lib/docker rke-infra-control-4 [provider=azure] false 20.10.24 map[io.containerd.runc.v2:{runc [] <nil>} io.containerd.runtime.v1.linux:{runc [] <nil>} runc:{runc [] <nil>}] runc { inactive false [] 0 0 <nil> []} false docker-init {8165feabfdfe38c65b599c4993d227328c231fca 8165feabfdfe38c65b599c4993d227328c231fca} {v1.1.8-0-g82f18fe v1.1.8-0-g82f18fe} {de40ad0 de40ad0} [name=apparmor name=seccomp,profile=default name=cgroupns] [] []}"
time="2023-08-17T15:57:24Z" level=info msg="Option customConfig=map[address:10.217.43.18 internalAddress: label:map[] roles:[] taints:[]]"
time="2023-08-17T15:57:24Z" level=info msg="Connecting to <wss://rancher.nameofplatform.com/v3/connect> with token starting with 945w6gmwldmfpsadasda42knk6r"
time="2023-08-17T15:57:24Z" level=info msg="Connecting to proxy" url="<wss://rancher.nameofplatform.com/v3/connect>"
time="2023-08-17T15:57:24Z" level=info msg="Requesting kubelet certificate regeneration"
time="2023-08-17T15:57:24Z" level=info msg="Starting plan monitor, checking every 15 seconds"
time="2023-08-17T15:57:39Z" level=info msg="Requesting kubelet certificate regeneration"
time="2023-08-17T15:57:39Z" level=info msg="Plan monitor checking 120 seconds"
billowy-egg-51769
08/21/2023, 5:40 PM
ssh_key_path: /home/...
cluster_name: kubernetes
#kubernetes_version: 'v1.21.14-rancher1-1'
kubernetes_version: 'v1.24.16-rancher1-1'
enable_cri_dockerd: true
authorization:
  mode: rbac
#network:
#  plugin: calico
ingress:
  provider: none
services:
  kube-api:
    service_cluster_ip_range: 172.16.128.0/17
  kube_controller:
    cluster_cidr: 172.16.0.0/17
    service_cluster_ip_range: 172.16.128.0/17
  kubelet:
    cluster_dns_server: 172.16.128.10
nodes:
  - address: 10.75.144.169
    user: rke
    role:
      - etcd
      - controlplane
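Assuming this is an RKE1 cluster.yml, the jump to v1.24 with enable_cri_dockerd: true is applied with rke up; checking that the installed rke binary actually supports the target version first avoids a failed run:
$ rke config --list-version --all    # Kubernetes versions this rke binary can deploy
$ rke up --config cluster.yml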
wide-easter-7639
08/22/2023, 9:44 AM
wide-easter-7639
08/22/2023, 9:46 AM
rich-shoe-36510
08/24/2023, 9:49 AM