general
  • b

    bright-businessperson-46654

    05/10/2022, 6:01 AM
    Hi everyone, I have run into a couple of problems on my k3s cluster (3 master nodes + 5 worker nodes). One problem is with name resolution in the container runtime. I have an internal DNS server and an internal image registry, and all cluster nodes' DNS configs point to the internal DNS server. On a node I can resolve the private registry's DNS name with nslookup and everything is fine, BUT when I deploy a pod with an image from the private registry I get this error: "Failed to pull image "reg.mydomain.com/tools/cicd-helper": rpc error: code = Unknown desc = failed to pull and unpack image "reg.mydomain.com/tools/cicd-helper:latest": failed to resolve reference "reg.mydomain.com/tools/cicd-helper:latest": failed to do request: Head "https://reg.mydomain.com/v2/tools/cicd-helper/manifests/latest": dial tcp: lookup reg.mydomain.com: no such host". I get the same error on the worker node with this command: sudo crictl pull --creds user:pass1 reg.mydomain.com/tools/cicd-helper:latest. Can anyone help me with this issue?
    c
    r
    • 3
    • 18
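    A hedged diagnostic sketch for the question above: containerd (which handles the crictl pull) resolves names with Go's resolver, which reads /etc/resolv.conf and /etc/nsswitch.conf directly, while nslookup queries a DNS server itself, so the two can disagree. These commands only inspect state; reg.mydomain.com is the asker's registry name.
    cat /etc/resolv.conf            # must list the internal DNS server, not only a broken loopback stub
    cat /etc/nsswitch.conf          # the "hosts:" line should include "dns"
    getent hosts reg.mydomain.com   # follows the same NSS path most non-nslookup lookups use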
  • a

    ancient-florist-59155

    05/10/2022, 11:54 AM
    Is anyone around who can give me a hint on how to add a menu section in the Rancher UI that renders a link the way the Longhorn addon does?
    s
    a
    • 3
    • 7
  • a

    ancient-energy-15842

    07/06/2022, 7:32 PM
    Hi, I'm running Rancher 2.5.15, upgraded from 2.5.9 a few weeks ago (originally installed around June of last year). Yesterday my Rancher container restarted automatically, which I noticed when the Rancher web UI became inaccessible. After starting the container again everything seems to work fine, but my local cluster is stuck at a
    Configuring
    stage, and when I look at the Docker container logs I see a lot of lines like this:
    [ERROR] failed to call leader func: failed to add management data: problem reconciling role templates: couldn't update project-member: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": x509: certificate has expired or is not yet valid: current time 2022-07-06T19:29:10Z is after 2022-07-05T20:45:47Z
    I tried what is recommended at https://rancher.com/docs/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/ but got:
    kubectl delete secret -n cattle-system cattle-webhook-tls
    Error from server (NotFound): secrets "cattle-webhook-tls" not found
    kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io
    This one went fine.
    kubectl delete pod -n cattle-system -l app=rancher-webhook
    No resources found
    After that, I restarted the container and the error still shows up.
    c
    • 2
    • 3
  • c

    clean-pager-20838

    07/06/2022, 11:27 PM
    Is there a way for docker create/start to fail if the network ports it is exposing are already in use? Maybe more of a Docker question (though I am using Rancher with moby as my Docker context).
    a
    • 2
    • 1
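    For the port question above: docker create does not bind host ports, so the failure surfaces at docker start (with a "port is already allocated" error) rather than at create time. A hedged sketch of a pre-check, assuming port 8080 and Linux ss:
    ss -ltn "sport = :8080" | grep -q LISTEN && { echo "port 8080 busy"; exit 1; }
    docker create -p 8080:80 --name web nginx
    docker start web    # this is the step that actually binds the port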
  • s

    some-lawyer-90399

    07/06/2022, 11:59 PM
    Hello Ranchers, is there any way to grant users access to specific Rancher projects externally, using a script?
    t
    • 2
    • 2
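    Project membership can be granted through the Rancher v3 API by creating a projectroletemplatebinding; a hedged sketch, where RANCHER_URL, TOKEN, the project ID, and the user principal are placeholders:
    curl -s -X POST "https://${RANCHER_URL}/v3/projectroletemplatebindings" \
      -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
            "projectId": "c-xxxxx:p-yyyyy",
            "userPrincipalId": "local://u-zzzzz",
            "roleTemplateId": "project-member"
          }'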
  • k

    kind-air-74358

    07/07/2022, 8:48 AM
    Hi, I am deploying Rancher on a Kubernetes cluster following this guide. To configure SSL, I have to install cert-manager. Is it possible to install cert-manager after installing Rancher, but still use Let’s Encrypt for provisioning a valid certificate? A little background: we want to use Terraform to deploy Rancher, cert-manager, and monitoring. If cert-manager has to be installed before Rancher, I will use the Helm chart as described in the guide, then install Rancher and add the Rancher app ‘Monitoring’ (conditionally). Next I want to update cert-manager to deploy a ServiceMonitor (which can be done by upgrading the existing Helm release). I can’t configure that in cert-manager at first because the CRD for ServiceMonitor is not yet available. So my idea was actually to deploy Rancher first, then Monitoring (via Apps), and cert-manager last (via Apps instead of directly via Helm).
    a
    • 2
    • 12
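    A hedged sketch of the standard cert-manager Helm install referenced above, plus a later in-place upgrade that turns on its ServiceMonitor once the monitoring CRDs exist; value names follow the jetstack chart, so check helm show values for the deployed version:
    helm repo add jetstack https://charts.jetstack.io && helm repo update
    helm upgrade --install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace \
      --set installCRDs=true
    # later, after the monitoring CRDs are present:
    helm upgrade cert-manager jetstack/cert-manager \
      --namespace cert-manager --reuse-values \
      --set prometheus.servicemonitor.enabled=true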
  • s

    salmon-carpenter-62625

    07/07/2022, 8:53 AM
    Hi, is it possible to restore the cattle-* namespaces with the Rancher components after removing them by accident? Right now I see
    Cluster agent is not connected
    on the Rancher side.
    a
    • 2
    • 4
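    One hedged recovery sketch for the accidentally deleted agent namespace: re-apply the cluster registration manifest from the Rancher server, which recreates the cattle-system agent resources. The token URL below is a placeholder; the real command is shown under Cluster Management > (cluster) > Registration.
    curl -sfL "https://${RANCHER_URL}/v3/import/<token>_<cluster-id>.yaml" | kubectl apply -f -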
  • p

    powerful-evening-21148

    07/07/2022, 10:16 AM
    Hi all. We have Rancher 2.6.4. My problem is that when I add a project member (a local or ADFS user), I can save it, but the next time I check the project members the entry is gone. Does anybody know how to fix this?
    • 1
    • 2
  • g

    gifted-breakfast-73755

    07/07/2022, 2:53 PM
    Hi, is there a way to upload a custom node driver to Rancher without needing to place it publicly on the internet, which the UI seems to require?
    f
    • 2
    • 2
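    For the node driver question: the download URL only has to be reachable from the Rancher server, not from the public internet, so a small web server on the LAN (or in-cluster) is one hedged workaround; the directory, port, and binary name below are placeholders:
    python3 -m http.server 8000 --directory ./drivers
    # then register http://<lan-host>:8000/docker-machine-driver-example in the UI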
  • a

    acceptable-lifeguard-35365

    07/07/2022, 11:29 PM
    Upgraded Rancher to version 1.4.1; now Docker is not working.
    w
    • 2
    • 1
  • m

    melodic-hamburger-23329

    07/08/2022, 5:54 AM
    This is not exactly the channel for Traefik (it seems they don’t have Slack, Gitter, etc.?), but would anyone happen to know best practices regarding Traefik ingresses? My traffic from outside mostly comes in as HTTP/2, but the services seem to receive it as HTTP/1.1. Is there any performance impact from this, or would it be better to have the Traefik<->service communication also be HTTP/2 (or h2c)?
    w
    • 2
    • 2
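    In Traefik v2's Kubernetes ingress provider the backend protocol is chosen per Service, so h2c can be requested with an annotation; a hedged sketch, assuming my-service is the backend Service:
    kubectl annotate service my-service \
      traefik.ingress.kubernetes.io/service.serversscheme=h2c
    Whether HTTP/1.1 between Traefik and the pod actually costs anything depends on the workload; for most request/response services the difference is small.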
  • t

    thankful-parrot-99187

    07/08/2022, 1:23 PM
    Hello, I initially installed Rancher via Helm with the "rancher" option for the certificates, hoping to change to Let's Encrypt once our security team put the requisite firewall rules in place, which they have now done. Unfortunately it seems it's not as simple as changing the option to "ingress.tls.source=letsEncrypt" in my helm command; that has left me in a state where the cattle-cluster-agent is in a CrashLoopBackOff because it no longer trusts the self-signed cert, which was not replaced by a Let's Encrypt one. I found instructions in the docs for doing the opposite, and I assume the process would be similar, but I can't seem to find how to fix it now. I hope I am not the only one who tried this and someone can point me in the right direction?
    • 1
    • 3
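    A hedged sketch of the switch via helm upgrade (hostname and email are placeholders); deleting the old self-signed ingress secret afterwards so a new chain is issued is an assumption drawn from the certificate-rotation docs, not a verified fix:
    helm upgrade rancher rancher-latest/rancher \
      --namespace cattle-system \
      --set hostname=rancher.example.com \
      --set ingress.tls.source=letsEncrypt \
      --set letsEncrypt.email=admin@example.com
    kubectl -n cattle-system delete secret tls-rancher-ingress   # assumption: forces re-issue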
  • b

    blue-businessperson-77519

    07/08/2022, 4:49 PM
    Good evening! Can anyone help me with the problem below? When I try to see the logs of a pod from the "View Logs" option in the Rancher UI, the terminal that opens has no information. These errors appear when I open the browser's developer tools.
    s
    • 2
    • 3
  • a

    acceptable-van-59252

    07/08/2022, 5:06 PM
    I wanted to give an update on my k3s problem (NOW SOLVED) and request more information on how I can write a better test case that demonstrates it, so I can file a bug. Essentially, my understanding is that when you have two different default routes with the same metric, Linux will choose one of them while k3s (or flannel, or the vxlan configuration) chooses the other; they have different disambiguation rules for this case. This means that sometimes EVERYTHING works outside of k3s, while everything inside k3s that needs to communicate with the outside world breaks. I've written up a description here, but I would like assistance from someone who knows how to create the dummy interfaces and default routes required to demonstrate this problem outside of Terraform. https://devops.stackexchange.com/q/16161/18965
    a
    • 2
    • 14
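    A hedged sketch of reproducing the two-equal-metric-default-routes condition with a dummy interface, for a disposable test VM; the addresses are placeholders and the metric should be set to match the existing default route's:
    sudo ip link add dummy0 type dummy
    sudo ip link set dummy0 up
    sudo ip addr add 192.0.2.10/24 dev dummy0
    sudo ip route add default via 192.0.2.1 dev dummy0 metric 100
    ip route show default   # two defaults with the same metric -> ambiguous selection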
  • v

    victorious-monkey-50550

    07/08/2022, 6:19 PM
    We are using the KlipperLB load balancer and are finding that the svclb services are getting created in the kube-system namespace. In addition, we cannot delete the pods in that namespace: the initial pod is deleted, but then it gets recreated, so it won't die. Initial pods in the kube-system namespace:
    k get pods -n kube-system
    NAME                                                        READY   STATUS    RESTARTS       AGE
    coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
    metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
    local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
    svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
    svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
    svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
    svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
    svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h9m
    svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h9m
    svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h9m
    svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h9m
    svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h27m
    svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h27m
    svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h27m
    svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h27m
    svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              64m
    svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              64m
    svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              64m
    svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              64m
    svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              47m
    svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              47m
    svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              47m
    svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              47m
    svclb-test-f79880dc-gxrcx                                   0/2     Pending   0              4m35s
    svclb-test-f79880dc-l9b9k                                   0/2     Pending   0              4m35s
    svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              4m35s
    svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              4m35s
    svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              4m35s
    svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              4m35s
    svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              4m35s
    svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              4m35s
    After deleting a couple of pods:
    k delete pods -n kube-system svclb-test-f79880dc-gxrcx svclb-test-f79880dc-l9b9k
    pod "svclb-test-f79880dc-gxrcx" deleted
    pod "svclb-test-f79880dc-l9b9k" deleted
    (envgen)$ k get pods -n kube-system
    NAME                                                        READY   STATUS    RESTARTS       AGE
    coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
    metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
    local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
    svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
    svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
    svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
    svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
    svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
    svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
    svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
    svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
    svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
    svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h11m
    svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h11m
    svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h11m
    svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h11m
    svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h29m
    svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h29m
    svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h29m
    svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h29m
    svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              66m
    svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              66m
    svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              66m
    svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              66m
    svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              48m
    svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              48m
    svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              48m
    svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              48m
    svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              6m23s
    svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              6m23s
    svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              6m23s
    svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              6m23s
    svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              6m23s
    svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              6m23s
    svclb-test-f79880dc-bfhcc                                   0/2     Pending   0              3s
    svclb-test-f79880dc-vz52q                                   0/2     Pending   0              3s
    c
    • 2
    • 9
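    The recreation behaviour above is expected: each svclb-* pod is owned by a DaemonSet that the ServiceLB (Klipper) controller maintains per LoadBalancer Service, so deleting pods just triggers the DaemonSet to replace them. A hedged sketch of removing them at the right level (the service name and namespace are placeholders):
    kubectl get daemonset -n kube-system | grep svclb   # the owning objects
    kubectl delete service test -n default              # removing the Service removes its svclb DaemonSet
    The Pending svclb pods are also a hint that the requested host ports may already be taken on every node, which is worth checking separately.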
  • c

    chilly-laptop-5627

    07/08/2022, 8:51 PM
    Does anyone know if there is documentation on how to move a legacy Longhorn install (installed prior to 2.5) to Apps & Marketplace in 2.5+? I have done some googling but it has not been fruitful.
    t
    • 2
    • 2
  • a

    adventurous-mechanic-95332

    07/09/2022, 1:44 AM
    Hey gents, does anyone know where I can get the root CA for my local Rancher domain https://rancher.192.168.20.20.sslip.io/? I want to add that CA to the macOS keychain to avoid the invalid-certificate error reported by Chrome. Thanks. I tried the certs in the local cluster config but none of them work.
    • 1
    • 1
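    A hedged way to capture whatever chain Rancher is actually presenting; openssl prints each certificate in the chain, and the last one can be saved and trusted in Keychain:
    openssl s_client -connect rancher.192.168.20.20.sslip.io:443 -showcerts </dev/null 2>/dev/null \
      | awk '/BEGIN CERT/,/END CERT/' > rancher-chain.pem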
  • m

    melodic-hamburger-23329

    07/09/2022, 3:11 AM
    If I use the stargz snapshotter, can I disable all other snapshotter plugins? Or how does containerd utilize snapshotter plugins, and are there some interdependencies, etc.? (ping @best-accountant-68201)
    • 1
    • 2
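    For context on the stargz question: the stargz snapshotter runs as a containerd proxy plugin and is selected (rather than exclusively enabled) via the CRI config, so other snapshotters can stay registered. A hedged sketch of the relevant fragment of /etc/containerd/config.toml, following the stargz-snapshotter docs:
    [proxy_plugins]
      [proxy_plugins.stargz]
        type = "snapshot"
        address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "stargz"
      disable_snapshot_annotations = false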
  • c

    clean-painting-58815

    07/09/2022, 12:25 PM
    Good morning, I'm in the process of moving from Rancher 2.4.17 to 2.6.6, and it looks like there are some significant changes in how we update a workload from the REST API. Before, we would use this:
    curl -s -k -X PUT -H "Accept: application/json" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://${RANCHER_URL}/v3/project/${CLUSTERID}:${PROJECTID}/workloads/deployment:${NAMESPACE}:${DEPLOYMENT_NAME} -d "@deployment.json"
    Now it seems we need to use something like this:
    curl -s -k -X PUT -H "Accept: application/json" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://${RANCHER_URL}/k8s/clusters/${CLUSTERID}/v1/apps.deployments/${NAMESPACE}/${DEPLOYMENT_NAME} -d "@deployment.json"
    Is that the recommended endpoint for doing a deployment update? Also, I noticed a couple more things I'd like to confirm: #1, that the JSON body now has to start with {"apiVersion":"apps/v1", "kind":"Deployment"}; and #2, that we now have to include the "resourceVersion":"XXXXXXXXXXX" of the most recent deployment as part of the PUT update. I'd like to minimize the number of new things we need to add to the deployment JSON, so if I can avoid the extra step of pulling the most recent resourceVersion from the existing deployment, I'd rather not. Or maybe the expectation is that we delete the workload and POST it brand new? Are there any good articles about the REST API changes between Rancher versions as they relate to resource updates? Thanks!
    r
    h
    • 3
    • 2
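    On the resourceVersion question: the v1 (steve) endpoint behaves like the Kubernetes API, so an update is a read-modify-write where metadata.resourceVersion comes from the current object; a hedged sketch of that cycle (the jq usage is illustrative):
    RV=$(curl -sk -H "Authorization: Bearer ${TOKEN}" \
      "https://${RANCHER_URL}/k8s/clusters/${CLUSTERID}/v1/apps.deployments/${NAMESPACE}/${DEPLOYMENT_NAME}" \
      | jq -r '.metadata.resourceVersion')
    jq --arg rv "$RV" '.metadata.resourceVersion = $rv' deployment.json > deployment.rv.json
    curl -sk -X PUT \
      -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
      "https://${RANCHER_URL}/k8s/clusters/${CLUSTERID}/v1/apps.deployments/${NAMESPACE}/${DEPLOYMENT_NAME}" \
      -d @deployment.rv.json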
  • b

    brainy-tomato-18651

    07/09/2022, 10:36 PM
    Hi, can anyone help me fix this? I have my RKE2 cluster deployed and I want to install the grafana/prometheus chart, but when I finish, a pod gets stuck failing. I attach a screenshot of the pod's log.
    • 1
    • 1
  • s

    shy-shampoo-27820

    07/11/2022, 6:53 AM
    The Rancher user interface doesn't seem to be good; I need to zoom out to see the list of roles when selecting one.
    s
    • 2
    • 3
  • m

    modern-dress-80156

    07/11/2022, 8:07 AM
    Hi, our Rancher server 2.6.1 is crashing with the error below:
    Observed a panic: "invalid memory address or nil pointer dereference"
    Any idea what the reason could be? I filed a GitHub issue two weeks ago but unfortunately got no response (https://github.com/rancher/rancher/issues/38073), and there is another user facing a similar issue (https://github.com/rancher/rancher/issues/38142).
    • 1
    • 2
  • f

    freezing-wolf-12088

    07/11/2022, 1:16 PM
    Hi, our Rancher server is 2.6.5, running RKE on Docker. Is there a way to specify kubelet args on a per-node basis, based on the node role? I found https://github.com/rancher/rancher/issues/33904 for RKE2. Do we have something similar for RKE? Thanks.
    t
    • 2
    • 1
  • h

    hundreds-evening-84071

    07/11/2022, 3:25 PM
    Hey folks, I have the Rancher 2.6.5 stable release and am working on adding alerting according to this: https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/configuration/receiver/ As per that document:
    Go to the cluster where you want to create receivers. Click Monitoring -> Alerting -> AlertManagerConfigs.
    Click Create.
    Click Add Receiver.
    But the Add Receiver button is not available (it is greyed out). Did I do something wrong? I can go to Routes and Receivers and add a receiver there, but will that work? If so, how do I test this setup?
    • 1
    • 6
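    If the UI button stays greyed out, the AlertmanagerConfig resource can also be created directly; a hedged sketch using the upstream monitoring.coreos.com/v1alpha1 schema, where the namespace, names, and the slack-webhook secret are placeholders:
    # alertmanagerconfig.yaml
    apiVersion: monitoring.coreos.com/v1alpha1
    kind: AlertmanagerConfig
    metadata:
      name: example
      namespace: default
    spec:
      route:
        receiver: slack
      receivers:
      - name: slack
        slackConfigs:
        - apiURL:
            name: slack-webhook
            key: url
          channel: '#alerts'
    # applied with:
    kubectl apply -f alertmanagerconfig.yaml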
  • f

    future-monitor-61871

    07/11/2022, 7:14 PM
    Trying to install the Rancher UI on a new RKE2 cluster, v1.23.7, and getting an error on helm install: Error: INSTALLATION FAILED: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://rke2-ingress-nginx-controller-admission.kube-system.svc:443/networking/v1/ingresses?timeout=10s": context deadline exceeded. rke2-ingress-nginx is deployed (4.1.003), and from what I can tell the validating webhook looks to be present, but I am not sure whether the error is in that config or in the ingress definition in Rancher. Any suggestions as to what to look at next?
    • 1
    • 1
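    A hedged diagnostic sketch for the webhook timeout above: the API server has to reach the ingress controller's admission Service, so check that the webhook object and its endpoints exist. The object name below follows the upstream ingress-nginx naming and is an assumption, as is the claim that a deleted webhook configuration is recreated when the chart redeploys:
    kubectl get validatingwebhookconfiguration rke2-ingress-nginx-admission
    kubectl -n kube-system get endpoints rke2-ingress-nginx-controller-admission
    kubectl delete validatingwebhookconfiguration rke2-ingress-nginx-admission   # assumption: recreated on redeploy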
  • c

    clever-guitar-78476

    07/12/2022, 3:27 AM
    Can docker-desktop (running k8s on 6443) and rancher-desktop (running on 7443) be installed on the same machine?
    r
    • 2
    • 2
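    Both tools install their own kubeconfig context, so with distinct API ports they can coexist and you switch between them; the context names below are the defaults each product creates:
    kubectl config get-contexts
    kubectl config use-context rancher-desktop   # or: docker-desktop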
  • m

    millions-wire-67573

    07/12/2022, 8:04 AM
    Hi all, can someone help with information on how many RKE2 or RKE clusters I can add to Rancher (running on a host with 16 vCPU and 64 GB RAM)?
    h
    • 2
    • 2
  • b

    better-analyst-33401

    07/12/2022, 8:05 AM
    👋 Hi everyone! I tried to deploy some services in an RKE setup, but the external IP shows as pending:
    backend   LoadBalancer   10.43.4.125   <pending>   8046:31124/TCP
    Can anyone help me make this work?
    m
    • 2
    • 14
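    The EXTERNAL-IP stays <pending> on bare-metal RKE because nothing there implements LoadBalancer Services; one common option (an assumption about the asker's environment, not the only fix) is MetalLB in L2 mode:
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
    # then define an IPAddressPool and L2Advertisement for a free address range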
  • p

    polite-forest-41591

    07/12/2022, 8:10 AM
    Hi Team, I have been using Fleet for CD and created a git repo for it to monitor. I added my k8s resource YAML files and observed that they did get deployed to my k8s cluster. But in the UI, I see 2 of my deployments in an Orphaned state, while on my k8s cluster, when I run kubectl get deployment, the deployments are up and running:
    [opc@wls0701-admin ~]$ k get deploy -n hello-helidon
    NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
    hello-helidon-deployment      1/1     1            1           23h
    hello-helidon-deployment-v2   1/1     1            1           23h
    Because of this, the cluster ready state is 0/1. Why do the deployments show as Orphaned?
    t
    b
    • 3
    • 2
  • r

    rough-ghost-44810

    07/12/2022, 8:30 AM
    Hi Team, I am facing an issue with the k3s installation while launching rancher desktop. PFA screenshot:
    c
    • 2
    • 1
    c

    creamy-pencil-82913

    07/12/2022, 9:11 AM
    -> #rancher-desktop