# general
  • s

    swift-farmer-84003

    10/28/2025, 10:35 AM
    The Rancher UI sometimes hangs on "loading" forever when listing pods or other resources (kubectl get pods works). I had this before and it seemed gone after deleting 900 Evicted pods. Now it's back, and after some digging it seems that the cattle-cluster-agent has an issue. It's emitting this every 30 seconds: time="2025-10-28T10:33:03Z" level=error msg="Error in Store.Add for type _v1_Pod: transaction: begin tx: database is locked (5) (SQLITE_BUSY)" After restarting the pods it works again, but I'd like to know why. Does cattle-cluster-agent use SQLite internally?
    c
    • 2
    • 4
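On the SQLITE_BUSY question above: yes, newer Rancher agents keep an on-disk SQLite cache of cluster resources (the VAI feature), and "database is locked (5) (SQLITE_BUSY)" is SQLite's standard complaint when one connection tries to write while another holds the write lock. A minimal Python sketch of the same condition (illustrative only; the file name and schema are made up):

```python
import os
import sqlite3
import tempfile

# A shared database file, mimicking an on-disk cache.
path = os.path.join(tempfile.mkdtemp(), "cache.db")

# isolation_level=None puts the connection in autocommit mode so we
# can manage transactions explicitly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE pods (name TEXT)")

# Connection 1 takes the write lock and holds it.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO pods VALUES ('pod-a')")

# Connection 2, with a zero busy-timeout, fails immediately with
# "database is locked" -- the condition SQLITE_BUSY reports.
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as e:
    print(e)  # database is locked

writer.execute("COMMIT")  # releasing the lock lets other writers proceed
```

A stuck or long-running transaction in the agent (or a flood of objects to cache, like 900 Evicted pods) could plausibly hold the lock long enough for other writers to give up, which may be why the symptom cleared after cleanup and after restarts.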
  • r

    rich-thailand-55018

    10/28/2025, 5:26 PM
    Hello, after rotating the credentials of my GKE cluster, Rancher no longer sees the imported cluster as healthy; it shows "Cluster agent is not connected". I cannot see anything helpful in the logs, official docs, or GitHub issues. Is there a way to force a refresh of the cluster import now that the GKE cluster has new credentials? I have tried editing the config and saving to force a reconciliation, without success. Thanks 🙏
    b
    • 2
    • 37
  • a

    ancient-dinner-76338

    10/30/2025, 4:15 AM
    Hello Rancher team, I have a Rancher cluster that is deployed on top of a K3s Kubernetes cluster. Currently, disk usage on the worker VM/instance has reached 80%. Do you have any recommendations for best cleanup practices to reduce disk usage, particularly for cleaning up logs on a VM/instance running K3s? I am using Ubuntu.
    s
    h
    • 3
    • 5
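On disk cleanup: the usual large consumers on a K3s node are the systemd journal, container stdout/stderr logs, and unused images. Before deleting anything it helps to measure what is actually using the space; a small Python sketch (the listed paths are typical defaults, an assumption to verify on your Ubuntu host):

```python
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

# Typical K3s/Ubuntu locations worth checking (assumed defaults):
candidates = [
    "/var/log/journal",      # systemd journal
    "/var/log/pods",         # container stdout/stderr logs
    "/var/lib/rancher/k3s",  # images, snapshots, local storage
]
for p in candidates:
    if os.path.isdir(p):
        print(f"{p}: {dir_size(p) / 1e9:.2f} GB")
```

Once you know where the space is going, the journal can usually be trimmed with journalctl --vacuum-size=500M and unused images with k3s crictl rmi --prune (verify both flags against your installed versions before running them).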
  • e

    elegant-truck-75829

    10/30/2025, 7:31 AM
    Hi, as a user I'm not able to exec into pods from the Rancher GUI; it shows as disconnected. Any help is greatly appreciated.
    l
    a
    h
    • 4
    • 6
  • b

    better-rain-46397

    10/30/2025, 10:03 PM
    Hello all, We're seeing security issues with
    glibc
    used in SUSE Docker images such as
    docker.io/rancher/fleet-agent
    . Some of the security issues are CVE-2025-4802, 2025:01702-1, and SUSE-SU-2025:0582-1. Does anyone know if there will be updates to the usage of these images within Rancher Fleet?
    c
    • 2
    • 2
  • s

    square-gold-26983

    10/31/2025, 12:22 AM
    Hi everyone. I am using Windows 11 Enterprise Edition and installed Rancher Desktop 1.20.0. I have a container that tries to call "https://repo.maven.apache.org/maven2/". The URL exists, but when I try to do a docker build it calls the above URL and gives me the following error: CWWKF1390E: The "https://repo.maven.apache.org/maven2/" configure Maven Repository cannot be reached. Verify that your computer has network access and firewalls are configured correctly, then try the action again. If the connection still fails, the repository server might be temporarily unavailable. Any help or hint would be greatly appreciated!
    👀 1
    c
    • 2
    • 1
  • w

    white-notebook-25493

    10/31/2025, 2:41 PM
    Hi everyone. I'm trying to install and configure Rancher Desktop on my PC. Unfortunately, my Windows username has an apostrophe. When I start Rancher, it gets stuck on "initializing Rancher Desktop." In the logs, I see that the error is caused by the apostrophe. Do you have any solutions? Unfortunately, I can't change Windows users because the PC is preconfigured. Thank you all.
    h
    • 2
    • 1
  • s

    strong-action-64019

    11/03/2025, 7:18 AM
    https://www.suse.com/c/rancher_blog/lightning-fast-kubernetes-management-with-ranchers-vai-project/ The link "opening an issue on GitHub" links to https://www.google.com/search?q=link-to-github
    s
    • 2
    • 1
  • b

    better-telephone-21557

    11/03/2025, 4:25 PM
    Hello, I am currently trying to generate a Chef cookbook for rke2-agent and rke2-server configuration using the RPM install method. When I try to install the rancher-system-agent, it complains that the RPM is managing the install, wants to use a tarball instead, and asks me to uninstall the RPM before continuing. I'm curious if anybody has a Chef cookbook they have created, or some other options for automating the deployment of rke2-agent and rke2-server? Are there advantages to using the tarball over the RPM?
    b
    c
    • 3
    • 11
  • a

    ancient-dinner-76338

    11/04/2025, 3:51 AM
    Hello Rancher team, I have a Fleet issue like this when provisioning an RKE2 cluster from Rancher using the Calico CNI. What is the possible cause of this? This issue does not happen when I use a Security Group with "allow all ports". However, this issue occurs when I try to only allow the ports that are recommended by Rancher.
    c
    • 2
    • 2
  • l

    little-ram-70987

    11/04/2025, 2:17 PM
    Hi guys, today we wanted to test backup/restore functionality. Here is what we did: 1. Backup -> S3. 2. Install the NeuVector application. 3. Restore. Expected result: system restored without NeuVector. Actual result: system restored with the NeuVector application. Any idea why this happened? I don't know why it didn't delete NeuVector.
    • 1
    • 1
  • n

    nutritious-intern-6999

    11/05/2025, 12:54 PM
    Hi, for a customer we need a SBOM (Software Bill of Materials) for rke2 (and k3s). Where can I find that? Greetings, Josef
    m
    • 2
    • 4
  • a

    astonishing-nail-55291

    11/05/2025, 1:34 PM
    Hello, did anybody get SAML working with Keycloak? I turned off signature and encryption for testing purposes and allowed all users to log in. But once I enter my credentials into Keycloak I'm getting
    "Logging in failed: Your account may not be authorized to log in"
    Followed the documentation here: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permiss[…]uration/authentication-config/configure-keycloak-saml
    • 1
    • 1
  • p

    proud-secretary-84522

    11/05/2025, 5:07 PM
    Hello Rancher team! I have a basic question. I'm working on setting up GPU support on a single-node RKE2 cluster and have run into an issue with the
    nvidia-device-plugin-daemonset
    . The
    nvidia-device-plugin-daemonset
    pod is stuck in a
    CrashLoopBackOff
    loop. The NVIDIA driver is correctly installed on the host, and
    nvidia-smi
    works as expected. When I run
    kubectl describe pod nvidia-device-plugin-daemonset-85bkr -n kube-system
    on the failing pod, I get this specific error message:
    Copy code
    Events:
      Type     Reason   Age                     From     Message
      ----     ------   ----                    ----     -------
      Warning  BackOff  40m (x1266 over 5h15m)  kubelet  Back-off restarting failed container nvidia-device-plugin-ctr in pod nvidia-device-plugin-daemonset-85bkr_kube-system(3f6e3f80-6add-41bc-b49a-6e0aa8f2af30)
      Normal   Pulled   38m (x59 over 5h15m)    kubelet  Container image "nvcr.io/nvidia/k8s-device-plugin:v0.18.0" already present on machine
      Warning  Failed   23m (x5 over 26m)       kubelet  Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/kubepods/besteffort/pod3f6e3f80-6add-41bc-b49a-6e0aa8f2af30/nvidia-device-plugin-ctr" instead
      Normal   Pulled   4m44s (x9 over 26m)     kubelet  Container image "nvcr.io/nvidia/k8s-device-plugin:v0.18.0" already present on machine
      Normal   Created  4m44s (x9 over 26m)     kubelet  Created container: nvidia-device-plugin-ctr
      Warning  BackOff  55s (x117 over 26m)     kubelet  Back-off restarting failed container nvidia-device-plugin-ctr in pod nvidia-device-plugin-daemonset-85bkr_kube-system(3f6e3f80-6add-41bc-b49a-6e0aa8f2af30)
    So any pod that requires the GPU is basically stuck in Pending...
    Copy code
    kubectl get pods 
    NAME              READY   STATUS    RESTARTS   AGE
    nvidia-gpu-test   0/1     Pending   0          5h10m
    I have been following this guide: https://docs.rke2.io/advanced?_highlight=gpu#deploy-nvidia-operator. Any help? Thanks. This is the config.toml.tmpl:
    Copy code
    cat /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl 
    {{ template "base" . }}
    
    [plugins."io.containerd.cri.v1.cri".containerd]
      default_runtime_name = "nvidia"
    
    [plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia]
      privileged_without_host_devices = false
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia.options]
        BinaryName = "/usr/bin/nvidia-container-runtime"
    Any help?
    c
    • 2
    • 12
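On the CrashLoopBackOff above: the "expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups" error typically means the container is launched through a runtime entry that is missing the systemd-cgroup option the default runc entry carries. A hedged sketch of the template with SystemdCgroup added (an assumption to verify against the config.toml RKE2 actually generates on your node; the plugin section names differ between containerd 1.x and 2.x):

```toml
{{ template "base" . }}

[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia]
  privileged_without_host_devices = false
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia.options]
    BinaryName = "/usr/bin/nvidia-container-runtime"
    # Assumed fix: match the host's systemd cgroup driver so runc
    # receives slice:prefix:name paths instead of plain /kubepods/... paths.
    SystemdCgroup = true
```

After editing the template, restart rke2-agent (or rke2-server on a single node) and compare the rendered /var/lib/rancher/rke2/agent/etc/containerd/config.toml against what the NVIDIA operator docs expect.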
  • e

    elegant-truck-75829

    11/06/2025, 8:16 AM
    Hey folks 👋 I've got a Rancher setup with multiple downstream clusters, and I'm trying to set up project-level logging and monitoring. The goal is for each project to have its own isolated metrics and dashboards, so users in a project can only see their own metrics/logs, not others'. Has anyone implemented this kind of observability isolation in Rancher? Would it make sense to deploy separate Prometheus/Grafana per project, or can this be handled through Rancher's built-in Monitoring & Logging with access control? Appreciate any pointers or best practices! 🙏
    g
    r
    • 3
    • 4
  • n

    nutritious-intern-6999

    11/06/2025, 1:38 PM
    Hi again, similar to my question above: I would like to know how I can get an Open Source Acknowledgment (OSA) document for all SUSE products (SLE Micro 6.x, RKE2, K3s, Longhorn and KubeVirt) containing all OSS licenses and copyrights. I can't find them anywhere on the SUSE website, but we need this for the same reason mentioned above. Thank you again, Josef
    s
    • 2
    • 2
  • s

    shy-gold-40913

    11/06/2025, 5:21 PM
    I'm setting up RKE2 for the first time, and I got this error message when trying to start the rke2-server daemon:
    Copy code
    level=fatal msg="Failed to reconcile with temporary etcd: failed to normalize server token; must be in format K10<CA-HASH>::<USERNAME>:<PASSWORD> or <PASSWORD>"
    In /etc/rancher/rke2/config.yaml I've specified both agent-token and token using the output of rke2 token generate (ran it twice), which produced tokens in [a-z0-9]{6}.[a-z0-9]{16} format. What am I doing wrong here?
    c
    • 2
    • 27
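For reference, the error text itself names the two accepted shapes: a full token K10<CA-HASH>::<USERNAME>:<PASSWORD>, or a bare password. A small sketch classifying a token string against those shapes (the regex is my own reading of the error message, not RKE2's actual parser):

```python
import re

# Full format named in the error: K10<CA-HASH>::<USERNAME>:<PASSWORD>
FULL = re.compile(r"^K10([0-9a-f]+)::([^:]+):(.+)$")

def classify(token: str) -> str:
    """Roughly classify a join token per the formats the error names."""
    m = FULL.match(token)
    if m:
        return f"full token (user={m.group(2)})"
    if "::" in token:
        return "malformed: has '::' but no valid K10<CA-HASH> prefix"
    return "bare password"

print(classify("K10deadbeef::server:s3cret"))  # full token (user=server)
print(classify("abc123.0123456789abcdef"))     # bare password
```

If your config.yaml values look like bare passwords (the [a-z0-9]{6}.[a-z0-9]{16} output of rke2 token generate) and the server still rejects them, it may be worth checking for stray quoting, whitespace, or a value that accidentally contains "::", since per the message that triggers the strict full-format parse.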
  • c

    creamy-pharmacist-50075

    11/08/2025, 10:36 AM
    Has anyone been able to import k0s clusters into Rancher?
  • c

    crooked-sunset-83417

    11/09/2025, 1:16 AM
    Anyone else attending on Monday? 🔥🔥
    v
    • 2
    • 1
  • n

    nutritious-intern-6999

    11/10/2025, 10:08 AM
    Hi, I would like to add a custom script in Edge Image Builder that shows a dialog during first boot, using the shell tool "dialog". My problem is that the dialog is shown for a split second, but it does not stop to wait for user input; Combustion just proceeds with the other custom scripts of Edge Image Builder that are configured in the directory custom/scripts. How do you configure custom scripts that show shell dialogs? Thanks and greetings, Josef
  • b

    breezy-restaurant-60331

    11/11/2025, 2:10 PM
    Does the new release support the newer version/s of Electron now?
    c
    • 2
    • 2
  • a

    adamant-kite-43734

    11/12/2025, 9:45 AM
    This message was deleted.
    s
    c
    • 3
    • 2
  • b

    brief-vase-99095

    11/12/2025, 5:14 PM
    Using rancher server's UI, how do I copy secrets between namespaces?
    s
    • 2
    • 1
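A CLI alternative worth knowing while you look for a UI path: export the secret, strip the server-generated metadata, and re-apply it in the target namespace. The cleanup step, sketched in Python (all names here are examples):

```python
def retarget_secret(secret: dict, target_ns: str) -> dict:
    """Strip server-generated fields from a Secret manifest and point it
    at a new namespace so it can be re-applied with kubectl apply."""
    meta = secret.get("metadata", {})
    # Keep only the portable metadata; uid/resourceVersion/creationTimestamp
    # are bound to the original object and must not be re-applied.
    cleaned = {k: v for k, v in meta.items()
               if k in ("name", "labels", "annotations")}
    cleaned["namespace"] = target_ns
    return {**secret, "metadata": cleaned}

src = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": "db-creds",
        "namespace": "team-a",
        "uid": "1234",                # server-generated, must go
        "resourceVersion": "99",      # server-generated, must go
        "creationTimestamp": "2025-11-12T00:00:00Z",
    },
    "type": "Opaque",
    "data": {"password": "aHVudGVyMg=="},
}
print(retarget_secret(src, "team-b")["metadata"])
# {'name': 'db-creds', 'namespace': 'team-b'}
```

The same idea with kubectl and jq (example names): kubectl get secret db-creds -n team-a -o json | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp) | .metadata.namespace = "team-b"' | kubectl apply -f -. Rancher's UI also has a Clone action on secrets in recent versions that lets you pick a different namespace, if your version offers it.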
  • s

    shy-gold-40913

    11/12/2025, 6:27 PM
    Having some issues with overlay networking in RKE2... I went through the steps in here: https://ranchermanager.docs.rancher.com/troubleshooting/other-troubleshooting-tips/networking My firewall configs: Server nodes:
    Copy code
    # firewall-cmd --zone=public --list-all
    public (default, active)
      target: default
      ingress-priority: 0
      egress-priority: 0
      icmp-block-inversion: no
      interfaces: ens192
      sources:
      services: etcd-client etcd-server kube-apiserver kubelet wireguard
      ports: 9345/tcp 9099/tcp 30000-32767/tcp 2381/tcp 51821/udp 8472/udp
      protocols:
      forward: yes
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:
    Agent nodes:
    Copy code
    # firewall-cmd --zone=public --list-all
    public (default, active)
      target: default
      ingress-priority: 0
      egress-priority: 0
      icmp-block-inversion: no
      interfaces: ens192 ens224
      sources:
      services: kubelet wireguard
      ports: 9099/tcp 30000-32767/tcp 8472/udp 51821/udp
      protocols:
      forward: yes
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:
    Output from overlaytest:
    Copy code
    # ./overlaytest.sh 
    => Start network overlay test
    k8sagent02 can reach k8sagent02
    command terminated with exit code 1
    FAIL: overlaytest-4dtr4 on k8sagent02 cannot reach pod IP 10.252.2.2 on k8ssvr02
    command terminated with exit code 1
    FAIL: overlaytest-4dtr4 on k8sagent02 cannot reach pod IP 10.252.0.4 on k8ssvr01
    command terminated with exit code 1
    FAIL: overlaytest-4dtr4 on k8sagent02 cannot reach pod IP 10.252.3.19 on k8sagent01
    command terminated with exit code 1
    FAIL: overlaytest-4dtr4 on k8sagent02 cannot reach pod IP 10.252.1.2 on k8ssvr03
    command terminated with exit code 1
    FAIL: overlaytest-8vxld on k8ssvr02 cannot reach pod IP 10.252.4.3 on k8sagent02
    k8ssvr02 can reach k8ssvr02
    command terminated with exit code 1
    FAIL: overlaytest-8vxld on k8ssvr02 cannot reach pod IP 10.252.0.4 on k8ssvr01
    command terminated with exit code 1
    FAIL: overlaytest-8vxld on k8ssvr02 cannot reach pod IP 10.252.3.19 on k8sagent01
    command terminated with exit code 1
    FAIL: overlaytest-8vxld on k8ssvr02 cannot reach pod IP 10.252.1.2 on k8ssvr03
    command terminated with exit code 1
    FAIL: overlaytest-ds7sh on k8ssvr01 cannot reach pod IP 10.252.4.3 on k8sagent02
    command terminated with exit code 1
    FAIL: overlaytest-ds7sh on k8ssvr01 cannot reach pod IP 10.252.2.2 on k8ssvr02
    k8ssvr01 can reach k8ssvr01
    command terminated with exit code 1
    FAIL: overlaytest-ds7sh on k8ssvr01 cannot reach pod IP 10.252.3.19 on k8sagent01
    command terminated with exit code 1
    FAIL: overlaytest-ds7sh on k8ssvr01 cannot reach pod IP 10.252.1.2 on k8ssvr03
    command terminated with exit code 1
    FAIL: overlaytest-jw99g on k8sagent01 cannot reach pod IP 10.252.4.3 on k8sagent02
    command terminated with exit code 1
    FAIL: overlaytest-jw99g on k8sagent01 cannot reach pod IP 10.252.2.2 on k8ssvr02
    command terminated with exit code 1
    FAIL: overlaytest-jw99g on k8sagent01 cannot reach pod IP 10.252.0.4 on k8ssvr01
    k8sagent01 can reach k8sagent01
    command terminated with exit code 1
    FAIL: overlaytest-jw99g on k8sagent01 cannot reach pod IP 10.252.1.2 on k8ssvr03
    command terminated with exit code 1
    FAIL: overlaytest-mmsv9 on k8ssvr03 cannot reach pod IP 10.252.4.3 on k8sagent02
    command terminated with exit code 1
    FAIL: overlaytest-mmsv9 on k8ssvr03 cannot reach pod IP 10.252.2.2 on k8ssvr02
    command terminated with exit code 1
    FAIL: overlaytest-mmsv9 on k8ssvr03 cannot reach pod IP 10.252.0.4 on k8ssvr01
    command terminated with exit code 1
    FAIL: overlaytest-mmsv9 on k8ssvr03 cannot reach pod IP 10.252.3.19 on k8sagent01
    k8ssvr03 can reach k8ssvr03
    => End network overlay test
    I've also excluded the various tunnel interfaces from NetworkManager per this https://docs.rke2.io/known_issues#networkmanager
    Copy code
    # cat /etc/NetworkManager/conf.d/rke2-canal.conf
    [keyfile]
    unmanaged-devices=interface-name:flannel*;interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
    How do I begin troubleshooting this? I'm running rke2 stable on AlmaLinux 10
    c
    f
    • 3
    • 7
  • n

    nutritious-petabyte-80748

    11/13/2025, 4:31 AM
    Hi everyone, I'm starting with Rancher. I installed the Monitoring Helm chart through the Rancher interface and it's working, but I'm having a specific problem. I need to use the Google Monitoring datasource; the data is displayed in Grafana through the Rancher interface, but when I access it via the external Grafana URL, the datasource gives an error. I need access to Grafana both internally through the Rancher interface and externally for users who don't have Rancher access.
  • b

    blue-jelly-47972

    11/13/2025, 4:59 AM
    Hello all, with the announcement of the NGINX Ingress retirement -- https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/ I'm curious to understand how that will impact the NGINX Ingress versions supported in Rancher.
    c
    j
    m
    • 4
    • 17
  • m

    most-balloon-51259

    11/13/2025, 9:48 AM
    Hello, I'm extending (but separating) the thread about the nginx retirement, and would like to clarify a few questions so as not to create additional issues on GitHub. I've been using Rancher since version 2, and I really like the convenience of ingress and web-based configuration of this functionality. I've always used ingress-nginx from the very beginning, but I have questions: 1. Is the Rancher UI suitable for configuring Traefik, Envoy, or other ingress controllers besides ingress-nginx? (I think the answer is yes, but I'm still clarifying.) 2. I'm not familiar with the Gateway API, but I also want to clarify: does Rancher support configuring it in the web UI? For example, we use ServiceMonitor/PodMonitor and write our own manifests for them, since that functionality is configured only via YAML files and doesn't work natively in the UI. Will there be similar issues with the Gateway API, or are you planning to add related functionality in future releases?
    o
    • 2
    • 1
  • k

    kind-air-74358

    11/13/2025, 10:49 AM
    Hi all, Since a few days we've noticed that our
    fleet-agent
    is constantly being restarted. It could well be caused by either a Rancher update from 2.11 to 2.12 or by switching the root certificate from self-signed to a provided root certificate (following these docs). In the
    fleet-controller/fleet-agentmanagement
    we see the following logs constantly
    Copy code
    time="2025-11-13T10:44:39Z" level=info msg="Deleted old agent for cluster (fleet-local/local) in namespace cattle-fleet-local-system"
    time="2025-11-13T10:44:39Z" level=info msg="Cluster import for 'fleet-local/local'. Deployed new agent"
    time="2025-11-13T10:45:00Z" level=info msg="Waiting for service account token key to be populated for secret cluster-fleet-local-local-1a3d67d0a899/request-cs9x7-8645b8de-5e30-4eb0-a9fe-dc96f1081856-token"
    time="2025-11-13T10:45:02Z" level=info msg="Cluster registration request 'fleet-local/request-cs9x7' granted, creating cluster, request service account, registration secret"
    The fleet-agent in cattle-fleet-local-system isn't reporting any errors, but just reports
    Copy code
    I1113 10:44:40.589439       1 leaderelection.go:257] attempting to acquire leader lease cattle-fleet-local-system/fleet-agent...
    {"level":"info","ts":"2025-11-13T10:44:40Z","logger":"setup","msg":"new leader","identity":"fleet-agent-6d5f55c7d7-4pncc-1"}
    I1113 10:45:00.267179       1 leaderelection.go:271] successfully acquired lease cattle-fleet-local-system/fleet-agent
    {"level":"info","ts":"2025-11-13T10:45:00Z","logger":"setup","msg":"renewed leader","identity":"fleet-agent-5cf8799b4c-xn274-1"}
    time="2025-11-13T10:45:00Z" level=warning msg="Cannot find fleet-agent secret, running registration"
    time="2025-11-13T10:45:00Z" level=info msg="Creating clusterregistration with id 'pwvp47nf7r8pg8zfmd4tx7vxb6rhr5dwv2gcnn2m6zlrtmt54ss9kl' for new token"
    time="2025-11-13T10:45:02Z" level=info msg="Waiting for secret 'cattle-fleet-clusters-system/c-9072b2e8eac3a21368e0428adc1a0244a61acd4ee571c7f88f574d905cd52' on management cluster for request 'fleet-local/request-cs9x7': secrets \"c-9072b2e8eac3a21368e0428adc1a0244a61acd4ee571c7f88f574d905cd52\" not found"
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"setup","msg":"successfully registered with upstream cluster","namespace":"cluster-fleet-local-local-1a3d67d0a899"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"setup","msg":"listening for changes on upstream cluster","cluster":"local","namespace":"cluster-fleet-local-local-1a3d67d0a899"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"setup","msg":"Starting controller","metricsAddr":":8080","probeAddr":":8081","systemNamespace":"cattle-fleet-local-system"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"setup","msg":"starting manager"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"starting server","name":"health probe","addr":"0.0.0.0:8081"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting EventSource","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","source":"kind source: *v1alpha1.BundleDeployment"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"setup","msg":"Starting cluster status ticker","checkin interval":"15m0s","cluster namespace":"fleet-local","cluster name":"local"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting EventSource","controller":"drift-reconciler","source":"channel source: 0xc00078f3b0"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting Controller","controller":"drift-reconciler"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting workers","controller":"drift-reconciler","worker count":50}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting Controller","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment"}
    {"level":"info","ts":"2025-11-13T10:45:04Z","msg":"Starting workers","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","worker count":50}
    {"level":"info","ts":"2025-11-13T10:45:04Z","logger":"bundledeployment.helm-deployer.install","msg":"Upgrading helm release","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","BundleDeployment":{"name":"fleet-agent-local","namespace":"cluster-fleet-local-local-1a3d67d0a899"},"namespace":"cluster-fleet-local-local-1a3d67d0a899","name":"fleet-agent-local","reconcileID":"1e4df644-069b-4d05-84ed-2c447bc54d15","commit":"","dryRun":false}
    {"level":"info","ts":"2025-11-13T10:45:05Z","logger":"bundledeployment.deploy-bundle","msg":"Deployed bundle","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","BundleDeployment":{"name":"fleet-agent-local","namespace":"cluster-fleet-local-local-1a3d67d0a899"},"namespace":"cluster-fleet-local-local-1a3d67d0a899","name":"fleet-agent-local","reconcileID":"1e4df644-069b-4d05-84ed-2c447bc54d15","deploymentID":"s-2f332c47bb36e1bc8d70932ee0158e1b3289ae7ef2ea995e2bd77828ef2e9:8a42b4463e55a59ce2ccdf3c53c32455ce5fd0f601587bf57b5624b3cf8bb623","appliedDeploymentID":"s-c1fc5eeb18677acb8c4a8fd2054c2c40c4022f002ea06437f9b108731be8f:8a42b4463e55a59ce2ccdf3c53c32455ce5fd0f601587bf57b5624b3cf8bb623","release":"cattle-fleet-local-system/fleet-agent-local:20","DeploymentID":"s-2f332c47bb36e1bc8d70932ee0158e1b3289ae7ef2ea995e2bd77828ef2e9:8a42b4463e55a59ce2ccdf3c53c32455ce5fd0f601587bf57b5624b3cf8bb623"
    And afterwards the fleet-agent is restarted again... Any ideas on what could be wrong and how to fix it? We've already redeployed the fleet-controller and fleet-agent deployments, and reinstalled the fleet-agent and fleet-controller helm charts, with no luck.
    b
    c
    • 3
    • 36
  • c

    crooked-sunset-83417

    11/13/2025, 7:12 PM
    For anyone interested.
    🫠 1
    ❤️ 1
    • 1
    • 1
  • h

    hallowed-manchester-34892

    11/14/2025, 3:39 AM
    Hello guys, I have an issue with a cronjob running curl. It always gives me "curl: (6) Could not resolve host: upoint-service.nomin.svc.cluster.local" even though it's in the same namespace. I checked with a dns-test pod using nslookup, but it gives the same error. What should I do?
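A same-namespace service should resolve even by its short name, because the pod's /etc/resolv.conf search list expands it before trying it verbatim. This sketch mimics that expansion (the search list and ndots value are the usual cluster defaults, an assumption; the service and namespace names are from the message above):

```python
def dns_candidates(name: str, search: list[str], ndots: int = 5) -> list[str]:
    """Mimic resolv.conf behavior: a name with fewer dots than ndots is
    tried against each search domain before being tried as-is."""
    tried = []
    if name.count(".") < ndots and not name.endswith("."):
        tried += [f"{name}.{d}" for d in search]
    tried.append(name.rstrip("."))
    return tried

# Typical search list for a pod in namespace "nomin" (assumed defaults)
search = ["nomin.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for candidate in dns_candidates("upoint-service", search):
    print(candidate)
```

Since even a dedicated dns-test pod fails on the full FQDN, the name itself is probably fine and the resolver path is not: check that /etc/resolv.conf inside the pod points at the cluster DNS service IP, that the CoreDNS pods are healthy (kubectl -n kube-system get pods -l k8s-app=kube-dns), and that the service upoint-service actually exists in namespace nomin with endpoints.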