hundreds-oxygen-67526
09/05/2025, 12:14 AM
diffdisk and update lima.yaml. Logs show:
Resize instance 0's disk from 500GiB to 100GiB diffDisk: Shrinking is currently unavailable
It looks like Rancher Desktop is ignoring the disks: config in lima.yaml and enforcing the default size.
Has anyone successfully increased the Rancher Desktop disk beyond 100GiB on macOS (ARM) recently? Any guidance or workarounds would be appreciated 🙏
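For reference, a hedged sketch of the override route rather than editing the generated lima.yaml directly: Lima itself reads a top-level disk: key, and Rancher Desktop merges an override file into the config it generates. The override path below and whether Rancher Desktop actually honours disk: (instead of resetting it to the 100GiB default) are assumptions, not a confirmed fix.

# ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml  (path is an assumption)
# Note: Lima can grow a disk but not shrink it, which is why the 500GiB -> 100GiB
# resize in the log above fails with "Shrinking is currently unavailable".
disk: "500GiB"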
quaint-librarian-68606
09/05/2025, 1:41 PM
gentle-dawn-38066
09/06/2025, 4:31 PM
brash-zebra-92886
09/08/2025, 3:35 PM
imperative-api-extension
it is installed with a bunch of assumptions and is impacting basic k8s function?
numerous-agency-66232
09/08/2025, 3:45 PM
I'm running Rancher (2.12.1) installed via Helm Chart deployed w/ ArgoCD. This lives in a lower EKS environment.
• This is working well and I intend for this to be my management cluster for the time being
• We’ll call this environment source
I’m trying to import another EKS cluster (diff AWS account + region)
• We’ll call this environment target
• I’ve allowed NAT Gateway IPs at the EKS level + SG level where relevant (on both source and target EKS clusters)
• However, I’m still getting the error:
failed to communicate with cluster: Get "https://MY_TARGET_CLUSTER.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/cattle-system": dial tcp TARGET_EKS_PUBLIC_IP:443: i/o timeout
When I check the pod logs of the target cluster I see the following:
INFO: https://rancher.<MY_DOMAIN>.com/ping is accessible
...
time="2025-09-04T20:17:18Z" level=info msg="Listening on /tmp/log.sock" │
│ time="2025-09-04T20:17:18Z" level=info msg="starting cattle-credential-cleanup goroutine in the background" │
│ time="2025-09-04T20:17:18Z" level=info msg="Rancher agent version v2.12.1 is starting" │
│ time="2025-09-04T20:17:18Z" level=error msg="unable to read CA file from /etc/kubernetes/ssl/certs/serverca: open /etc/kubernetes/ssl/certs/serverca: no such file or directory" │
│ time="2025-09-04T20:17:18Z" level=info msg="Connecting to <wss://rancher>.<MY_DOMAIN>.com/v3/connect/register with token starting with TOKEN_STRING"
time="2025-09-06T01:49:06Z" level=info msg="Connecting to proxy" url="<wss://rancher>.<MY_DOMAIN>.com/v3/connect"
I think we can ignore those cert errors as I’ve already set agentTLSMode: "system-store"
• When I fixed this, it proceeded beyond the cert errors -> to the "Connecting to proxy" msg
Further, I’ve added the following to NO_PROXY on the source cluster:
,.eks.amazonaws.com,eks.amazonaws.com,TARGET_CLUSTER.gr7.us-west-2.eks.amazonaws.com
and the following on the target cluster:
rancher.<MY_DOMAIN>.com,.svc,.cluster.local,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
I am at a loss now for why this isn’t connecting. A timeout implies an SG / network issue, I would guess.
My understanding is these 2 AWS accounts do not need network connectivity, i.e. via VPC peering. They should only need API access to the cluster URL, but I could be wrong there.
• The public IP for the target EKS cluster it times out on is AWS-owned and not anything I have access to
tl;dr why is my import of an existing EKS cluster failing with a timeout error?
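One way to separate a Rancher problem from a plain reachability problem: the i/o timeout is Rancher (in the source cluster) dialing the target EKS public endpoint directly, so a throwaway pod in the source cluster that curls the same endpoint should fail the same way if the SGs or the endpoint's public-access CIDR allowlist are the culprit. A minimal sketch; the name, namespace and image tag are illustrative, and the URL is the placeholder from the message above:

apiVersion: v1
kind: Pod
metadata:
  name: target-api-probe        # illustrative name
  namespace: cattle-system
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.10.1
      # A 401/403 body still proves TCP + TLS reachability; another i/o timeout
      # points at SGs / the EKS public-access CIDR list rather than Rancher itself.
      args: ["-kv", "--max-time", "10",
             "https://MY_TARGET_CLUSTER.gr7.us-west-2.eks.amazonaws.com/version"]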
creamy-pencil-82913
09/08/2025, 4:14 PM
creamy-pencil-82913
09/08/2025, 4:16 PM
hundreds-evening-84071
09/09/2025, 12:49 PM
handsome-oil-82912
09/09/2025, 1:43 PM
GracefulNodeShutdown setting in kubelet? I have tried dropping the following config in /var/lib/rancher/rke2/agent/etc/kubelet.conf.d without luck. The nodes are not drained when I reboot or do shutdown:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
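Two notes, hedged: GracefulNodeShutdown terminates pods on shutdown but does not cordon or drain the node, so "not drained" is expected even when it works; and if this RKE2 version isn't picking up the kubelet.conf.d drop-in, the same two settings can be passed as kubelet flags through the RKE2 config file. A sketch using the standard kubelet flag names:

# /etc/rancher/rke2/config.yaml on each node, then restart rke2-server / rke2-agent.
# Equivalent to the shutdownGracePeriod / shutdownGracePeriodCriticalPods fields above.
kubelet-arg:
  - "shutdown-grace-period=30s"
  - "shutdown-grace-period-critical-pods=10s"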
quaint-librarian-68606
09/09/2025, 5:19 PM
nice-businessperson-14225
09/09/2025, 6:17 PM
numerous-agency-66232
09/10/2025, 5:02 PM
bland-pillow-38313
09/10/2025, 5:09 PM
bland-pillow-38313
09/10/2025, 5:12 PM
I0910 20:11:41.266871 16201 versioner.go:87] Right kubectl missing, downloading version 1.28.15+k3s1
F0910 20:11:42.022164 16201 main.go:70] error while trying to get contents of https://storage.googleapis.com/kubernetes-release/release/v1.28.15/bin/darwin/amd64/kubectl.sha256: GET https://storage.googleapis.com/kubernetes-release/release/v1.28.15/bin/darwin/amd64/kubectl.sha256 returned http status 404 Not Found
It's trying to get amd64 kubectl. I have double-checked that I have installed the arm64 installer for Rancher
loud-potato-42814
09/10/2025, 5:36 PM
$ diff -r -u original/ suse/
diff -r -u original/alertmanager/Chart.yaml suse/alertmanager/Chart.yaml
--- original/alertmanager/Chart.yaml 2025-09-03 18:23:29.000000000 +0200
+++ suse/alertmanager/Chart.yaml 2025-09-03 18:38:14.000000000 +0200
@@ -1,26 +1,20 @@
 annotations:
-  artifacthub.io/license: Apache-2.0
-  artifacthub.io/links: |
-    - name: Chart Source
-      url: https://github.com/prometheus-community/helm-charts
+  helm.sh/images: |
+    - image: dp.apps.rancher.io/containers/alertmanager:0.28.1-10.9
+      name: alertmanager
+    - image: dp.apps.rancher.io/containers/prometheus-config-reloader:0.85.0-9.9
+      name: prometheus-config-reloader
 apiVersion: v2
-appVersion: v0.28.1
+appVersion: 0.28.1
 description: The Alertmanager handles alerts sent by client applications such as the
-  Prometheus server.
-home: https://prometheus.io/
-icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
-keywords:
-- monitoring
-kubeVersion: '>=1.25.0-0'
+  Prometheus server. It takes care of deduplicating, grouping, and routing them to
+  the correct receiver integrations such as email, PagerDuty, OpsGenie, or many other
+  mechanisms thanks to the webhook receiver. It also takes care of silencing and inhibition
+  of alerts.
+home: https://apps.rancher.io/applications/alertmanager
+icon: https://apps.rancher.io/logos/alertmanager.png
 maintainers:
-- email: monotek23@gmail.com
-  name: monotek
-  url: https://github.com/monotek
-- email: naseem@transit.app
-  name: naseemkullah
-  url: https://github.com/naseemkullah
+- name: SUSE LLC
+  url: https://www.suse.com/
 name: alertmanager
-sources:
-- https://github.com/prometheus/alertmanager
-type: application
 version: 1.26.0
...
steep-petabyte-14152
09/10/2025, 6:52 PM
faint-policeman-5206
09/11/2025, 7:52 AM
victorious-action-31342
09/11/2025, 12:12 PM
wide-author-88664
09/11/2025, 2:52 PM
cattle-cluster-agent be able to run without being able to contact the Rancher server. However, the cluster import in Rancher still shows as "Provisioning". Is there a way to let Rancher import/manage a cluster where the cluster cannot reach the Rancher server due to a firewall, but Rancher can reach the cluster API to be imported?
wide-author-88664
09/11/2025, 2:55 PM
cattle-cluster-agent container logs:
Thu Sep 11 14:52:40 UTC 2025: Cattle cluster agent running in passive mode for cluster import
silly-animal-7734
09/11/2025, 6:23 PM
victorious-action-31342
09/12/2025, 8:02 AM
little-lighter-65277
09/12/2025, 2:28 PM
full-shoe-26526
09/15/2025, 3:29 AM
AUDIT_LEVEL variable. However, the number of API call logs is too high. I only want to audit the actions performed by administrators. Is there a good way to achieve this?
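As far as I know, AUDIT_LEVEL only controls how verbose each entry is, not whose requests get logged. If what is actually needed is cluster-level auditing scoped to administrators, the upstream kube-apiserver audit policy (a different mechanism from Rancher's own audit log) can match on user groups. A sketch; the group name is an assumption, substitute whatever your admins actually authenticate as:

apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - RequestReceived
rules:
  # Log admin requests in full...
  - level: RequestResponse
    userGroups: ["system:masters"]   # assumption: adjust to your admin group(s)
  # ...and drop everything else.
  - level: None

Otherwise the filtering has to happen downstream, in whatever ships the Rancher audit log, keyed on the requesting user in each entry.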
magnificent-france-42174
09/15/2025, 12:59 PM
watch, get, and list permissions on metrics.k8s.io and management.cattle.io with ranchermetrics.
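If the question is how to grant exactly that, a minimal ClusterRole sketch; the name is illustrative, and resources: "*" on management.cattle.io should be narrowed to the specific resources actually read:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ranchermetrics-readonly    # illustrative name
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["management.cattle.io"]
    resources: ["*"]               # narrow this to what is actually needed
    verbs: ["get", "list", "watch"]

Bind it with a ClusterRoleBinding to whatever identity "ranchermetrics" refers to (service account or user).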
astonishing-stone-85106
09/15/2025, 3:34 PM
astonishing-stone-85106
09/15/2025, 3:38 PM
astonishing-stone-85106
09/15/2025, 3:38 PM
powerful-easter-15334
09/17/2025, 4:37 AM
ancient-dinner-76338
09/17/2025, 6:55 AM