general
  • prehistoric-solstice-99854
    03/21/2023, 8:03 PM
    I have a question about Rancher 1.6 and the Elasticsearch 2 catalog install. It might be too old for anyone to help but I thought I’d ask here since I haven’t found any help searching online.
  • polite-piano-74233
    03/21/2023, 8:39 PM
    When I upgrade my Rancher version, does that also upgrade the hyperkube version?
  • wooden-spoon-95626
    03/22/2023, 2:00 AM
    Hi team, I would like to allocate more CPU and memory to the VM run by Rancher Desktop. How can I do that? Thanks!
  • bored-farmer-36655
    03/22/2023, 2:12 AM
    @wooden-spoon-95626 in the Preferences (you should ask questions in #rancher-desktop)
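    (A note on the answer above: recent Rancher Desktop releases can also change this from the command line with rdctl. A minimal sketch, assuming a current release; the flag names have changed across versions, so check rdctl set --help first.)

    # Inspect the current VM settings
    rdctl list-settings
    # Allocate 4 CPUs and 8 GB of memory to the Rancher Desktop VM
    rdctl set --virtual-machine.number-cpus 4 --virtual-machine.memory-in-gb 8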
  • clever-butcher-21731
    03/22/2023, 6:29 AM
    Hello, we have cluster v1.21.5+k3s2 installed on Ubuntu 18.04.5 LTS. After restarting the k3s service, it reverts net.netfilter.nf_conntrack_max to the default kernel parameter:
    root@drm-set1-master01:~# sysctl -p
    net.core.somaxconn = 65535
    net.ipv4.ip_local_port_range = 1024 65535
    net.nf_conntrack_max = 4194304
    net.netfilter.nf_conntrack_max = 4194304
    fs.file-max = 2097152
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 65535
    net.ipv4.ip_forward = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    root@drm-set1-master01:~# systemctl restart k3s
    root@drm-set1-master01:~# sysctl -a | grep net.netfilter.nf_conntrack_max
    sysctl: reading key "net.ipv6.conf.all.stable_secret"
    sysctl: reading key "net.ipv6.conf.cni0.stable_secret"
    sysctl: reading key "net.ipv6.conf.default.stable_secret"
    sysctl: reading key "net.ipv6.conf.ens160.stable_secret"
    sysctl: reading key "net.ipv6.conf.flannel/1.stable_secret"
    sysctl: reading key "net.ipv6.conf.kube-ipvs0.stable_secret"
    sysctl: reading key "net.ipv6.conf.lo.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth000ece3e.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth1412c3ee.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth3d3b54df.stable_secret"
    net.netfilter.nf_conntrack_max = 131072
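    (Context for the conntrack question above: kube-proxy resets nf_conntrack_max on startup to conntrack-max-per-core, default 32768, times the CPU count, and 32768 x 4 cores is exactly the 131072 seen here. A hedged sketch of one common workaround, assuming the default k3s config location, is to tell the embedded kube-proxy not to touch the value:)

    # /etc/rancher/k3s/config.yaml
    # conntrack-max-per-core=0 makes kube-proxy leave nf_conntrack_max as-is,
    # so the value from /etc/sysctl.conf survives a k3s restart
    kube-proxy-arg:
      - "conntrack-max-per-core=0"

    After editing, restart and re-check: systemctl restart k3s && sysctl net.netfilter.nf_conntrack_max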
  • rapid-scientist-25800
    03/22/2023, 9:41 AM
    Hello! I am starting to look into setting up Rancher Management in HA mode. What is the best way to set up the management cluster if starting from scratch? RKE2, as it's the newest, or something else?
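    (For the HA question above, the commonly documented starting point is a three-node RKE2 cluster with Rancher installed on it via Helm. A rough sketch; the hostname is a placeholder, and cert-manager or your own TLS setup is also needed per the install docs:)

    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
    kubectl create namespace cattle-system
    helm install rancher rancher-latest/rancher \
      --namespace cattle-system \
      --set hostname=rancher.example.com \
      --set replicas=3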
  • microscopic-holiday-21640
    03/22/2023, 11:29 AM
    I've just installed Rancher Desktop on my PC and I use WSL with Linux. I get this alert because I already have some cluster config in my .kube/config. How can I solve it?
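    (If the alert above is about an existing ~/.kube/config, one low-risk approach is to back the file up and then select contexts explicitly; Rancher Desktop normally adds its cluster as a context named rancher-desktop rather than replacing the file. A sketch:)

    # Keep a copy of the existing config before anything modifies it
    cp ~/.kube/config ~/.kube/config.bak
    # List all contexts, then switch between them as needed
    kubectl config get-contexts
    kubectl config use-context rancher-desktop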
  • billowy-apple-60989
    03/22/2023, 12:31 PM
    Am I missing something, or why is the release-v2.7 branch of the Rancher Charts https://github.com/rancher/charts/tree/release-v2.7/charts still using extremely old upstream charts? Just looking at the rancher-monitoring chart, it still uses 101.0.0+up19.0.3 as its latest version, and 19.0.3 is about 18 months old by now: https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack/19.0.3
  • swift-hair-47673
    03/22/2023, 2:16 PM
    Hi all. We are running Rancher 2.6.9 and upgraded one of the clusters managed by Rancher from 1.23.15 to 1.24.10. We saw a lot of things going wrong: nodes had trouble communicating with each other, and some storage mounts (Ceph block storage) didn't want to unmount. We basically had to restart all our nodes one by one to recover everything. Has anyone else seen such behaviour? We do know that cri-dockerd replaced the dockershim part; could this be a root cause of it?
  • straight-morning-82320
    03/22/2023, 2:39 PM
    Hi all, I'm looking for a role template definition that allows a restricted user to import manifests via the Rancher UI (see attached image). Does anybody have any idea which resource I need to include in the Role Template definition, please? FYI, below is my Role Template definition:
    administrative: false
    apiVersion: management.cattle.io/v3
    builtin: false
    clusterCreatorDefault: false
    context: project
    description: "Default tenant role attached to a project"
    displayName: project-default-role
    external: false
    hidden: false
    kind: RoleTemplate
    locked: false
    metadata:
      finalizers:
        - controller.cattle.io/mgmt-auth-roletemplate-lifecycle
      labels:
        cattle.io/creator: norman
      name: project-default-role
    projectCreatorDefault: false
    roleTemplateNames: []
    rules:
      - apiGroups:
          - ''
        nonResourceURLs: []
        resourceNames: []
        resources:
          - namespaces
        verbs:
          - get
      - apiGroups:
          - apps
        nonResourceURLs: []
        resourceNames: []
        resources:
          - deployments
        verbs:
          - create
          - delete
          - list
          - update
      - apiGroups:
          - ''
        nonResourceURLs: []
        resourceNames: []
        resources:
          - pods
        verbs:
          - create
          - delete
          - get
          - list
          - patch
          - update
          - watch
      - apiGroups:
          - ''
        nonResourceURLs: []
        resourceNames: []
        resources:
          - services
        verbs:
          - list
          - create
          - delete
          - update
      - apiGroups:
          - events.k8s.io
        nonResourceURLs: []
        resourceNames: []
        resources:
          - events
        verbs:
          - list
      ########################
      # Add here an apiGroup that allows the user to import a manifest
      ......
    Thanks in advance! 🙂
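    (On the question above: as far as I can tell, "Import YAML" in the UI is not gated by a single dedicated resource; Rancher applies the manifest with the user's own permissions, so the role needs create/update verbs on every kind the manifest may contain. A hedged example of one additional rule; configmaps and secrets are just placeholders for whatever kinds your tenants actually import:)

      - apiGroups:
          - ''
        nonResourceURLs: []
        resourceNames: []
        resources:
          - configmaps
          - secrets
        verbs:
          - create
          - update
          - patch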
  • polite-piano-74233
    03/22/2023, 7:51 PM
    Does anyone have a tl;dr: if I have multiple clusters on the same host machine (separate VMs), do I need to explicitly set the subnet blocks, or will the two clusters work fine on the defaults?
  • polite-piano-74233
    03/22/2023, 8:03 PM
    I assume that because they are separate VMs it would be fine, given how flannel etc. set up VXLAN.
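    (On the defaults question above: since each VM has its own kernel and network namespace, overlapping pod/service CIDRs in separate clusters generally don't collide; they only matter if the clusters must route to each other. If you do want to set the blocks explicitly, and assuming k3s, the server flags look like this; RKE2 has equivalent config keys. The ranges are examples:)

    # Pick non-overlapping ranges per cluster
    k3s server \
      --cluster-cidr 10.52.0.0/16 \
      --service-cidr 10.53.0.0/16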
  • stale-spring-20280
    03/22/2023, 9:20 PM
    Hello all, I have an IAM account in Account-A, and an EKS cluster plus a few Kubernetes clusters on EC2 in Account-B. We would like to keep the IAM account in Account-A. Is there a way for Rancher to create and/or manage the clusters through an IAM assume role?
  • few-carpenter-10741
    03/22/2023, 11:13 PM
    Hello everyone, we lost our Rancher server due to a problem with our DC. Fortunately, the clusters were in a different DC. For some reason the backups are not working... If I create a brand-new Rancher server and import the clusters, would it work? Thanks ahead!
  • acoustic-whale-32362
    03/23/2023, 12:25 AM
    Hello, I don’t have NFS in the storage class provisioner list.
  • acoustic-whale-32362
    03/23/2023, 12:25 AM
    image.png
  • acoustic-whale-32362
    03/23/2023, 12:26 AM
    Rancher v2.7, installed as a Docker container.
  • acoustic-whale-32362
    03/23/2023, 12:27 AM
    image.png
  • acoustic-whale-32362
    03/23/2023, 12:27 AM
    Shouldn't it be enabled by default?
  • flaky-winter-94949
    03/23/2023, 1:35 AM
    NFS support for container mounting is built into k8s now; no need for an external provisioner.
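    (To illustrate the reply above: a pod can mount an NFS export directly with the in-tree nfs volume type, with no provisioner involved; dynamic provisioning through a StorageClass is a separate matter and still needs an external provisioner such as nfs-subdir-external-provisioner. A sketch with placeholder server and path:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-example
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          nfs:
            server: nfs.example.com   # placeholder
            path: /exports/data       # placeholder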
  • strong-france-26978
    03/23/2023, 5:04 AM
    Hi all; I am trying to add a new worker node to my existing Rancher cluster but am getting the error below, and I really don't have any clue. Connectivity is OK: openssl s_client -connect rancher.lab:443 shows no issue with SSL connectivity.
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="[Prober] (kubelet) running probe"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="[Prober] (kubelet) retrieving existing probe status from map if existing"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="Probe timeout duration: 5 seconds"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="Probe output was Get \"http://127.0.0.1:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="Setting success threshold to 1"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="Setting failure threshold to 2"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="Probe failed"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="[Prober] (kubelet) writing probe status to map"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="[K8s] Enqueueing after 5.000000 seconds"
    Mar 23 04:39:57 pol1.lab rancher-system-agent[11037]: time="2023-03-23T04:39:57Z" level=debug msg="[K8s] secret data/string-data did not change, not updating secret"
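    (Reading the log above: "dial tcp 127.0.0.1:10248: connect: connection refused" means the kubelet's local healthz endpoint is not up, so the system agent is only reporting a symptom. Assuming this is an RKE2-provisioned node with default paths, some places to look:)

    # Is a kubelet process running at all?
    ps aux | grep kubelet
    # RKE2 writes per-component logs here
    tail -n 100 /var/lib/rancher/rke2/agent/logs/kubelet.log
    # containerd log, in case the runtime or image pulls are the problem
    tail -n 100 /var/lib/rancher/rke2/agent/containerd/containerd.log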
  • wide-kitchen-20738
    03/23/2023, 5:16 AM
    Hi all, we are trying to install Rancher 2.7 on a k3s cluster (v1.23.17+k3s1) on a single-node EC2 instance (t3.medium). We selected external TLS and one replica. After the installation is done, we get "404 page not found" when we try to access Rancher. I see some errors in the Rancher logs:
    2023/03/23 03:36:04 [INFO] Starting rke.cattle.io/v1, Kind=RKECluster controller
    2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=Cluster: failed to wait for caches to sync
    2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck: failed to wait for caches to sync
    2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=MachineSet: failed to wait for caches to sync
    E0323 03:36:04.152482      33 gvks.go:69] failed to sync schemas: failed to sync cache for rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig
    2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=Amazonec2Config
    2023/03/23 03:36:04 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineDeployment controller
    2023/03/23 03:36:04 [INFO] [CleanupOrphanBindingsDone] orphan bindings cleanup has already run, skipping
    2023/03/23 03:36:04 [INFO] checking configmap cattle-system/admincreated to determine if orphan bindings cleanup needs to run
    2023/03/23 03:36:04 [INFO] duplicate bindings cleanup has already run, skipping
    2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=AzureConfig
    2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] cleaning up orphaned catalog bindings
    2023/03/23 03:36:04 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=Amazonec2Config controller
    2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] Processing 2 rolebindings
    2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] Deleting orphaned role global-catalog
    2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig
    2023/03/23 03:36:04 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=AzureConfig controller
    2023/03/23 03:36:04 [WARNING] [clean-catalog-orphan-bindings] Error when deleting role global-catalog, roles.rbac.authorization.k8s.io "global-catalog" not found
    2023/03/23 03:36:04 [WARNING] [CleanupOrphanCatalogBindingsDone] error during orphan binding cleanup: roles.rbac.authorization.k8s.io "global-catalog" not found
    2023/03/23 03:36:04 [ERROR] failed to cleanup orphan catalog bindings
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=HarvesterConfig
    2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig controller
    2023/03/23 03:36:05 [INFO] driverMetadata: refreshing data from upstream https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json
    2023/03/23 03:36:05 [INFO] Retrieve data.json from local path /var/lib/rancher-data/driver-metadata/data.json
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=LinodeConfig
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=VmwarevsphereConfig
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=VmwarevsphereMachine
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=AzureMachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=HarvesterMachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=Amazonec2Machine
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=DigitaloceanMachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=Amazonec2MachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=LinodeMachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=LinodeMachine
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=AzureMachine
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=DigitaloceanMachine
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=VmwarevsphereMachineTemplate
    2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=HarvesterMachine
    2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=Cluster
    2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck
    2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineSet
    2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=HarvesterConfig controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=LinodeConfig controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=VmwarevsphereConfig controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=VmwarevsphereMachine controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=AzureMachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=HarvesterMachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=Amazonec2Machine controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=DigitaloceanMachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=Amazonec2MachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=LinodeMachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=LinodeMachine controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=AzureMachine controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=DigitaloceanMachine controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=VmwarevsphereMachineTemplate controller
    2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=HarvesterMachine controller
    2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=Cluster controller
    2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck controller
    2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineSet controller
    2023/03/23 03:36:08 [INFO] Loaded configuration from https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json in [0x6d10098]
    2023/03/23 03:36:08 [INFO] Loaded configuration from https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json in [0x6d10098]
    2023/03/23 03:36:08 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:42731
    2023/03/23 03:36:08 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:41457
    2023/03/23 03:36:08 [INFO] kontainerdriver azurekubernetesservice listening on address 127.0.0.1:45945
    2023/03/23 03:36:08 [INFO] kontainerdriver amazonelasticcontainerservice stopped
    2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
    2023/03/23 03:36:08 [INFO] kontainerdriver azurekubernetesservice stopped
    2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver azurekubernetesservice updating
    2023/03/23 03:36:08 [INFO] kontainerdriver googlekubernetesengine stopped
    2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
    I'm new to Rancher and have tried almost every solution that I found on Google.
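    (Two hedged observations on the 404 above: a t3.medium with 2 vCPU / 4 GB is below the documented minimum for a Rancher server node, which by itself can leave pods unready, and a 404 from the ingress usually means the rancher pod or ingress is not wired up yet. Generic checks, assuming the chart was installed into cattle-system:)

    # Is the rancher pod actually Ready?
    kubectl -n cattle-system get pods
    # Events and logs usually name the blocking condition
    kubectl -n cattle-system describe deploy rancher
    kubectl -n cattle-system logs -l app=rancher --tail=100
    # With external TLS, the ingress must exist and point at the rancher service
    kubectl -n cattle-system get ingress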
  • polite-piano-74233
    03/23/2023, 5:19 AM
    y'all need to learn to put the log paste in a thread under the question 🐿
  • dazzling-cpu-57338
    03/23/2023, 9:59 AM
    Hello all, hope you are doing well. I'm not sure I'm communicating through the right channel; I just joined the workspace. I'm trying to monitor a list of static targets (URLs) through cattle-monitoring-system, and I can't find any information in the documentation on blackbox-exporter; unlike node-exporter, it doesn't have a subchart already installed. Could you please advise on the best practice for monitoring static targets through cattle-monitoring-system?
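    (One hedged route for the static-target question above: rancher-monitoring is a repackaged kube-prometheus-stack, so you can deploy prometheus-blackbox-exporter yourself, e.g. from the prometheus-community Helm repo, and point a prometheus-operator Probe resource at the URLs. A sketch; the prober URL assumes a blackbox-exporter service in cattle-monitoring-system, and the Probe may need labels matching the probeSelector configured by the chart:)

    apiVersion: monitoring.coreos.com/v1
    kind: Probe
    metadata:
      name: static-urls
      namespace: cattle-monitoring-system
    spec:
      jobName: static-urls
      prober:
        url: blackbox-exporter.cattle-monitoring-system.svc:9115   # assumed service name/port
      module: http_2xx
      targets:
        staticConfig:
          static:
            - https://example.com/healthz   # placeholder targets
            - https://example.org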
  • most-kite-870
    03/23/2023, 10:11 AM
    I am using Rancher v2.7.1 in a Docker container and imported an existing Kubernetes v1.24.8 cluster (remote location). The setup works, and I have a question about Apps -> Repositories (git): the git repository must be accessible from the Kubernetes cluster's location, but not from the location of the Rancher Docker container. Am I right to assume this?
  • great-florist-72127
    03/23/2023, 10:20 AM
    Hi All, I have a question about Legacy Features. I need to disable this feature flag, but I don't see any documentation about what this enables or how I can audit to make sure I don't break an environment I have inherited. Outside of looking at the legacy features section in the UI, for each cluster, is there anything else this feature flag does?
  • busy-flag-55906
    03/23/2023, 10:46 AM
    Hi, we have a few clusters within Rancher, and 2 of them are in the status "Updating": 2 of 3 master nodes remain in the state "Waiting for probes: kube-controller-manager, kube-scheduler", and I have no idea where to look since all services are up and running in those clusters. I rechecked the scheduler and controller-manager for errors, but there is nothing. Please help.
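    (For the "Waiting for probes" state above, it can help to run the same health checks by hand on the stuck master nodes. A hedged sketch; the ports below are the upstream defaults for recent Kubernetes, while older releases served HTTP on 10252/10251. A frequently suggested remedy when these endpoints answer but Rancher still waits is restarting the kubelet container on the affected nodes, e.g. after a certificate rotation:)

    # kube-controller-manager health endpoint
    curl -sk https://localhost:10257/healthz
    # kube-scheduler health endpoint
    curl -sk https://localhost:10259/healthz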
  • adorable-nail-78747
    03/23/2023, 11:58 AM
    Has anyone deployed a Kubernetes cluster with multiple zones using RKE 1.3.x?
  • future-magician-11278
    03/23/2023, 2:12 PM
    Does anyone know if Azure DevOps can connect to RKE2 as a repo?
  • most-sunset-36476
    03/23/2023, 2:43 PM
    Hi all, we have a Rancher cluster (AKS) as the management cluster with a public LoadBalancer, and we would like to expose a service internally to the downstream Rancher-launched K8s clusters (AKS). Is there a way to achieve that without having to create an extra internal ingress controller? Thanks!
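    (One hedged option for the question above, since everything is on AKS: expose the service through an internal Azure load balancer with the standard annotation, and peer the VNets of the downstream clusters so they can reach it. Names and ports are placeholders:)

    apiVersion: v1
    kind: Service
    metadata:
      name: shared-service
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: shared-service
      ports:
        - port: 443
          targetPort: 8443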