kubernetes
  • b

    bitter-bear-79635

    01/23/2023, 5:35 PM
    Hello, I installed a cluster using Rancher, then I installed MetalLB from the UI, and then I also applied these CRDs:
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      creationTimestamp: null
      name: my-ip-space
      namespace: metallb-system
    spec:
      addresses:
      - 10.8.4.0/24
    status: {}
    ---
    apiVersion: metallb.io/v1beta1
    kind: BGPAdvertisement
    metadata:
      creationTimestamp: null
      name: bgpadvertisement1
      namespace: metallb-system
    spec:
      aggregationLength: 32
      communities:
      - 64512:1234
      ipAddressPools:
      - my-ip-space
      localPref: 100
    status: {}
    I didn't see an option to enable this setting:
    ipvs:
      strictARP: true
    I don't know if it's this parameter that makes it not work. MetalLB provisions the load balancer IPs fine. A Fortigate firewall is placed in front of the load balancer IPs (MetalLB). Thank you for your help.
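    For reference, strictARP is a kube-proxy setting and only matters when kube-proxy runs in IPVS mode (MetalLB requires it for L2 mode; plain BGP mode generally does not). RKE1 has no UI toggle for it, but a rough equivalent can be set in cluster.yml — a minimal sketch, assuming the standard kube-proxy proxy-mode and ipvs-strict-arp flags:
    services:
      kubeproxy:
        extra_args:
          proxy-mode: ipvs          # only needed if you actually want IPVS
          ipvs-strict-arp: "true"   # kube-proxy --ipvs-strict-arp flag
    A subsequent rke up would apply the change.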
  • b

    bitter-bear-79635

    01/23/2023, 5:42 PM
    This is the RKE1
  • w

    wooden-addition-88668

    01/24/2023, 2:33 PM
    Hi all. I have an installation of Rancher with a couple of managed clusters (1.24.4, RKE2). I get an error with the default service account of a namespace: "Message":"clusters.management.cattle.io \"c-m-7l9g7r75\" is forbidden: User \"system:serviceaccount:[namespace]:default\" cannot get resource \"clusters\" in API group \"management.cattle.io\" at the cluster scope","Cause":null,"FieldName":""} (post selfsubjectaccessreviews.authorization.k8s.io)
    ✅ 1
  • w

    wooden-addition-88668

    01/24/2023, 2:34 PM
    I created both a Role and a RoleBinding without success. Do I have to configure the permission in the Rancher console?
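    Worth noting: the error above says "at the cluster scope", so a namespaced Role/RoleBinding cannot grant it; it would take a ClusterRole plus ClusterRoleBinding. A minimal sketch (the resource names are hypothetical and the namespace placeholder must be replaced):
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: read-mgmt-clusters            # hypothetical name
    rules:
    - apiGroups: ["management.cattle.io"]
      resources: ["clusters"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: read-mgmt-clusters-default    # hypothetical name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: read-mgmt-clusters
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: my-namespace             # replace with the actual namespace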
  • l

    loud-daybreak-83328

    01/26/2023, 6:36 PM
    Hi. Has anyone successfully gotten RKE to use OIDC (Keycloak) as an authentication provider? This is separate from the Rancher front-end, just the cluster itself. I have this config set in the kube-api extra_args section:
    oidc-client-id: myclient.example.org
    oidc-groups-claim: groups
    oidc-issuer-url: https://keycloak.example.org/realms/test
    oidc-username-claim: preferred_username
    When I get a token and try to use it to authenticate (I just did kubectl --token=XXXXXXXXXX get nodes), I get the message error: You must be logged in to the server (the server has asked for the client to provide credentials), and the kube-api server just logs this:
    time="2023-01-26T18:31:29Z" level=info msg="Processing v1Authenticate request..."
    time="2023-01-26T18:31:29Z" level=error msg="found 1 parts of token"
    has anyone done this?
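    For context, in RKE these flags live under services.kube-api.extra_args in cluster.yml — a minimal sketch using the standard kube-apiserver OIDC flags and the example hostnames from the question:
    services:
      kube-api:
        extra_args:
          oidc-issuer-url: https://keycloak.example.org/realms/test
          oidc-client-id: myclient.example.org
          oidc-username-claim: preferred_username
          oidc-groups-claim: groups
    The "found 1 parts of token" log line also hints that the server received something that is not a three-part JWT; one possibility (an assumption, not confirmed here) is that kubectl was pointed at the Rancher proxy endpoint rather than directly at the downstream kube-apiserver.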
  • f

    famous-lizard-52395

    02/02/2023, 5:02 PM
    Hey gang! 🤟 Komodor's latest open-source project, Helm-Dashboard, is generally available with the release of v1.0.0. Coincidentally, at the same time the project crossed 3K stars on GitHub (and hundreds of daily active users), only three months since it was released! Some of the cool new features you can expect to see in the new version:
    • Auto-update repositories when installed into a cluster
    • The ability to reconfigure charts without access to their source
    • Specifying multiple working namespaces
    • Self-sufficient binary, no helm/kubectl requirement
    • Documented REST API
    As always, we welcome everyone to provide feedback and suggestions on the project's roadmap on our social channels, GitHub, or the Slack Kommunity. We've even created a user survey form to make it easier on you 🙂
  • b

    best-fountain-73060

    02/06/2023, 8:57 AM
    Does anyone know if I can use Traefik 2.x as my ingress controller to access the Rancher UI? If yes, how should I configure it?
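    One possible approach (a sketch, not an official recipe), assuming Traefik 2.x with its CRDs installed and Rancher running in cattle-system: route a websecure entrypoint to the rancher service. The hostname and TLS settings below are placeholders; Rancher's UI also relies on WebSockets, which Traefik proxies by default:
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: rancher                            # hypothetical name
      namespace: cattle-system
    spec:
      entryPoints:
      - websecure
      routes:
      - match: Host(`rancher.example.org`)     # placeholder hostname
        kind: Rule
        services:
        - name: rancher
          port: 80
      tls: {}                                  # terminate TLS with Traefik's configured certificate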
  • e

    eager-london-83975

    02/06/2023, 2:27 PM
    Hi y'all, I am having a very hard time debugging an issue with rke-cli. I am trying to add more nodes, following the exact same steps as the first time I added them, and many times since. Except something is wrong now, because when I run the command rke up --update-only it does not add the new nodes.
  • e

    eager-london-83975

    02/06/2023, 2:27 PM
    It actually complains "Node not found" for the new nodes I have attempted to add.
  • e

    eager-london-83975

    02/06/2023, 2:28 PM
    There are no other error logs
  • e

    eager-london-83975

    02/06/2023, 2:28 PM
    The cluster is functioning okay, AFAIK, since everything else is running (this is also in production).
  • e

    eager-london-83975

    02/06/2023, 2:29 PM
    I managed to see the node in the node list for 1-2 seconds, and then it went away. The new node's kubelet also complains that the node is not found when kubelet starts.
  • e

    eager-london-83975

    02/06/2023, 2:29 PM
    the k8s version is 1.24.4
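    One thing that may be worth checking, based on the RKE documentation: rke up --update-only only adds or removes worker nodes, so new controlplane or etcd nodes would be skipped without an explicit error. A full reconcile is the plain form:
    rke up --config cluster.yml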
  • q

    quaint-candle-18606

    02/06/2023, 7:46 PM
    :partyparrot: NeuVector 5.1.1 has been released :partyparrot: Posted in #neuvector-security
    👍 1
  • a

    able-analyst-76573

    02/10/2023, 10:18 PM
    Has anyone installed Rancher server on a Terraform-managed cluster and had it import the cluster? My concern is that Terraform might recreate the cluster, or possibly remove the Rancher agent when doing upgrades, but I'm honestly not sure. Just wondering if anyone has ever done this.
  • l

    little-gpu-19383

    02/13/2023, 10:11 AM
    Hi all, I have a running cluster that is on version 1.17.4. How am I able to update that cluster? For other clusters I am able to select a newer version.
  • l

    little-gpu-19383

    02/13/2023, 10:12 AM
    I am using rancher 2.5
  • l

    limited-eye-27484

    02/13/2023, 11:16 PM
    Hi all, we started to have pods being unable to schedule in our Rancher cluster sometime last night. Looking into the control plane nodes, the Docker kube-apiserver container is showing a lot of errors like this:
    E0213 22:18:56.420705       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    Where should I even be looking in Rancher to start troubleshooting this problem?
  • l

    little-ambulance-5584

    02/16/2023, 2:43 AM
    Figured I'd forward this because the storage chat is kind of dead; this one is wracking my brain 😄 Any ideas are appreciated.
  • b

    blue-farmer-46993

    02/20/2023, 9:27 AM
    Hi, I found this GitHub issue that has been closed: https://github.com/k3s-io/k3s/issues/6611. According to it, k3s is compatible with RHEL 9, and when I tried to set up k3s on RHEL 9.1 it worked. Could you please let me know when (and why not yet) the support matrix will include RHEL 9, so users like us can upgrade from RHEL 8 to RHEL 9 with k3s?
  • l

    little-horse-77834

    02/23/2023, 4:07 PM
    Are there any tools that can generate diagrams in Graphviz or Mermaid format for a namespace? Ideally I could just do something like
    kubectl -n myns get all -o yaml > myns.yaml
    and then
    generator myns.yaml
  • l

    lemon-application-97336

    02/23/2023, 4:28 PM
    Hi, I'm trying to install Rancher 2.7 on an existing Kubernetes cluster (1.24.10) with an external load balancer and SSL termination. I run the installation via Helm:
    helm upgrade --install rancher rancher-stable/rancher --namespace cattle-system --set hostname="myhost.mydomain" --set tls=external
  • l

    lemon-application-97336

    02/23/2023, 4:36 PM
    Sorry, here's my complete question: Hi, I'm trying to install Rancher 2.7 on an existing Kubernetes cluster (1.24.10) with an external load balancer and SSL termination. I run the installation via Helm:
    helm upgrade --install rancher rancher-stable/rancher --namespace cattle-system --set hostname="myhost.mydomain" --set tls=external
    Installation was OK, but the Ingress reports 'nginx-ingress-controller Scheduled for sync'. In the Rancher log I see the following errors:
    [ERROR] Failed to connect to peer wss://10.45.3.4/v3/connect [local ID=10.45.4.5]: dial tcp 10.45.3.4:443: i/o timeout
    I'm confused; I would have expected the internal connections to go to port 80, which is open. Can anybody give me a hint as to what could be wrong? Thanks
  • l

    lemon-noon-36352

    03/03/2023, 10:10 AM
    hi - We have an application running on Kubernetes (EKS), and we use a rolling update strategy for our deployment. Our application does data migration from on-prem to cloud, and this sometimes takes a day or two depending on the data size. The issue for us is that when we do a deployment, the pod that's performing the migration gets terminated, as expected. But we want to avoid this, as the migration then has to be started all over again by the customer. Does anyone know of a way to exclude the pod that is performing the migration from the upgrade until the process is complete?
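    A common pattern for this (a general Kubernetes approach, not EKS-specific) is to move the long-running migration out of the Deployment and into a Job, so a rollout of the Deployment never touches it. A minimal sketch; the image and command are hypothetical placeholders:
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: data-migration             # hypothetical name
    spec:
      backoffLimit: 0                  # don't automatically retry a half-finished migration
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: migrate
            image: myapp:latest        # hypothetical image
            command: ["/app/migrate"]  # hypothetical entrypoint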
  • b

    big-tiger-67977

    03/04/2023, 2:48 PM
    Hi folks, I'm looking for help re-creating the secret for an imported cluster in Rancher.
  • b

    big-jordan-45387

    03/07/2023, 8:52 AM
    hi, I have an RKE2 cluster managed by Rancher and the nodes are down. How can I bring the RKE2 cluster up again? I mean, rejoining the member nodes to Rancher.
  • a

    abundant-gpu-72225

    03/10/2023, 8:13 PM
    What would the consequences be if I were to move one of my two master nodes to a different network (it would be assigned a new static IP on the new network)?
  • s

    sparse-artist-18151

    03/13/2023, 12:48 PM
    Deploying RKE2 with MetalLB. Issue: the IP gets assigned to the LoadBalancer, but no ARP entries show up under the interfaces, and ARP ping and curl to the deployed LoadBalancer IP + port fail. IP pool and L2Advertisement config:
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: core-net-192.168.94.140-159
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.94.140-192.168.94.159
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: metallb-pool
      namespace: metallb-system
    spec:
      ipAddressPools:
      - core-net-192.168.94.140-159
    How can we enable kube-proxy IPVS on the management cluster? (At the moment I only have one cluster, with worker nodes added to the management cluster.)
    kubeproxy:
      extra_args:
        ipvs-scheduler: lc
        proxy-mode: ipvs
    Do I need to deploy a separate cluster with worker nodes for this? Thanks a lot for your input; if I'm on the wrong channel for these questions, please let me know, I apologize in advance. Posted in #rke2
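    Side note on the kube-proxy snippet above: that syntax is RKE1 cluster.yml. On RKE2 the equivalent goes into /etc/rancher/rke2/config.yaml on each node via kube-proxy-arg — a minimal sketch, assuming standard kube-proxy flags (MetalLB's L2 mode additionally requires strict ARP when kube-proxy uses IPVS):
    # /etc/rancher/rke2/config.yaml on every server and agent node
    kube-proxy-arg:
    - proxy-mode=ipvs
    - ipvs-scheduler=lc
    - ipvs-strict-arp=true
    The change takes effect after restarting the rke2-server / rke2-agent service on each node.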
  • w

    white-garden-41931

    03/14/2023, 12:31 AM
    :q
    😄 1
  • q

    quiet-park-6213

    03/21/2023, 8:14 AM
    Does Rancher provide resource management on Kubernetes? For example, I have 3 worker nodes, and one node is using 100% of its memory while the other nodes have a lot of free memory.
  • c

    creamy-pencil-82913

    03/21/2023, 8:33 AM
    https://github.com/kubernetes-sigs/descheduler
  • q

    quiet-park-6213

    03/21/2023, 10:02 AM
    Thanks @creamy-pencil-82913. So Kubernetes doesn't manage this by itself? Do we always need an additional tool for this?
  • c

    creamy-pencil-82913

    03/21/2023, 5:59 PM
    Correct. Kubernetes has no guarantees around keeping workloads balanced across nodes.
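    For completeness, the descheduler linked above is the usual tool for this: it evicts pods from overutilized nodes so the scheduler can place them elsewhere. A minimal policy sketch, assuming the v1alpha1 policy format and the LowNodeUtilization strategy (the percentage thresholds are placeholder values):
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:         # nodes below these are considered underutilized
              "memory": 20
            targetThresholds:   # nodes above these are candidates for eviction
              "memory": 70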