#rke2

little-ram-17683

02/25/2023, 4:32 AM
Hi! That's my case: https://github.com/rancher/rke2/issues/3710 I would like to know where exactly I should add this:
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true
in cluster.yaml. I mean, in which section of the YAML?
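For context, the advice in the linked issue refers to the RKE2 node-level config file (by default /etc/rancher/rke2/config.yaml), not to anything edited in the Rancher UI. A minimal sketch of that approach, assuming the default path:
# /etc/rancher/rke2/config.yaml on a server/agent node (standalone RKE2 usage)
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true
RKE2 picks this up when the rke2-server or rke2-agent service is restarted, which is why editing the kube-proxy static pod manifest directly is pointless; those edits get reverted.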

creamy-pencil-82913

02/25/2023, 5:01 AM

little-ram-17683

02/25/2023, 5:17 AM
Yes, and like you said in that GitHub issue:
"Also, you should NOT edit the kube-proxy static pod manifest; any changes you make will be reverted when RKE2 is restarted. You should use the following in your config.yaml instead:"
So even though I created the cluster through the GUI, I have to change this config manually on every node, or what? It's really confusing. The standard behavior for every config is: "You have to make changes in cluster.yaml in the GUI, otherwise they will be overwritten during the next RKE restart." Or should I do it somewhere in the YAML below? For RKE1, for example, the change should be made in cluster.yaml and it's documented. For RKE2 it's a total mess.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: <cluster_name>
  annotations:
    field.cattle.io/creatorId: u-ofzeh6vy3n
#    key: string
  creationTimestamp: '2023-02-09T15:07:17Z'
  finalizers:
    - wrangler.cattle.io/cloud-config-secret-remover
    - wrangler.cattle.io/provisioning-cluster-remove
    - wrangler.cattle.io/rke-cluster-remove
#    - string
  generation: 20
  labels:
    {}
#    key: string
  namespace: fleet-default
  resourceVersion: '65757783'
  uid: cb31e6c1-51b2-4960-ae4d-038172744181
  fields:
    - <cluster_name>
    - 'true'
    - <cluster_name>-kubeconfig
spec:
  defaultPodSecurityPolicyTemplateName: ''
  kubernetesVersion: v1.24.9+rke2r2
  localClusterAuthEndpoint:
    caCerts: ''
    enabled: false
    fqdn: ''
  rkeConfig:
    additionalManifest: |-
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors 
              health  {
                  lameduck 5s
              }
              ready 
              kubernetes   cluster.local  cluster.local in-addr.arpa ip6.arpa {
                  pods insecure
                  fallthrough in-addr.arpa ip6.arpa
                  ttl 30
              }
              prometheus   0.0.0.0:9153
              forward   .  <my_ip>
              cache   30
              loop 
              reload 
              loadbalance
          }
    chartValues:
      rke2-calico:
        installation:
          calicoNetwork:
            bgp: Enabled
            controlPlaneTolerations:
              - effect: NoSchedule
                key: node-role.kubernetes.io/control-plane
                operator: Exists
              - effect: NoExecute
                key: node-role.kubernetes.io/etcd
                operator: Exists
            ipPools:
              - blockSize: 26
                cidr: 10.48.0.0/21
                encapsulation: IPIP
                natOutgoing: Enabled
                nodeSelector: all()
              - blockSize: 122
                cidr: 2001::00/64
                encapsulation: None
                natOutgoing: Enabled
                nodeSelector: all()
    etcd:
      disableSnapshots: false
      snapshotRetention: 5
      snapshotScheduleCron: 0 */5 * * *
    machineGlobalConfig:
      cni: calico
      disable:
        - rke2-ingress-nginx
      disable-kube-proxy: false
      etcd-expose-metrics: false
      profile: null
    machinePools:
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    registries:
      configs:
        {}
      mirrors:
        {}
    upgradeStrategy:
      controlPlaneConcurrency: '1'
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
      workerConcurrency: '1'
      workerDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
  machineSelectorConfig:
    - config: {}
__clone: true
That is my cluster.yaml. When I try to add
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true
directly under rkeConfig:, it disappears, even though according to the docs it should work: https://rancher.com/docs/rancher/v2.6/en/cluster-admin/editing-clusters/rke2-config-reference/
"Edit the RKE options under the rkeConfig directive."
What actually works is adding, in the GUI:
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true
under machineGlobalConfig:. After that, kube-proxy on every node looks like this:
root     3751955 3751907  0 05:42 ?        00:00:00 kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=worker-2 --ipvs-strict-arp=true --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=ipvs
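Putting the working approach together, the relevant slice of the cluster spec ends up looking roughly like this sketch (only the keys discussed in this thread are shown; everything else stays as in the full YAML above):
spec:
  rkeConfig:
    machineGlobalConfig:
      # existing setting from the cluster above
      cni: calico
      # extra kube-proxy flags, passed through to each node's RKE2 config
      kube-proxy-arg:
        - proxy-mode=ipvs
        - ipvs-strict-arp=true
Rancher propagates machineGlobalConfig down into the RKE2 configuration on every node, which is why the flags then appear on the kube-proxy command line without anyone touching the static pod manifest; running ipvsadm -Ln on a node is one way to confirm IPVS is actually in use.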

creamy-pencil-82913

02/25/2023, 7:06 PM
Ah, it would have helped if you'd asked how to do it through the Rancher UI. Neither Rancher nor RKE2 has a cluster.yaml, so I had no idea what you were talking about.

little-ram-17683

02/27/2023, 3:07 AM
Maybe it would, and next time I will ask about the Rancher UI solution specifically. But come on:
1. The Rancher UI is a *G*raphical *U*ser *I*nterface.
2. There is only one GUI 🙂, so when I wrote about a solution via the GUI, it should have been obvious.
3. Your docs use the term cluster.yaml 500 times in relation to the YAML configuration file editable from the Rancher UI.
4. There is only one case where a cluster is a "custom RKE2" cluster in the Rancher UI: when the cluster is created directly from the Rancher UI using RKE2. I wrote that too 🙂; otherwise it would be "imported". That alone shows I created the cluster via the Rancher UI and have to make my changes via the Rancher UI.
Never mind, I have used Rancher for many years and I really like it. But believe me or not, your docs are a total mess.

creamy-pencil-82913

02/27/2023, 3:53 AM
You didn't mention the GUI until after I responded with a link to the RKE2 docs. The RKE2 docs cover the product pretty extensively. Rancher is a separate product and there are definitely some thin points in the docs for provisioning rke2 and k3s clusters, but this isn't the correct channel for that discussion.
I suspect that any docs referencing cluster.yaml are for rke1, since it does have that file.
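For comparison, the RKE1 equivalent the last message alludes to lives in cluster.yml under services, something like this sketch (not quoted from the thread, just RKE1's documented extra_args mechanism):
# RKE1 cluster.yml
services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-strict-arp: "true"
which is probably where the cluster.yaml terminology in the original question comes from.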