# rke2

mysterious-whale-87222

01/02/2023, 8:40 PM
Hi, I am trying to install rke2 on Linode (unmanaged) with Rancher 2.7. As far as I can see, for Linode I still have to install the CCM and set the cloud provider to external. I just can't find good documentation on how to install the CCM, and as soon as I set the cloud provider to external, the node can no longer join. All the instructions I have found refer to rke1, where the config options look a bit different. The existing documentation for rke2 is relatively thin and I don't know where to set which options in the cluster configuration. I tried the following (using additionalManifest):
Copy code
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: test
  annotations:
    {}
  labels:
    {}
  namespace: fleet-default
spec:
  cloudCredentialSecretName: cattle-global-data:cc-ddncq
  defaultPodSecurityPolicyTemplateName: ''
  kubernetesVersion: v1.24.8+rke2r1
  localClusterAuthEndpoint:
    caCerts: ''
    enabled: false
    fqdn: ''
  rkeConfig:
    additionalManifest: |+
      apiVersion: v1
      kind: Secret
      metadata:
        name: ccm-linode
        namespace: kube-system
      type: Opaque
      data:
        apiToken: xxxx
        region: xxxx
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: ccm-linode
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: ccm-linode-clusterrole
      rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "watch", "list", "update", "create"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "watch", "list", "update", "delete", "patch"]
      - apiGroups: [""]
        resources: ["nodes/status"]
        verbs: ["get", "watch", "list", "update", "delete", "patch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["get", "watch", "list", "update", "create", "patch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "watch", "list", "update"]
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["services"]
        verbs: ["get", "watch", "list"]
      - apiGroups: [""]
        resources: ["services/status"]
        verbs: ["get", "watch", "list", "update", "patch"]
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: ccm-linode-clusterrolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: ccm-linode-clusterrole
      subjects:
      - kind: ServiceAccount
        name: ccm-linode
        namespace: kube-system
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: ccm-linode
        labels:
          app: ccm-linode
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            app: ccm-linode
        template:
          metadata:
            labels:
              app: ccm-linode
          spec:
            serviceAccountName: ccm-linode
            nodeSelector:
              # The CCM will only run on a Node labelled as a master, you may want to change this
              node-role.kubernetes.io/master: ""
            tolerations:
              # The CCM can run on Nodes tainted as masters
              - key: "<http://node-role.kubernetes.io/master|node-role.kubernetes.io/master>"
                effect: "NoSchedule"
              # The CCM is a "critical addon"
              - key: "CriticalAddonsOnly"
                operator: "Exists"
              # This taint is set on all Nodes when an external CCM is used
              - key: node.cloudprovider.kubernetes.io/uninitialized
                value: "true"
                effect: NoSchedule
              - key: node.kubernetes.io/not-ready
                operator: Exists
                effect: NoSchedule
              - key: node.kubernetes.io/unreachable
                operator: Exists
                effect: NoSchedule
            hostNetwork: true
            containers:
              - image: linode/linode-cloud-controller-manager:latest
                imagePullPolicy: Always
                name: ccm-linode
                args:
                - --leader-elect-resource-lock=endpoints
                - --v=3
                - --port=0
                - --secure-port=10253
                volumeMounts:
                - mountPath: /etc/kubernetes
                  name: k8s
                env:
                  - name: LINODE_API_TOKEN
                    valueFrom:
                      secretKeyRef:
                        name: ccm-linode
                        key: apiToken
                  - name: LINODE_REGION
                    valueFrom:
                      secretKeyRef:
                        name: ccm-linode
                        key: region
            volumes:
            - name: k8s
              hostPath:
                path: /etc/kubernetes
    chartValues:
      rke2-calico: {}
    etcd:
      disableSnapshots: false
      s3:
      snapshotRetention: 5
      snapshotScheduleCron: 0 */5 * * *
    machineGlobalConfig:
      cni: calico
      disable-kube-proxy: false
      etcd-expose-metrics: false
      profile: null
    machinePools:
      - name: pool1
        etcdRole: true
        controlPlaneRole: true
        workerRole: true
        hostnamePrefix: ''
        quantity: 1
        unhealthyNodeTimeout: 0m
        machineConfigRef:
          kind: LinodeConfig
          name: nc-test-pool1-gzwhp
        labels: {}

    machineSelectorConfig:
      - config:
          cloud-provider-name: external
          protect-kernel-defaults: false
    registries:
      configs:
        {}
      mirrors:
        {}
    upgradeStrategy:
      controlPlaneConcurrency: '1'
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
      workerConcurrency: '1'
      workerDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
  machineSelectorConfig:
    - config: {}
__clone: true
I can't currently determine the exact error because I don't yet have access to the cluster through the Rancher UI.
Found a way to log in to the nodes. The problem is that my manifest wasn't applied. I applied it manually and it worked. How do I provide the CCM manifest through the rke2 cluster config?
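One idea I'm looking at (sketch below, not verified): rke2 ships a bundled helm-controller, so additionalManifest could carry a HelmChart resource instead of the raw CCM manifests. The chart name, repo URL, and value keys here are assumptions based on the Linode CCM project and would need to be checked against the actual chart:
Copy code
rkeConfig:
  additionalManifest: |-
    # A HelmChart resource is picked up by rke2's bundled helm-controller.
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: ccm-linode
      namespace: kube-system
    spec:
      # Repo URL and chart name are assumptions -- verify against the
      # linode-cloud-controller-manager project before using.
      repo: https://linode.github.io/linode-cloud-controller-manager
      chart: ccm-linode
      targetNamespace: kube-system
      valuesContent: |-
        # Value keys assumed from the chart; check its values.yaml.
        apiToken: xxxx
        region: xxxx
The same additionalManifest field should also accept the raw manifests above, as long as the block indentation survives the UI.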

gorgeous-oyster-35026

01/03/2023, 1:58 PM
I am not sure if the plus sign after additionalManifest is the problem. When I paste my CCM YAML into the UI and then check the YAML, it shows a minus (-) sign instead. Maybe it isn't applied because the manifest becomes malformed when there are multiple unexpected spaces.
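For what it's worth, the chomping indicator (|+ vs |-) should only change how trailing newlines are kept, not whether the block parses; what usually breaks an embedded manifest is the indentation. A small sketch of what I mean, using a trimmed-down version of the Secret from above:
Copy code
# |   clip: keep exactly one trailing newline
# |-  strip: drop trailing newlines
# |+  keep: preserve all trailing newlines
# All three embed the lines below literally; only trailing newlines differ.
additionalManifest: |-
  apiVersion: v1
  kind: Secret
  metadata:
    name: ccm-linode
    namespace: kube-system
  type: Opaque
  data:
    apiToken: xxxx   # placeholder; data values must be base64-encoded
    region: xxxx     # placeholder; data values must be base64-encoded
# Every embedded line has to be indented further than "additionalManifest:";
# a less-indented line ends the block, which is where stray spaces cause trouble.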

acceptable-leather-15942

03/01/2023, 10:21 PM
@mysterious-whale-87222 Were you able to find a solution to your problem? I think I'm running into the same issue.