When running Rancher inside Kubernetes, how do you give it SSH access to the nodes it dynamically creates? I am using the vCenter connector, and it correctly creates the VMs but then cannot configure the OS. Here are the Helm values I used to install Rancher 2.11.1:
deploy@ansible:~/git/Project-SignalWave/core-services/Rancher$ helm get values rancher -n cattle-system
USER-SUPPLIED VALUES:
addLocal: "true"
additionalTrustedCAs: false
agentTLSMode: ""
antiAffinity: preferred
bootstrapPassword: admin
certmanager:
  version: ""
debug: false
extraNodeSelectorTerms: {}
extraTolerations: {}
hostname: rancher.home.virtualelephant.com
imagePullSecrets: []
ingress:
  annotations:
    haproxy.org/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  enabled: false
  extraAnnotations: {}
  includeDefaultExtraAnnotations: false
  ingressClassName: ""
  path: /
  pathType: Prefix
  servicePort: 443
  tls:
    source: rancher
livenessProbe:
  failureThreshold: 5
  periodSeconds: 30
  timeoutSeconds: 5
noProxy: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
postDelete:
  enabled: true
  ignoreTimeoutError: false
  image:
    repository: rancher/shell
    tag: v0.4.0
  namespaceList:
  - cattle-fleet-system
  - cattle-system
  - rancher-operator-system
  timeout: 120
priorityClassName: rancher-critical
privateCA: false
rancherImage: rancher/rancher
rancherImagePullPolicy: Always
readinessProbe:
  failureThreshold: 5
  periodSeconds: 30
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 500m
    memory: 1Gi
service:
  annotations: {}
  disableHTTP: false
  type: ClusterIP
sshKey:
  enabled: true
  name: rancher-ssh-key
startupProbe:
  failureThreshold: 12
  periodSeconds: 10
  timeoutSeconds: 5
systemDefaultRegistry: ""
tls: ingress
topologyKey: kubernetes.io/hostname
useBundledSystemChart: false
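For context, the `sshKey` value above references a Secret that I would expect to be shaped roughly like this (the `stringData` key name here is an assumption on my part — check the chart's templates for what it actually mounts, and where):

```yaml
# Hypothetical shape of the rancher-ssh-key Secret referenced by sshKey.name.
# The data key name ("id_rsa") is an assumption -- inspect the Rancher chart
# templates to confirm which key it reads and the mount path inside the pod.
apiVersion: v1
kind: Secret
metadata:
  name: rancher-ssh-key
  namespace: cattle-system
type: Opaque
stringData:
  id_rsa: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...private key material...
    -----END OPENSSH PRIVATE KEY-----
```

A quick way to see whether the chart wired the secret into the pods at all is to dump a Rancher pod spec and grep for the secret name, e.g. `kubectl -n cattle-system get pod -l app=rancher -o yaml | grep -n rancher-ssh-key` (label selector may differ in your install) — if nothing matches, the key never made it into the pod filesystem.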
I created the `rancher-ssh-key` secret in the cattle-system namespace from the private key file. When I create the VMs through Rancher, I specify the following cloud-config. I can log into those nodes from a generic VM using the SSH key in my environment, but the Rancher pods inside the Kubernetes cluster cannot.
#cloud-config
package_update: true
package_upgrade: true
packages:
  - nfs-common
  - cri-tools
  - wget
  - net-tools
  - curl
  - dnsutils
  - traceroute
  - tar
  - sha256sum
  - rsyslog
users:
  - name: rancher
    shell: /bin/bash
    groups: wheel
    sudo: ['ALL=(ALL) NOPASSWD: ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDs81WPnHTiQCDpBQTOSzQp97Wo8ahk084dFk1v8h1xeqjo2ZQyv+FxH2uLDMxh3wS5p+vpTkAsRtQN5CX1FWDaX+a3RZefXzQLLuqZOlZI+PNOAiimq0Q57Htl3iqfaUjhhCo2N3YNBjYl5uVxd+Z6b2Vy/K2PNzANy5GDI15rSlE9vGILl408LsqLO8zQnx9URWEZCfiCMIxxE3Rf/dR8yei7vn6QbZ5BwTTuoyaUWWwI5G+OEb0CM1ka7ucLQWP0EvVRqp083Ncw5xoRCmsDVhx8w+E/xyBKXiY2rxd8ZZgwf4BmPgE0xJ2chH5qziaXcKSYP9dIkNooFqMzfaa3LQsmEMMYy6XMZGrQ3VmHA/9wnlkkCSq2kP97V37IsluYOSTJ5KlR7M19eR06QyWtpuEeTn3vNo5ys93AgJJR3No+sZURumNwhQs5l64XTLS4kdkAYC/W5m5kgRMwIIFGIsHjpy+YBcgHXG+V7SYYWFlYKaahgGNOSjb3JvHaCTJViE4AerL8NeB6qsw/YSxPqBj0bsXuUcFsbe60vh1+vKPvdGLkLbAISZp6c/8J3QoK848iW+GB6h+nB7wmoT7Z0cnhBWwuY9dubloWYGuVohS9IS3rVqV9tjMfgatugDqQ0xWmwwB+el6mDNH05k8q7gzwZhLAEwEVxpTP+zjg8Q== ve-lab-2025-key
no_ssh_fingerprints: true
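One thing worth ruling out before chasing key material: whether the Rancher pods can reach the new nodes on port 22 at all — pod egress rules, a NetworkPolicy, or routing between the management cluster and the vSphere network would all produce exactly this "works from a VM, not from the pods" symptom. A minimal probe, runnable from any pod or debug container that has Python (the node address below is a placeholder):

```python
import socket

def can_reach(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable-network errors.
        return False

# Placeholder address -- substitute the IP of a node Rancher just provisioned.
print(can_reach("10.0.0.50", 22))
```

If this returns False from inside the cluster but the same check succeeds from the generic VM, the problem is the network path from the pods, not the SSH key or the cloud-config.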