# harvester
f
If I try to get a shell in the pod, it gives me the shell in the Harvester node
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '6'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"bind-dns"},"name":"bind-dns","namespace":"bind-dns"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"bind-dns"}},"template":{"metadata":{"labels":{"app":"bind-dns"}},"spec":{"containers":[{"image":"ubuntu/bind9:latest","name":"bind","ports":[{"containerPort":53,"protocol":"UDP"},{"containerPort":53,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"200m","memory":"256Mi"}},"volumeMounts":[{"mountPath":"/etc/bind/named.conf.local","name":"bind-config","subPath":"named.conf.local"},{"mountPath":"/var/lib/bind","name":"bind-records"}]}],"initContainers":[{"command":["sh","-c","for
      file in /config/*; do if [ ! -f \"/var/lib/bind/$(basename $file)\" ];
      then cp $file /var/lib/bind/; fi;
      done"],"image":"busybox","name":"init-bind","volumeMounts":[{"mountPath":"/config","name":"zones-config"},{"mountPath":"/var/lib/bind","name":"bind-records"}]}],"volumes":[{"configMap":{"name":"bind-config"},"name":"bind-config"},{"configMap":{"name":"zones-config"},"name":"zones-config"},{"name":"bind-records","persistentVolumeClaim":{"claimName":"bind-dns-pvc"}}]}}}}
  creationTimestamp: '2024-11-12T19:47:18Z'
  generation: 14
  labels:
    app: bind-dns
  managedFields:
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:labels:
            .: {}
            f:app: {}
        f:spec:
          f:progressDeadlineSeconds: {}
          f:revisionHistoryLimit: {}
          f:selector: {}
          f:strategy:
            f:rollingUpdate:
              .: {}
              f:maxSurge: {}
              f:maxUnavailable: {}
            f:type: {}
          f:template:
            f:metadata:
              f:labels:
                .: {}
                f:app: {}
            f:spec:
              f:containers:
                k:{"name":"bind"}:
                  .: {}
                  f:image: {}
                  f:imagePullPolicy: {}
                  f:name: {}
                  f:ports:
                    .: {}
                    k:{"containerPort":53,"protocol":"TCP"}:
                      .: {}
                      f:containerPort: {}
                      f:protocol: {}
                    k:{"containerPort":53,"protocol":"UDP"}:
                      .: {}
                      f:containerPort: {}
                      f:protocol: {}
                  f:resources:
                    .: {}
                    f:limits:
                      .: {}
                      f:cpu: {}
                      f:memory: {}
                    f:requests:
                      .: {}
                      f:cpu: {}
                      f:memory: {}
                  f:terminationMessagePath: {}
                  f:terminationMessagePolicy: {}
                  f:volumeMounts:
                    .: {}
                    k:{"mountPath":"/etc/bind/named.conf.local"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
                      f:subPath: {}
                    k:{"mountPath":"/var/lib/bind"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
              f:dnsPolicy: {}
              f:initContainers:
                .: {}
                k:{"name":"init-bind"}:
                  .: {}
                  f:command: {}
                  f:image: {}
                  f:imagePullPolicy: {}
                  f:name: {}
                  f:resources: {}
                  f:terminationMessagePath: {}
                  f:terminationMessagePolicy: {}
                  f:volumeMounts:
                    .: {}
                    k:{"mountPath":"/config"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
                    k:{"mountPath":"/var/lib/bind"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
              f:restartPolicy: {}
              f:schedulerName: {}
              f:securityContext: {}
              f:terminationGracePeriodSeconds: {}
              f:volumes:
                .: {}
                k:{"name":"bind-config"}:
                  .: {}
                  f:configMap:
                    .: {}
                    f:defaultMode: {}
                    f:name: {}
                  f:name: {}
                k:{"name":"bind-records"}:
                  .: {}
                  f:name: {}
                  f:persistentVolumeClaim:
                    .: {}
                    f:claimName: {}
                k:{"name":"zones-config"}:
                  .: {}
                  f:configMap:
                    .: {}
                    f:defaultMode: {}
                    f:name: {}
                  f:name: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2024-11-12T19:47:18Z'
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}
          f:template:
            f:metadata:
              f:annotations:
                .: {}
                f:cattle.io/timestamp: {}
              f:namespace: {}
            f:spec:
              f:containers:
                k:{"name":"bind"}:
                  f:ports:
                    k:{"containerPort":53,"protocol":"TCP"}:
                      f:hostPort: {}
                      f:name: {}
                    k:{"containerPort":53,"protocol":"UDP"}:
                      f:hostPort: {}
                      f:name: {}
                  f:volumeMounts:
                    k:{"mountPath":"/etc/bind/named.conf.options"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
                      f:subPath: {}
              f:hostNetwork: {}
              f:volumes:
                k:{"name":"named-conf-options"}:
                  .: {}
                  f:configMap:
                    .: {}
                    f:defaultMode: {}
                    f:name: {}
                  f:name: {}
      manager: rancher
      operation: Update
      time: '2025-01-16T23:08:21Z'
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:deployment.kubernetes.io/revision: {}
        f:status:
          f:availableReplicas: {}
          f:conditions:
            .: {}
            k:{"type":"Available"}:
              .: {}
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Progressing"}:
              .: {}
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:replicas: {}
          f:updatedReplicas: {}
      manager: kube-controller-manager
      operation: Update
      subresource: status
      time: '2025-02-05T00:12:10Z'
  name: bind-dns
  namespace: bind-dns
  resourceVersion: '389882386'
  uid: b0887cab-ee9c-48f4-a78d-b564a91c308b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bind-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: '2025-01-11T22:46:35Z'
      creationTimestamp: null
      labels:
        app: bind-dns
      namespace: bind-dns
    spec:
      containers:
        - image: ubuntu/bind9:latest
          imagePullPolicy: Always
          name: bind
          ports:
            - containerPort: 53
              hostPort: 53
              name: bind-udp-53
              protocol: UDP
            - containerPort: 53
              hostPort: 53
              name: bind-tcp-53
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/bind
              name: bind-records
            - mountPath: /etc/bind/named.conf.local
              name: bind-config
              subPath: named.conf.local
            - mountPath: /etc/bind/named.conf.options
              name: named-conf-options
              subPath: named.conf.options
      dnsPolicy: ClusterFirst
      hostNetwork: true
      initContainers:
        - command:
            - sh
            - '-c'
            - >-
              for file in /config/*; do if [ ! -f "/var/lib/bind/$(basename
              $file)" ]; then cp $file /var/lib/bind/; fi; done
          image: busybox
          imagePullPolicy: Always
          name: init-bind
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /config
              name: zones-config
            - mountPath: /var/lib/bind
              name: bind-records
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: named-conf-options
          name: named-conf-options
        - configMap:
            defaultMode: 420
            name: bind-config
          name: bind-config
        - configMap:
            defaultMode: 420
            name: zones-config
          name: zones-config
        - name: bind-records
          persistentVolumeClaim:
            claimName: bind-dns-pvc
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: '2025-01-16T20:54:55Z'
      lastUpdateTime: '2025-01-16T23:10:11Z'
      message: ReplicaSet "bind-dns-db8957f59" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: 'True'
      type: Progressing
    - lastTransitionTime: '2025-02-05T00:12:10Z'
      lastUpdateTime: '2025-02-05T00:12:10Z'
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: 'True'
      type: Available
  observedGeneration: 14
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
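The `init-bind` initContainer in the spec above seeds `/var/lib/bind` from the zones ConfigMap without overwriting files already on the PVC. A local sketch of the same copy-if-absent logic, using temp directories in place of `/config` and `/var/lib/bind` (the file names are illustrative):

```shell
#!/bin/sh
# Simulate the init-bind container: copy each file from the config dir
# into the data dir only if it does not already exist there.
config_dir=$(mktemp -d)   # stands in for /config (zones-config ConfigMap)
data_dir=$(mktemp -d)     # stands in for /var/lib/bind (PVC)

echo "zone-a" > "$config_dir/db.example.com"   # new zone file
echo "edited" > "$data_dir/db.existing"        # already on the PVC
echo "zone-b" > "$config_dir/db.existing"      # must NOT overwrite the PVC copy

# Same loop as the initContainer command in the Deployment above.
for file in "$config_dir"/*; do
  if [ ! -f "$data_dir/$(basename "$file")" ]; then
    cp "$file" "$data_dir/"
  fi
done

cat "$data_dir/db.example.com"   # copied from config: zone-a
cat "$data_dir/db.existing"      # left untouched: edited
```

This is why redeploying the ConfigMap does not clobber zone files that BIND has already updated on the PVC.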
t
that is how it works. You need to then run a `kubectl exec` to get into a pod.
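For the deployment above, a minimal sketch (the namespace `bind-dns` and container name `bind` come from the spec; `deploy/bind-dns` lets kubectl pick a pod for you):

```shell
# List the bind-dns pods, then open a shell in the bind container.
kubectl -n bind-dns get pods -l app=bind-dns
kubectl -n bind-dns exec -it deploy/bind-dns -c bind -- /bin/bash
```

Note that the spec sets `hostNetwork: true`, so the container shares the node's network namespace and `hostname` inside it reports the node's name; that can make a pod shell look like a Harvester node shell even when you really are inside the container.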
f
It's a shell from the node, man
Usually it gives a shell from the pod
Not the node
t
rancher gives a shell in the pod. Harvester is the node. From that shell you can shell into a pod if you want. You can also download the kubeconfig to connect to the “cluster”.
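Once the kubeconfig is downloaded from the Rancher UI, pointing kubectl at it is all that's needed (the file path is illustrative):

```shell
# Use the downloaded kubeconfig for subsequent kubectl calls.
export KUBECONFIG=~/Downloads/harvester-farm.yaml
kubectl get nodes
kubectl -n bind-dns get pods
```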
f
Bro I have other deployments in Harvester
And it gives me the shell into the pod I select
t
are you using the hidden rancher gui view of harvester? your image was not clear
f
In theory you are not supposed to use Harvester node shells
No
t
how is rancher and harvester setup?
f
I have a 3 node harvester cluster And an other 3 node rke2 where i have rancher
t
did you add harvester as a cluster or as harvester itself?
f
I added it as a hypervisor
t
ok.. let me test. what versions of Rancher and harvester are you running?
f
I think 4.0 and 10.3 but give me 5 min to check
t
the latest is harv - 1.4.1 and rancher 2.10.2 https://dzver.rfed.io/
f
Rancher v2.10.1
Harvester v1.4.0
t
ok cool, my cluster is coming up now.
f
The whole idea was that we wanted to have a DDNS so we can provision other clusters in Harvester using Rancher. We had 4 servers, so we decided to do this:
server 1: 4 VMs, RKE2, and install Rancher
server 2: Harvester node 1
server 3: Harvester node 2
server 4: Harvester node 3
and we are trying to deploy the DDNS and Kea DHCP in the Harvester cluster as a k8s deployment
If you have any other solutions to this, we would appreciate it
t

https://youtu.be/9y37tuMBSUg

a quick vid I put together for ya.
better vid :

https://www.youtube.com/watch?v=GPXkoi6ueVA&t=0s

f
Yes, but if you press create cluster, when the nodes spin up they need DDNS
To be able to talk to each other
Still lets not deviate from the main issue
t
what is the big picture?
what are you trying to do? like connect App A on harvester to appB on rancher?
f
From rancher spin up multiple rke2 clusters
And the clusters are vms in harvester
t
ok the one video shows how to do that.
Oh and that is where the shell issue comes into play?
f
No the shell is when we deploy ddns
Bind9
t
ah. big picture, I would use an external DNS to manage everything.
f
Yeah, but I can't use that
That's why I'm in this loop
And still
The node shell issue concerns me
At first it looked like RCE (remote code execution)
But I wanted to clarify
t
let me rebuild to see if I can recreate the shell issue.
f
Let me know if you need anything from me
t
$1mill! 😄
f
I wish I had them
t
ok, have the env up. Did you hit the `~` key or click the 3 dots and select shell?
f
3dots
t
and just to confirm, you named your cluster harvester?
f
Yes harvester-farm
Let me show u
t
want to jump on zoom?
f
yes
google meet better
t
DM a link
f
Hi @thousands-advantage-10804 I upgraded the servers to the latest version, also fixed some HDD drivers, and I will test this weekend with the freshly installed Harvester
Just to update on the topic