# elemental
w
Hi Sweta! How are you running the upgrade? And where is that log-line from? Is it the elemental-operator pod?
b
Hi Fredrik, I upgraded it by running the commands below in the local cluster kube shell:
```sh
# Install/upgrade the CRDS chart
helm upgrade \
    --install -n cattle-elemental-system --create-namespace elemental-operator-crds \
    oci://registry.suse.com/rancher/elemental-operator-crds-chart

# Install/upgrade the operator chart
helm upgrade \
    --install -n cattle-elemental-system --create-namespace elemental-operator \
    oci://registry.suse.com/rancher/elemental-operator-chart
```
Yes, the logs were from the elemental-operator pod.
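(Editorial aside, not from the original message: the chart and app versions the upgrade actually resolved to can be confirmed with standard Helm commands, which helps rule out a partial or mismatched upgrade.)
```sh
# List releases in the operator namespace with their chart/app versions
helm list -n cattle-elemental-system

# Show the values applied to the operator release
helm get values -n cattle-elemental-system elemental-operator
```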
Hi, I suspect there is some policy in cattle-elemental-system because of which the service account token is not getting mounted. I created an nginx pod as well, and it is also not mounting the service account token volume. Could some policy be getting triggered in this namespace that removes the token mount, and could that be the issue?
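(A quick way to check this, added for illustration; these are standard kubectl queries, and the namespace and service account name are the operator's from this thread.)
```sh
# Is token automounting disabled on the operator's service account?
kubectl get serviceaccount elemental-operator -n cattle-elemental-system \
  -o jsonpath='{.automountServiceAccountToken}'

# Which volumes did the operator pod actually get? A missing "kube-api-access-*"
# projected volume means no token was mounted.
kubectl get pod -n cattle-elemental-system -l app=elemental-operator \
  -o jsonpath='{.items[0].spec.volumes[*].name}'
```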
w
Hi! I'm not aware of such a policy... what output do you get from
k get pod -n cattle-elemental-system -l app=elemental-operator -o yaml
or describing the pod?
b
k get pod -n cattle-elemental-system -l app=elemental-operator -o yaml
```yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      cattle.io/timestamp: "2023-11-02T09:11:11Z"
      kubernetes.io/psp: eks.privileged
    creationTimestamp: "2023-11-02T09:43:25Z"
    generateName: elemental-operator-7f86c74759-
    labels:
      app: elemental-operator
      pod-template-hash: 7f86c74759
    name: elemental-operator-7f86c74759-phwf2
    namespace: cattle-elemental-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: elemental-operator-7f86c74759
      uid: 1ad2c7c1-1e03-4685-9e44-930450ef6081
    resourceVersion: "194747274"
    uid: 250d1c2f-0551-40a2-914b-50a710971b70
  spec:
    containers:
    - args:
      - operator
      - --namespace
      - cattle-elemental-system
      - --operator-image
      - registry.suse.com/rancher/elemental-operator:1.3.4
      - --seedimage-image
      - registry.suse.com/rancher/seedimage-builder:1.3.4
      - --seedimage-image-pullpolicy
      - IfNotPresent
      env:
      - name: NO_PROXY
        value: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
      image: registry.suse.com/rancher/elemental-operator:1.3.4
      imagePullPolicy: IfNotPresent
      name: elemental-operator
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: harbor-dev-2
    nodeName: ip-10-60-3-180.eu-central-1.compute.internal
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: elemental-operator
    serviceAccountName: elemental-operator
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoSchedule
      key: cattle.io/os
      operator: Equal
      value: linux
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-11-02T09:43:25Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-11-02T09:43:25Z"
      message: 'containers with unready status: [elemental-operator]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-11-02T09:43:25Z"
      message: 'containers with unready status: [elemental-operator]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-11-02T09:43:25Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://2a171fa9941a31e3efc656565afee8d6b4fdb80b1a7e6f992adbd09ec3265c7a
      image: registry.suse.com/rancher/elemental-operator:1.3.4
      imageID: docker-pullable://registry.suse.com/rancher/elemental-operator@sha256:203941cb6837c7256bc44c9c61fb46f6ed586987a0040d3f384873ed3923deb8
      lastState:
        terminated:
          containerID: docker://2a171fa9941a31e3efc656565afee8d6b4fdb80b1a7e6f992adbd09ec3265c7a
          exitCode: 1
          finishedAt: "2023-11-02T09:49:12Z"
          reason: Error
          startedAt: "2023-11-02T09:49:11Z"
      name: elemental-operator
      ready: false
      restartCount: 6
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=elemental-operator pod=elemental-operator-7f86c74759-phwf2_cattle-elemental-system(250d1c2f-0551-40a2-914b-50a710971b70)
          reason: CrashLoopBackOff
    hostIP: 10.60.3.180
    phase: Running
    podIP: 10.60.3.244
    podIPs:
    - ip: 10.60.3.244
    qosClass: BestEffort
    startTime: "2023-11-02T09:43:25Z"
kind: List
metadata:
  resourceVersion: ""
```
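(Editorial aside: the pod spec above lists no volumes or volumeMounts at all, so the projected service account token volume really is missing, and the container is stuck in CrashLoopBackOff. The error behind the crash is usually visible in the previous container's logs, using the pod name from the output above.)
```sh
# Logs from the last crashed instance of the operator container
kubectl logs -n cattle-elemental-system elemental-operator-7f86c74759-phwf2 --previous
```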
w
Is it running in AWS? Maybe they have some custom way of mounting the kubeconfig...
b
I found the issue: something/someone disabled automounting on the service account.
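(For completeness, a minimal sketch of the fix, assuming it was the standard automountServiceAccountToken field on the elemental-operator service account that had been set to false.)
```sh
# Re-enable token automounting on the service account
kubectl patch serviceaccount elemental-operator -n cattle-elemental-system \
  -p '{"automountServiceAccountToken": true}'

# Recreate the operator pod so it gets the projected token volume again
kubectl rollout restart deployment/elemental-operator -n cattle-elemental-system
```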
Thanks for your help @witty-table-40840
w
Awesome, good find! 👍