adamant-kite-43734
10/31/2023, 7:08 AM

prehistoric-balloon-31801
10/31/2023, 7:22 AM
/spec/customizeComponents/patches/
BTW, can you share the reason you want to evaluate a different image? Thanks.

salmon-city-57654
10/31/2023, 8:27 AM

salmon-city-57654
10/31/2023, 8:27 AM

powerful-soccer-11224
10/31/2023, 10:39 AMcustomizeComponents:
patches:
- patch: '{"spec":{"template":{"spec":{"containers":[{"name":"virt-controller", "args":["--launcher-image","<http://test-registry.example.com/myrepo/virt-launcher@sha256:9144b0697edd1e26a23282007a017af7564b9813eb32fd5356afaff244c470c5|test-registry.example.com/myrepo/virt-launcher@sha256:9144b0697edd1e26a23282007a017af7564b9813eb32fd5356afaff244c470c5>","--port","8443", "-v", "2"]}]}}}}'
resourceName: virt-controller
resourceType: Deployment
type: strategic
- patch: '{"spec":{"template":{"spec":{"initContainers":[{"name":"virt-launcher","image":"<http://test-registry.example.com/myrepo/virt-launcher@sha256:9144b0697edd1e26a23282007a017af7564b9813eb32fd5356afaff244c470c5|test-registry.example.com/myrepo/virt-launcher@sha256:9144b0697edd1e26a23282007a017af7564b9813eb32fd5356afaff244c470c5>","imagePullPolicy":"Always"}]}}}}'
resourceName: virt-handler
resourceType: Daemonset
type: strategic
I tried removing /spec/customizeComponents/patches/ and it did kind of work, but I wanted to be sure whether there's any other process to be followed here.
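A minimal sketch of where such patches live, assuming a Harvester-style setup where the KubeVirt CR is named kubevirt in the harvester-system namespace (verify the name and namespace on your cluster first):

# confirm the KubeVirt CR name and namespace
kubectl get kubevirt -A
# edit the CR and add or adjust entries under spec.customizeComponents.patches
kubectl edit kubevirt kubevirt -n harvester-system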
thousands-action-31843
10/31/2023, 3:18 PM

salmon-city-57654
10/31/2023, 5:18 PM
virt-launcher image. These two parts should be patched. Did you try after you patched the virt-launcher image?
Also, if you want to roll back, just remove your added parts from /spec/customizeComponents/patches/ on the KubeVirt CR.
Did it work now?
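A minimal sketch of that rollback, assuming the whole patches list was added by you and the KubeVirt CR is harvester-system/kubevirt:

# drop the custom patches so the operator reconciles the components back to their defaults
kubectl patch kubevirt kubevirt -n harvester-system --type=json \
  -p '[{"op":"remove","path":"/spec/customizeComponents/patches"}]'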
powerful-soccer-11224
10/31/2023, 5:20 PM

salmon-city-57654
10/31/2023, 5:21 PM

salmon-city-57654
10/31/2023, 5:22 PM

powerful-soccer-11224
10/31/2023, 5:25 PM
/spec/customizeComponents/patches/ to use the image registry.suse.com/suse/sles/15.4/virt-launcher:0.54.0-150400.3.7.1, but the controller pods are in CrashLoopBackOff state, and on inspecting the container logs I found this: exec /usr/bin/virt-controller: exec format error
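exec format error normally means the binary inside the image was built for a different CPU architecture than the node it runs on. A quick check, assuming skopeo is available somewhere with access to the registry:

# architecture reported by each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
# architecture the registry image was built for
skopeo inspect docker://registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.7.1 | grep -i architecture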
salmon-city-57654
10/31/2023, 5:27 PM

powerful-soccer-11224
10/31/2023, 5:29 PM
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       38m                   default-scheduler  Successfully assigned harvester-system/virt-controller-9c6f7799-lwhsj to iaas-node-002
  Normal   AddedInterface  38m                   multus             Add eth0 [10.52.2.151/32] from k8s-pod-network
  Normal   Pulled          37m (x4 over 38m)     kubelet            Container image "registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.7.1" already present on machine
  Normal   Created         37m (x4 over 38m)     kubelet            Created container virt-controller
  Normal   Started         37m (x4 over 38m)     kubelet            Started container virt-controller
  Warning  BackOff         3m7s (x179 over 38m)  kubelet            Back-off restarting failed container
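The "already present on machine" event means the node reused a locally cached copy of that tag rather than pulling it again. One way to see exactly which image the failing pod is running, using the pod name from the events above:

kubectl get pod virt-controller-9c6f7799-lwhsj -n harvester-system \
  -o jsonpath='{.status.containerStatuses[0].image}{"\n"}{.status.containerStatuses[0].imageID}{"\n"}'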
salmon-city-57654
10/31/2023, 5:34 PM
kubectl get deployment virt-controller -n harvester-system -o yaml | yq -e .spec.template.spec.containers
salmon-city-57654
10/31/2023, 5:37 PM

powerful-soccer-11224
10/31/2023, 5:48 PM
- args:
  - --launcher-image
  - registry.suse.com/suse/sles/15.4/virt-launcher:0.54.0-150400.3.7.1
  - --port
  - "8443"
  - -v
  - "2"
  command:
  - virt-controller
  image: registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.7.1
  imagePullPolicy: IfNotPresent
  livenessProbe:
    failureThreshold: 8
    httpGet:
      path: /healthz
      port: 8443
      scheme: HTTPS
    initialDelaySeconds: 15
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 10
  name: virt-controller
  ports:
  - containerPort: 8443
    name: metrics
    protocol: TCP
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /leader
      port: 8443
      scheme: HTTPS
    initialDelaySeconds: 15
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 10
  resources:
    requests:
      cpu: 10m
      memory: 150Mi
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /etc/virt-controller/certificates
    name: kubevirt-controller-certs
    readOnly: true
  - mountPath: /profile-data
    name: profile-data
And if I use my custom image, it works very well. The only time it causes the issue is while rolling back to the same image registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.7.1; one of the pods is in CrashLoopBackOff state.

salmon-city-57654
11/01/2023, 2:53 AM

salmon-city-57654
11/01/2023, 6:53 AM
0.54.0-150400.3.7.1 -> 0.54.0-150400.3.19.1
- patch: '{"spec":{"template":{"spec":{"containers":[{"name":"virt-controller","args":["--launcher-image","registry.suse.com/suse/sles/15.4/virt-launcher:0.54.0-150400.3.19.1","--port","8443","-v","2"],"image":"registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.19.1","imagePullPolicy":"Always"}]}}}}'
  resourceName: virt-controller
  resourceType: Deployment
  type: strategic
3. check that the virt-controller deployment contains the above patch (see the sample check below)
4. check that the virt-controller pod is running normally
5. remove the above patch
6. check that the virt-controller deployment is back to the default config
7. check that the virt-controller pod is running normally
I did not encounter any errors.
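A sample check for steps 3 and 6, assuming the same namespace used earlier in this thread (harvester-system):

# inspect the image and args the deployment currently carries
kubectl get deployment virt-controller -n harvester-system -o yaml | yq -e '.spec.template.spec.containers[0].image'
kubectl get deployment virt-controller -n harvester-system -o yaml | yq -e '.spec.template.spec.containers[0].args'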
salmon-city-57654
11/01/2023, 3:23 PM
virt-controller you ran into an error?

salmon-city-57654
11/01/2023, 3:23 PM

salmon-city-57654
11/01/2023, 3:39 PM

powerful-soccer-11224
11/02/2023, 7:24 AM
/usr/bin/virt-controller: exec format error. I am still not able to find out why it's working on the remaining 2 nodes and only failing on this node, even though the configuration is the same for all nodes.
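Since only one node misbehaves with an image that is "already present on machine", one thing worth comparing is the cached copy of that tag on the failing node versus a healthy node. A rough sketch, assuming the nodes run containerd and crictl is available on them:

# run on both a healthy node and the failing node
uname -m                                          # confirm the CPU architectures match
crictl images --digests | grep virt-controller    # compare the cached digests for the same tag
# if the failing node holds a different or corrupt copy, remove it so the kubelet pulls it fresh
crictl rmi registry.suse.com/suse/sles/15.4/virt-controller:0.54.0-150400.3.7.1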
salmon-city-57654
11/02/2023, 9:09 AM

salmon-city-57654
11/02/2023, 9:10 AM
Additional context section

powerful-soccer-11224
11/02/2023, 12:50 PM