# k3s
h
@jolly-waitress-71272 "waiting for first consumer to be created before binding," typically occurs when the pvc is in a pending state because it requires a pod to consume it before it can be bound to a pv. In a single node cluster like yours, where there are no other pods scheduled to use the pvc, this can cause the pvc to remain in a pending state indefinitely.
I would say you can manually create a pv that corresponds to the pvc you created.
Something like
Copy code
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-iso-win2k19
spec:
  storageClassName: local-path
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /the/folder/path
Replace /the/folder/path with the actual path on the host where you want to store the iso.
Then modify the pvc to reference the new pv: kubectl edit pvc iso-win2k19
Under the spec, you can add a section like: volumeName: pv-iso-win2k19
Then verify the status again: kubectl get pvc iso-win2k19
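(For reference, a sketch of what the edited PVC could end up looking like, assuming the local-path storage class and the 10G size requested by the upload command; adjust it to match the PVC virtctl actually created:)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iso-win2k19
  namespace: default
spec:
  storageClassName: local-path   # assumed; must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G               # matches the --pvc-size used in the upload command
  volumeName: pv-iso-win2k19     # pins the claim to the manually created PV
```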
Also, I don't know who told you that you cannot mount volumes on a single-node cluster. That's simply not true; you can mount volumes on a single-node cluster just like you would on a multi-node cluster. The ability to mount volumes is a fundamental feature of k8s, regardless of the cluster size.
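(For illustration, a minimal pod that mounts such a PVC might look like the sketch below; the pod name and image are placeholders. On a WaitForFirstConsumer storage class, scheduling a consuming pod like this is also what triggers the binding.)
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iso-consumer            # hypothetical name
spec:
  containers:
    - name: shell
      image: busybox            # any small image works for a mount test
      command: ["sleep", "3600"]
      volumeMounts:
        - name: iso
          mountPath: /data      # the PVC's contents appear here
  volumes:
    - name: iso
      persistentVolumeClaim:
        claimName: iso-win2k19
```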
j
The upload command times out when the --wait-secs TTL expires:
Copy code
# virtctl image-upload --image-path windows_server_2019.iso      --pvc-name iso-win2k19 --access-mode ReadWriteOnce      --pvc-size 10G --uploadproxy-url 10.43.65.246:443      --insecure --wait-secs=240
Flag --pvc-name has been deprecated, specify the name as the second argument instead.
Flag --pvc-size has been deprecated, use --size instead.
Using existing PVC default/iso-win2k19
Waiting for PVC iso-win2k19 upload pod to be ready...
timed out waiting for the condition
I appreciate the advice to create a pv, but I think something in this CDI is supposed to do that maybe?
h
Have you checked the CDI logs?
kubectl logs -n cdi cdi_pod
j
Copy code
# kubectl logs -n cdi cdi_pod
Error from server (NotFound): pods "cdi_pod" not found
h
Replace that value with the name of your cdi pod
That was just an example.
j
I see.
There's nothing that stands out as the expected pod to me, since they're all old:
Copy code
# kubectl get pods -n cdi
NAME                               READY   STATUS    RESTARTS   AGE
cdi-operator-7c58b4f8c8-ngdhw      1/1     Running   0          93m
cdi-apiserver-57d47bc55d-bndpt     1/1     Running   0          92m
cdi-uploadproxy-5b5cc54d48-vzrkf   1/1     Running   0          92m
cdi-deployment-69d44b67b5-6pfjg    1/1     Running   0          92m
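(Since it is not obvious which pod holds the relevant logs, one way is to skim all of them; the small loop below is just a convenience, not anything CDI requires:)
```sh
# print the last few log lines from every pod in the cdi namespace
for p in $(kubectl get pods -n cdi -o name); do
  echo "== $p"
  kubectl logs -n cdi "$p" --tail=20
done
```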
h
Also when executing
Copy code
virtctl image-upload windows_server_2019.iso iso-win2k19 --access-mode ReadWriteOnce --size 10G --uploadproxy-url 10.43.65.246:443 --insecure --wait-secs=240
Make sure to replace the windows_server_2019.iso with the correct path to your iso.
I want the logs, not the pod's name.
j
Yeah, that's my iso name.
This is suspicious:
Copy code
# kubectl logs -n cdi cdi-uploadproxy-5b5cc54d48-vzrkf
I0627 18:34:05.700348       1 uploadproxy.go:64] Note: increase the -v level in the api deployment for more detailed logging, eg. -v=2 or -v=3
W0627 18:34:05.700561       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0627 18:34:06.801816       1 certwatcher.go:125] Updated current TLS certificate
I0627 18:34:06.802161       1 certwatcher.go:81] Starting certificate watcher
So is this:
Copy code
# kubectl logs -n cdi cdi-deployment-69d44b67b5-6pfjg
{"level":"debug","ts":1687892069.9512436,"logger":"controller.clone-controller","msg":"PVC annotation not found, skipping pvc","PVC":"default/iso-win2k19","annotation":"<http://k8s.io/CloneRequest|k8s.io/CloneRequest>"}
{"level":"debug","ts":1687892069.951248,"logger":"controller.clone-controller","msg":"PVC annotation not found, skipping pvc","PVC":"default/iso-win2k19","annotation":"<http://k8s.io/CloneRequest|k8s.io/CloneRequest>"}
{"level":"debug","ts":1687892069.9512522,"logger":"controller.clone-controller","msg":"PVC not bound, skipping pvc","PVC":"default/iso-win2k19","Phase":"Pending"}
{"level":"debug","ts":1687892069.9512594,"logger":"controller.clone-controller","msg":"Should not reconcile this PVC","PVC":"default/iso-win2k19","checkPVC(AnnCloneRequest)":false,"NOT has annotation(AnnCloneOf)":true,"isBound":false,"has finalizer?":false}
{"level":"debug","ts":1687892069.9512548,"logger":"controller.upload-controller","msg":"PVC not bound, skipping pvc","PVC":"default/iso-win2k19","Phase":"Pending"}
{"level":"debug","ts":1687892069.9512773,"logger":"controller.upload-controller","msg":"PVC not bound, skipping pvc","PVC":"default/iso-win2k19","Phase":"Pending"}
{"level":"debug","ts":1687892069.9512832,"logger":"controller.upload-controller","msg":"not doing anything with PVC","PVC":"default/iso-win2k19","isUpload":true,"isCloneTarget":false,"isBound":false,"podSucceededFromPVC":false,"deletionTimeStamp set?":false}
{"level":"debug","ts":1687892069.9511857,"logger":"controller.import-controller","msg":"PVC not bound, skipping pvc","PVC":"default/iso-win2k19","Phase":"Pending"}
h
Ok let me take a look
hmm: it seems that the cdi controllers are not processing this pvc default/iso-win2k19
The pvc does not have the k8s.io/CloneRequest annotation, so the clone controller skips processing it.
This indicates that the cdi controllers are skipping the pvc because it is still Pending (not bound), which could be why it never gets processed or populated.
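(To confirm this from the PVC side, describing it and listing its events should show the same "waiting for first consumer" condition; a sketch, assuming the PVC lives in the default namespace:)
```sh
kubectl describe pvc iso-win2k19 -n default
kubectl get events -n default --field-selector involvedObject.name=iso-win2k19
```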
j
Someone suggested I install local-path-provisioner, but I was under the impression k3s came with that already.
h
What I can say is: start by checking the pvc definition. Verify that it is correct and properly configured. Also ensure the pvc is created in the correct namespace (default in this case) and that it specifies the appropriate storage class, access mode, etc.
Also, I suggest verifying that the SC referenced by the pvc is available. You can run kubectl get sc to list the available storage classes and check that the one specified in the pvc exists and is available.
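(A quick way to pull just those fields from the PVC, assuming it is in the default namespace:)
```sh
# show the storage class, access modes, and phase of the PVC in one shot
kubectl get pvc iso-win2k19 -n default \
  -o jsonpath='{.spec.storageClassName}{" "}{.spec.accessModes}{" "}{.status.phase}{"\n"}'
```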
j
I'm reading more about this: https://kubevirt.io/2019/How-To-Import-VM-into-Kubevirt.html You may be right that I am responsible for creating the PV
h
Yeah
j
I was right about the local-path provisioner being available:
Copy code
# kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  25d
h
Then you should be able to create the pv for the pvc.
j
I'm trying to put that together now. What do you think is a reasonable place to put this disk?
Copy code
path: /the/folder/path
h
There are several things to take into account when choosing the right path, like permissions, usage, etc. I cannot speak for your own setup. I would say use a location you know is available, that you have permission to write to, and that has enough space.
I mean, idk, perhaps you can create a dir under /var or /opt; it's up to you though.
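(For example, assuming a hypothetical directory under /var; any path the node can read and write with enough free space would do:)
```sh
mkdir -p /var/lib/iso-images   # hypothetical path, pick your own
chmod 755 /var/lib/iso-images
```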
Any luck?
j
No. I struggled with creating a correct PV and ultimately bailed, because I separately found out that trashing the existing pvc and rerunning the upload command with --force-bind got the image uploaded successfully.
However now I'm that dog in a labcoat. I have no idea what I'm doing.
h
So you were able to get it to work? What do you need help with?
j
My end goal is an Ubuntu 22.04 instance with GPU passthrough working.
I don't know how to get there from here.
h
That's different. I'm talking about the original issue, the one you posted in the thread and the one we've been addressing. Your last request has nothing to do with the PVC, though.
j
Right, yeah. That's fixed.
--force-bind was the answer.
h
Umm ok, so that forced the binding. Makes sense, since --force implies everything. But my question is: is that the right way?
j
One of the devs, or at least someone who felt confident enough to ask someone else to help me, said:
Copy code
But most local storage is WFFC, and if you have multiple nodes you want the scheduler to decide which node the volume ends up on, not the scheduler for CDI. since the requirements for the VM are wildly different than CDI

With the --force-bind you say I don't care, just put it somewhere

which if you have a single node works just fine since there's only one node

try with the force-bind, that essentially adds an annotation to the pvc that tells cdi just populate it. But if it uses an existing PVC it won't mess with the annotations.
Which ultimately worked:
Copy code
# virtctl image-upload --image-path windows_server_2019.iso      --pvc-name iso-win2k19 --access-mode ReadWriteOnce      --pvc-size 10G --uploadproxy-url https://10.43.65.246      --insecure --wait-secs=240 --force-bind
Flag --pvc-name has been deprecated, specify the name as the second argument instead.
Flag --pvc-size has been deprecated, use --size instead.
Using existing PVC default/iso-win2k19
Uploading data to https://10.43.65.246

 5.26 GiB / 5.26 GiB [================================================================================================================================================================================] 100.00% 19s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading windows_server_2019.iso completed successfully
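(To see what ended up on the PVC after the forced bind, per the explanation quoted above, the annotations can be inspected directly; a sketch, assuming the default namespace:)
```sh
# list the annotations CDI left on the PVC after the forced bind
kubectl get pvc iso-win2k19 -n default -o jsonpath='{.metadata.annotations}{"\n"}'
```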
h
Yeah, so --force-bind did the job of everything I've mentioned 😆
j
I tried pretty hard to make a pv, but there are too many keys with unknown values. The one you freehanded didn't work.
h
The one I showed you was a regular way to create a PV that works most of the time for me. You cannot take examples for granted; you need to make the proper changes. I was just mimicking your env.
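(For completeness, a slightly fuller variant of the earlier PV sketch, with the reclaim policy and hostPath type spelled out; the path is hypothetical, and the storage class and size are assumed to match the PVC:)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-iso-win2k19
spec:
  storageClassName: local-path          # must match the PVC's storage class
  capacity:
    storage: 10Gi                       # must be at least the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/iso-images           # hypothetical path on the node
    type: DirectoryOrCreate             # create the directory if it does not exist
```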
j
Yeah, I was trying to write that just now. :D