# harvester
p
I think we have the same problem. See my thread above. There is this "reserved memory" setting, but the docs aren't clear on what it actually is or what the default value is. I agree with you that this shouldn't happen; it seems like a bug to me. In my opinion, the default config should have enough buffer to avoid running into this.
f
"reserved memory" seems to control the resource request for the vm, which doesn't affect the limit. And, you get an error if you try to set it higher than the guest resource number.
As said, this might be a KubeVirt thing, in that KubeVirt should set the QEMU guest memory lower (with sufficient margin) than the cgroup limit for the virt-launcher pod. But it doesn't seem to do that.
It might be as simple as Harvester needing to configure the overcommit settings for KubeVirt, or at least making them configurable in the interface: https://github.com/kubevirt/kubevirt/issues/10057 https://github.com/kubevirt/kubevirt/issues/4560
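For reference, I think the cluster-wide knob lives in the KubeVirt CR under `developerConfiguration`. Not sure whether Harvester exposes it anywhere, and the 150 below is a made-up example value, so treat this as a sketch:

```yaml
# Sketch only: assumes developerConfiguration.memoryOvercommit is the right knob.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
spec:
  configuration:
    developerConfiguration:
      # Percentage; values above 100 give the guest more memory than the
      # virt-launcher pod requests, i.e. memory overcommit.
      memoryOvercommit: 150
```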
p
From my understanding it's additional memory added for virtualization overhead. See https://docs.harvesterhci.io/v1.2/rancher/resource-quota/#overhead-memory-of-virtual-machine
Because requesting 256MB on an 8GB+ machine like in the example wouldn't make sense. If you look in the YAML of the VM you can actually see that the request is lower than the limit. In my case the request was something like 10GB and the limit 16GB.
f
Right.. AFAIC it's the limit that's the problem and not the request, because my host has plenty of free memory, and the pod cgroup should be allowed to burst way past its resource request. I'm not entirely sure I understand the full stack, but looking at https://github.com/harvester/harvester/blob/master/pkg/builder/vm.go#L144C21-L150: wouldn't it help to just set `domain.memory.guest` lower than `resources.limits.memory`?
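Roughly what I mean, as a hand-written sketch (numbers made up, not what Harvester generates today):

```yaml
# Idea: set the guest memory noticeably lower than the pod's cgroup limit,
# so virt-launcher/QEMU overhead has room before the kernel OOM-kills it.
domain:
  memory:
    guest: 15Gi        # what the guest actually sees
  resources:
    limits:
      memory: 16Gi     # cgroup limit for the virt-launcher pod, ~1Gi of headroom
```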
p
No idea, I'm still trying to wrap my head around how it works. The documentation says to set the reserved memory setting but doesn't elaborate further 🤷‍♂️
From my understanding this would increase one of those limits
f
Basically that maps to plain Kubernetes resource requests and limits. And because the limit is breached (in my case), the cgroup is killed by my kernel.
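That shows up on the virt-launcher pod status roughly like this (trimmed to the standard Kubernetes fields; a sketch, not a verbatim dump from my cluster):

```yaml
status:
  containerStatuses:
    - name: compute              # main container in the virt-launcher pod
      lastState:
        terminated:
          exitCode: 137          # killed by SIGKILL from the OOM killer
          reason: OOMKilled
```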
So here's how my VM resource looks in the cluster:
```yaml
memory:
  guest: 24476Mi
resources:
  limits:
    cpu: "6"
    memory: 24Gi
  requests:
    cpu: 375m
    memory: 16Gi
```
24Gi is 24576Mi, so the guest memory (24476Mi) is only 100Mi below the pod limit. That's awfully close. Also: is there even a reason why Harvester sets "limits"? libvirt should limit the memory just fine. But IF a pod limit is set, there should be bigger headroom, I feel.
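Something like this is what I'd expect Harvester to generate instead (a sketch; the ~1Gi of headroom is my own guess at a sane margin, not a documented value):

```yaml
memory:
  guest: 23Gi          # ~1Gi below the pod limit instead of ~100Mi
resources:
  limits:
    cpu: "6"
    memory: 24Gi       # cgroup limit stays the same
  requests:
    cpu: 375m
    memory: 16Gi
```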