Hi there!
I cordoned and drained a worker node, upgraded its resources, and restarted it, but k3s still doesn't schedule pods on the upgraded node: whenever I restart a workload, its pods are still assigned to one of the other, smaller nodes.
To force-schedule pods onto the new node I have to cordon all the other nodes and delete some pods; only then, after some time, are they rescheduled to the under-utilized node. But every time they are restarted, they end up back on the smaller nodes.
So now I have one node sitting at 91% RAM usage and another at 30%. Apparently resource requests are not affecting the scheduling at all.
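For reference, this is roughly how I've been comparing what the scheduler sees (requests) against each node. The node name and the "Allocated resources" sample below are made up; on a real cluster the input would come from `kubectl describe node <node-name>`:

```shell
# Hypothetical sample of the "Allocated resources" section that
# `kubectl describe node big-node` prints (made-up numbers):
sample_allocated='  Resource           Requests          Limits
  --------           --------          ------
  cpu                1500m (37%)       2 (50%)
  memory             2Gi (30%)         4Gi (60%)'

# Extract the memory *request* percentage, which is what the
# scheduler scores on -- not the live RAM usage shown by `top`.
mem_request_pct=$(printf '%s\n' "$sample_allocated" \
  | awk '$1 == "memory" { gsub(/[()%]/, "", $3); print $3 }')
echo "memory requested: ${mem_request_pct}%"
```

The point being: the 91% figure I see is actual usage, while this number is what scheduling decisions are based on.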
Any idea what could be happening?