# general
p
Sounds like your nodes were pending a reboot following kernel updates. Similar issues happen when Docker tries to start subsystems that depend on kernel modules which were not loaded before the update and are no longer available afterwards
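For reference, a minimal sketch of how you might check whether a node is in that "pending reboot" state (assuming Debian/Ubuntu nodes; paths and package layout differ on other distros):

```bash
# Kernel currently running on the node
uname -r

# Newest kernel image installed on disk (Debian/Ubuntu layout assumed)
ls -1 /boot/vmlinuz-* | sort -V | tail -n 1

# On Debian/Ubuntu this flag file exists when a reboot is required
[ -f /var/run/reboot-required ] && cat /var/run/reboot-required
```

If the running kernel is older than the newest installed one, the node has been updated but not rebooted.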
m
That's awesome to know, though I don't remember doing any updates recently. Is there a way for me to troubleshoot this? As in, get some additional logging for this kind of thing in the future; I want to avoid a long investigation and then just rebooting because I couldn't find anything :)
p
Yeah, starting an nginx container and getting an "operation not supported" error is the perfect way of knowing this is happening
m
Can you elaborate on that please? Do I just start an unconfigured nginx pod, and read its logs?
p
Not even a pod, just do "docker run nginx" on a node
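For anyone following along, a minimal sketch of that check (assuming Docker CLI access on the node; the exact error text may vary):

```bash
# Run a throwaway nginx container directly on the node, outside Kubernetes
docker run --rm nginx
# On a healthy node this starts nginx in the foreground; on a node in the
# broken state it typically fails with an "operation not supported" style error
```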
m
I see. And if I already have it running as a pod, it wouldn't show the problem because the pod was initialized before the issue arose? What should I do to get logging that indicates this issue, maybe a cron job creating and destroying an nginx container every hour or so and reading the logs from that?
p
Nah, just apply some proper patch management with needrestart to avoid pending updates
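A minimal sketch of what that could look like on a Debian/Ubuntu node (package name and flags are for that distro family; check the needrestart docs for yours):

```bash
# Install needrestart (Debian/Ubuntu)
sudo apt-get update && sudo apt-get install -y needrestart

# Run it after patching; it reports services that need a restart and
# whether the running kernel is older than the installed one
# (-b: batch mode with machine-readable output)
sudo needrestart -b
```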
b
@powerful-librarian-10572 can you please advise where the logs are that would show us the root cause of this? Is there an easy way to get the logs, some location where they are stored? In the Rancher UI it only shows "Waiting for provider" and the cluster status is OK, but nothing else
p
I don't think you can find this in Rancher
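In case it helps, a sketch of where you could look on the node itself rather than in the Rancher UI (assuming systemd-based nodes running Docker):

```bash
# Docker daemon logs on the node (systemd assumed)
sudo journalctl -u docker --since "1 hour ago"

# List all containers, including failed ones, then read a specific one's logs
docker ps -a
docker logs <container-id>
```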