# general
Oops, not used to Slack. This is what I meant. I wasn't very clear, sorry.
If you have a kubeconfig from the Rancher server to talk to this cluster, does it have additional direct-connect contexts?
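Something like the below would show it; if Rancher generated direct-connect entries, they usually appear as extra <cluster>-<node> contexts next to the main one (the path and names here are just placeholders):
```
# List the contexts in the kubeconfig downloaded from the Rancher UI.
# Path and cluster name are placeholders for this example.
kubectl config get-contexts --kubeconfig ~/Downloads/mycluster.yaml
```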
I do not, unfortunately. I am sure that I selected all 3 roles for each of them. This is node number 3 in that cluster. The reason I tried to add it back this way was that I found the container logs were eating the drive, which was causing disk pressure on all nodes in that cluster. I went to apply limits to Docker logging, read that the setting only applies to newly created containers, pulled a silly and removed the containers, for some reason thinking they would redeploy. That was obviously a goof and not the right way to handle it, I know better haha. Anyway, if it helps to note, the alert I am seeing for the cluster that I posted does not change IPs; it's always the IP of the suspect node. Here is a list of containers on one of the remaining nodes. It does show the controller, apiserver, and etcd. Seems the apiserver and etcd are not enjoying themselves, based on the uptime. All nodes were alerting on disk pressure, which was traced to /var/lib/docker/overlay2 on the node I tried to fix and broke worse. The remaining node in this screenshot isn't past 46% though. /var has 30G for itself.
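For reference, the log limits I was going for were roughly the below; the sizes are just example values, and this replaces /etc/docker/daemon.json, so it would need to be merged with anything already in there. Since the options only apply to newly created containers, the existing ones keep growing until they are recreated (and without live-restore, restarting the daemon bounces the running containers).
```
# Rough sketch of json-file log rotation for Docker; sizes are examples only.
# NOTE: this overwrites /etc/docker/daemon.json, merge with existing settings
# first. Only containers created after the restart pick up the new limits.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker

# Quick check of what the container logs are actually using on disk:
sudo du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h | tail
```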