#harvester

rough-teacher-88372

03/22/2023, 4:27 PM
We have a Kubernetes cluster running on Harvester, created via an external Rancher. We had a brief outage earlier today, and Harvester appears to have rebooted all of the relevant VMs. However, since the VMs are all assigned IPs via DHCP and we had a fairly short lease time configured, they all changed IP addresses. This was reported correctly to Harvester, which displays the new addresses, but the Kubernetes cluster in Rancher still seems to expect the old addresses for the nodes. Is there anything we can do to flush those?
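
For reference, a minimal sketch of how one might check which addresses the downstream cluster itself still holds, assuming direct kubectl access to that cluster and RKE2-provisioned nodes (the node name placeholder and the service names below are assumptions, not something confirmed in the thread):

```
# Compare the InternalIP each node object advertises (what Rancher sees)
# against the address the VM actually has now.
kubectl get nodes -o wide                      # INTERNAL-IP column

# Full address list for a single node (name is a placeholder):
kubectl get node <node-name> -o jsonpath='{.status.addresses}'

# The kubelet republishes its addresses in the node status, so restarting
# the node service on an affected VM is one way to force a refresh
# (service names assume an RKE2-provisioned cluster):
sudo systemctl restart rke2-server   # on control-plane nodes
sudo systemctl restart rke2-agent    # on worker nodes
```

This only covers what the cluster's own node objects report; whether Rancher's machine records pick up the change after the node status refreshes is a separate question.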