refined-analyst-8898
04/23/2023, 1:56 PM
The ifcfg-mgmt-br interface obtains a static lease via DHCP. I moved the node to a new network and need to reconfigure the ingress VIP.
rancher@mira:~> kubectl get svc -n kube-system ingress-expose -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-03-20T12:23:35Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: ingress-expose
  namespace: kube-system
  resourceVersion: "7043"
  uid: 5edddecd-886b-41d8-8d0f-690eea33c6d0
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.53.34.200
  clusterIPs:
  - 10.53.34.200
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 192.168.1.251
  ports:
  - name: https-internal
    nodePort: 31773
    port: 443
    protocol: TCP
    targetPort: 443
  - name: http
    nodePort: 30127
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: rke2-ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.251
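The VIP in question is the .spec.loadBalancerIP on that Service; a quick way to read just that field (a sketch, assuming kubectl is configured as in the command above):

# Print only the configured ingress VIP from the Service spec
kubectl -n kube-system get svc ingress-expose -o jsonpath='{.spec.loadBalancerIP}{"\n"}'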
I have updated the VIP in /oem/99_custom.yaml and rebooted, but the management bridge still has the interface addresses from the old network, i.e. the old management VIP and some load-balancer IPs from the vip-pool I created.
rancher@mira:~> ip addr sh mgmt-br
3: mgmt-br: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:5e:d3:83:97:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.252/24 brd 192.168.2.255 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.216/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.203/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.219/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.202/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.214/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.251/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.208/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
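If those leftover /32s just need to be cleared until whatever programs them (presumably kube-vip) re-adds the correct ones, a minimal sketch for removing them by hand, assuming every 192.168.1.x /32 above is stale and may simply be re-added on the next reconcile:

# Delete a single stale /32 from the bridge
sudo ip addr del 192.168.1.251/32 dev mgmt-br

# Or sweep every remaining 192.168.1.x /32 off mgmt-br in one pass
ip -4 -o addr show dev mgmt-br | awk '$4 ~ /^192\.168\.1\./ {print $4}' \
  | xargs -r -I{} sudo ip addr del {} dev mgmt-br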
The new VIP from /oem/99_custom.yaml is successfully built into these manifests, and the old management VIP address is absent from them:
rancher@mira:~> sudo grep -lr 192.168.2.251 /etc/rancher/
/etc/rancher/rancherd/config.yaml.d/13-monitoring.yaml
/etc/rancher/rancherd/config.yaml.d/10-harvester.yaml
/etc/rancher/rke2/config.yaml.d/90-harvester-server.yaml
I don't see the old VIP under /usr/local/cloud-config either, but there are many occurrences of the old VIP address in the state bind mounts under /usr/local/.state, which I'm beginning to dig through for clues about how the VIP is configured or persisted.
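To narrow that down, a grep sketch along the same lines as the one above, assuming the old VIP is the 192.168.1.251 shown in the Service output:

# List files under the state bind mounts that still contain the old address
sudo grep -rl 192.168.1.251 /usr/local/.state 2>/dev/null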
I'm not sure the /oem/99_custom.yaml file is used at all during a reboot. I have modified the initramfs commands array to include two additional commands, and they are not executed during a reboot.
(venv) rancher@mira:~> sudo yq '.stages.initramfs[0].commands' /oem/99_custom.yaml
- rm -f /etc/sysconfig/network/ifroute-mgmt-br
- chown :rancher /etc/rancher/rke2/rke2.yaml
- chmod g+r /etc/rancher/rke2/rke2.yaml
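Before concluding the file is ignored, it may be worth sanity-checking that the stage layout is what the boot-stage framework expects; a small sketch using the same yq:

# Show which stages the cloud-config defines and how many
# entries the initramfs stage has
sudo yq '.stages | keys' /oem/99_custom.yaml
sudo yq '.stages.initramfs | length' /oem/99_custom.yaml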
I can change the ingress VIP with k edit svc -n kube-system ingress-expose. However, this does not address persisting that configuration (a non-interactive equivalent is sketched after the list below).
• /var/lib/rancher/rancherd/working
• /var/lib/rancher/rancherd/bootstrapped
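The non-interactive equivalent of that edit is a merge patch on the same Service (a sketch; 192.168.2.251 is only my assumption for the intended new VIP):

# Set the new loadBalancerIP on the live Service object
kubectl -n kube-system patch svc ingress-expose \
  --type merge -p '{"spec":{"loadBalancerIP":"192.168.2.251"}}'

Like kubectl edit, though, this only changes the live object and doesn't answer the persistence question.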
prehistoric-balloon-31801
04/25/2023, 6:35 AM
refined-analyst-8898
04/25/2023, 9:54 AM