# harvester
r
rancher@mira:~> kubectl get svc -n kube-system ingress-expose -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-03-20T12:23:35Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: ingress-expose
  namespace: kube-system
  resourceVersion: "7043"
  uid: 5edddecd-886b-41d8-8d0f-690eea33c6d0
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.53.34.200
  clusterIPs:
  - 10.53.34.200
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 192.168.1.251
  ports:
  - name: https-internal
    nodePort: 31773
    port: 443
    protocol: TCP
    targetPort: 443
  - name: http
    nodePort: 30127
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: rke2-ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.251
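For a quick check of just the advertised VIP, the same Service can be queried with a jsonpath expression (a minimal sketch using standard kubectl output options; the namespace and Service name are taken from the output above):
# the VIP requested in the spec
kubectl -n kube-system get svc ingress-expose -o jsonpath='{.spec.loadBalancerIP}{"\n"}'
# the address actually announced in the status
kubectl -n kube-system get svc ingress-expose -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'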
Relatedly, this GH Issue aims to create the missing doc: https://github.com/harvester/harvester/issues/2418
I tried modifying /oem/99_custom.yaml and rebooting, but the management bridge still has the interface addresses from the old network, i.e. the management VIP and some load balancer IPs from the vip-pool I created.
rancher@mira:~> ip addr sh mgmt-br
3: mgmt-br: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:5e:d3:83:97:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.252/24 brd 192.168.2.255 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.216/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.203/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.219/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.202/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.214/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.251/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 192.168.1.208/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
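If an immediate cleanup is wanted, the stale addresses can be removed from the bridge by hand (a hedged sketch only; the addresses are the old-network ones listed above, and whatever announces the VIP and load balancer IPs may simply re-add them until the persisted configuration is fixed):
# drop the old management VIP and the old vip-pool addresses from mgmt-br
for ip in 192.168.1.251 192.168.1.216 192.168.1.203 192.168.1.219 192.168.1.202 192.168.1.214 192.168.1.208; do
  sudo ip addr del "${ip}/32" dev mgmt-br
done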
Another PR is poised to contribute this new article about updating Harvester configuration after install: https://deploy-preview-296--harvester-preview.netlify.app/v1.1/install/update-harvester-configuration/, but it doesn't cover changing the management VIP.
I can see that the changed management IP I specified in /oem/99_custom.yaml is successfully built into these manifests, and the old management VIP address is absent.
rancher@mira:~> sudo  grep -lr 192.168.2.251 /etc/rancher/
/etc/rancher/rancherd/config.yaml.d/13-monitoring.yaml
/etc/rancher/rancherd/config.yaml.d/10-harvester.yaml
/etc/rancher/rke2/config.yaml.d/90-harvester-server.yaml
On closer inspection of those manifests, the only recurrence of the VIP address is for configuring Alertmanager and specifying the IP SAN for the server certificate, so I suspect the VIP address persistence is somewhere downstream of the Elemental cloud-config. I don't have any directives in /usr/local/cloud-config, but there are many occurrences of the old VIP address in the state bind mounts under /usr/local/.state, which I'm beginning to dig through for clues about how the VIP is configured or persisted.
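One way to narrow that search is to grep the persistent locations mentioned so far for the old VIP (a sketch; 192.168.1.251 is the old VIP and the paths are the ones named in this thread):
sudo grep -rl 192.168.1.251 /etc/rancher/ /usr/local/cloud-config /usr/local/.state 2>/dev/null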
I do not think the /oem/99_custom.yaml file is used at all during a reboot. I have modified the initramfs commands array to have two additional commands, and they are not executed during a reboot.
(venv) rancher@mira:~> sudo yq '.stages.initramfs[0].commands' /oem/99_custom.yaml 
- rm -f /etc/sysconfig/network/ifroute-mgmt-br
- chown :rancher /etc/rancher/rke2/rke2.yaml
- chmod g+r /etc/rancher/rke2/rke2.yaml
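A simple way to test whether the stage executes at all is to append a harmless marker command and look for its side effect after a reboot (a hypothetical experiment; the marker path is an assumption, and any writable, persistent location would do):
# append a hypothetical marker command to the same initramfs commands array (yq v4 syntax)
sudo yq -i '.stages.initramfs[0].commands += ["date > /usr/local/initramfs-stage-ran"]' /oem/99_custom.yaml
# after rebooting, the marker's presence or absence shows whether the stage ran
ls -l /usr/local/initramfs-stage-ran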
I discovered it is possible to effect the desired configuration manually by editing the Service resource's IP address, e.g. k edit svc -n kube-system ingress-expose. However, this does not address persisting that configuration.
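For reference, the same one-off change can be made non-interactively (a sketch; it assumes the field being edited is the spec.loadBalancerIP shown in the earlier output and that 192.168.2.251 is the new VIP, and as noted it still does not persist the change):
kubectl -n kube-system patch svc ingress-expose --type merge \
  -p '{"spec":{"loadBalancerIP":"192.168.2.251"}}'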
I additionally munged the following files where the old VIP address recurred, substituting the new (see the sketch after this list). The new VIP is intact after a reboot and the TLS certificate has the new VIP address as an IP SAN. Somehow I doubt this was the best procedure, but it seems to have been successful.
• /var/lib/rancher/rancherd/working
• /var/lib/rancher/rancherd/bootstrapped
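A sketch of that substitution, assuming a plain old-to-new rewrite of the VIP in whichever of those files still contain it (illustrative commands, not necessarily the exact ones used):
# rewrite the old VIP to the new one under the rancherd state paths listed above
sudo grep -rl 192.168.1.251 /var/lib/rancher/rancherd/working /var/lib/rancher/rancherd/bootstrapped \
  | xargs -r sudo sed -i 's/192\.168\.1\.251/192.168.2.251/g'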
p
@refined-analyst-8898 Sorry, just to confirm again: are you changing the VIP or the node's IP?
r
First I moved the node to a new network and the node IP updated automatically with the new static lease. Then I started this thread about changing the management VIP to the new network.