# harvester
c
Did the VM IPs change during the upgrade? I'm not sure of the answer tbh, but have you tried redeploying the pods, or maybe just a single pod using RWX?
b
All the same IPs. This is a new workload for RWX, so it's the first pod. I tried enabling "Storage Network for RWX Volume Enabled" in Longhorn, but that just changed the IP in the "network unreachable" message to one in the storage network range.
s
Hi @brainy-kilobyte-33711, did your guest VM have the same VLAN network? Enabling `Storage Network for RWX Volume Enabled` is correct.
b
(responded in github issue)
s
Thanks @brainy-kilobyte-33711, I am still looking through your SB. Could you simply run `ip a` to check the network on the guest VM? The `10.150.8.44` you mentioned on the GH issue is in the VLAN scope, right?
b
Sent you a DM with the SB. That IP falls within the storage network configured on Harvester:
{
  "clusterNetwork": "storage",
  "range": "10.150.8.0/22",
  "vlan": 101
}
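As a quick sanity check that 10.150.8.44 really does sit inside that /22, here is a self-contained sketch using plain bash arithmetic (no cluster access needed; the addresses are the ones quoted above):

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net=$(ip_to_int 10.150.8.0)     # storage network base (from the JSON above)
addr=$(ip_to_int 10.150.8.44)   # address seen in the error message
mask=$(( (0xFFFFFFFF << (32 - 22)) & 0xFFFFFFFF ))  # /22 netmask

if [ $(( addr & mask )) -eq $(( net & mask )) ]; then
  result="in-range"
else
  result="out-of-range"
fi
echo "$result"   # prints "in-range"
```

The same arithmetic also shows the storage range (10.150.8.0-10.150.11.255) does not overlap the compute network 10.150.2.0/23.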
Here is `ip a` from a guest RKE2 VM. The guest RKE2 cluster is deployed with Cilium. The compute network is 10.150.2.0/23 (VLAN 100) and the management network is 10.150.1.0/24 (VLAN 99).
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0e:dc:4d:0c:07:d9 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 10.150.2.30/23 brd 10.150.3.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::cdc:4dff:fe0c:7d9/64 scope link
       valid_lft forever preferred_lft forever
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:ba:06:a4:97:79 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30ba:6ff:fea4:9779/64 scope link
       valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:f2:9f:61:63:80 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.120/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::b8f2:9fff:fe61:6380/64 scope link
       valid_lft forever preferred_lft forever
5: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 22:77:c7:4d:9f:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2077:c7ff:fe4d:9f70/64 scope link
       valid_lft forever preferred_lft forever
7: lxc_health@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:ec:38:af:f5:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::10ec:38ff:feaf:f50d/64 scope link
       valid_lft forever preferred_lft forever
9: lxc0bb199b1fbe5@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:fe:7b:a4:07:1e brd ff:ff:ff:ff:ff:ff link-netns cni-a9028fb6-8c7d-6a68-8d24-5d50b809c69a
    inet6 fe80::84fe:7bff:fea4:71e/64 scope link
       valid_lft forever preferred_lft forever
11: lxce31a6fe8b6db@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:2a:37:3e:f0:23 brd ff:ff:ff:ff:ff:ff link-netns cni-80fb6a54-7bfb-6ccc-df61-27be4592baef
    inet6 fe80::d42a:37ff:fe3e:f023/64 scope link
       valid_lft forever preferred_lft forever
13: lxc3f0c3f829df3@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c6:b0:bd:d8:35:24 brd ff:ff:ff:ff:ff:ff link-netns cni-d9161f1c-24b7-a1bd-719e-e556777e73c6
    inet6 fe80::c4b0:bdff:fed8:3524/64 scope link
       valid_lft forever preferred_lft forever
15: lxc33c872469176@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:76:3f:04:33:40 brd ff:ff:ff:ff:ff:ff link-netns cni-52e2910b-726e-8b69-13a5-8f639ddaeb98
    inet6 fe80::3476:3fff:fe04:3340/64 scope link
       valid_lft forever preferred_lft forever
17: lxc63cca5d8b042@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:49:97:6c:c3:7c brd ff:ff:ff:ff:ff:ff link-netns cni-16cd1feb-bfb4-d938-f770-8418aadf6a1d
    inet6 fe80::449:97ff:fe6c:c37c/64 scope link
       valid_lft forever preferred_lft forever
19: lxc34fd233894e1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:ac:e0:e8:8c:aa brd ff:ff:ff:ff:ff:ff link-netns cni-767b0f21-d4c1-9816-b2e1-1cde1e3b121f
    inet6 fe80::f4ac:e0ff:fee8:8caa/64 scope link
       valid_lft forever preferred_lft forever
21: lxc43aae4f9b81a@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:c2:8d:74:a7:35 brd ff:ff:ff:ff:ff:ff link-netns cni-1cd2a25f-d744-85bb-b234-27831f10fb02
    inet6 fe80::c2:8dff:fe74:a735/64 scope link
       valid_lft forever preferred_lft forever
23: lxc626ed4db3d11@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:22:eb:d8:52:10 brd ff:ff:ff:ff:ff:ff link-netns cni-bdb26552-1cc1-9bb8-0724-360fa3e1923b
    inet6 fe80::ac22:ebff:fed8:5210/64 scope link
       valid_lft forever preferred_lft forever
25: lxca315cc647dc5@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:57:d1:e1:2a:7b brd ff:ff:ff:ff:ff:ff link-netns cni-91fe3e8d-083b-b345-83e5-61603e3ea726
    inet6 fe80::3857:d1ff:fee1:2a7b/64 scope link
       valid_lft forever preferred_lft forever
27: lxce43013fad42b@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:85:b1:32:22:0e brd ff:ff:ff:ff:ff:ff link-netns cni-d2eb2dff-6f96-1dca-2527-da06ca4702f9
    inet6 fe80::1885:b1ff:fe32:220e/64 scope link
       valid_lft forever preferred_lft forever
29: lxcdb3c5f778896@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:60:07:a9:64:77 brd ff:ff:ff:ff:ff:ff link-netns cni-b73b9da5-e009-9e4a-ac95-20ccdf724319
    inet6 fe80::460:7ff:fea9:6477/64 scope link
       valid_lft forever preferred_lft forever
31: lxcddcceecd04a2@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:4e:47:a4:0f:09 brd ff:ff:ff:ff:ff:ff link-netns cni-51511a83-bf92-527a-827d-388fd0afa4cb
    inet6 fe80::604e:47ff:fea4:f09/64 scope link
       valid_lft forever preferred_lft forever
33: lxcde7845a4268a@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:7d:eb:62:ef:99 brd ff:ff:ff:ff:ff:ff link-netns cni-4f8f7a59-625f-0626-be6c-cd59c32e2aa3
    inet6 fe80::e07d:ebff:fe62:ef99/64 scope link
       valid_lft forever preferred_lft forever
35: lxc397c982d2df7@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:c2:31:45:2c:27 brd ff:ff:ff:ff:ff:ff link-netns cni-7536f535-67dd-e019-1811-e3af883e91ba
    inet6 fe80::5cc2:31ff:fe45:2c27/64 scope link
       valid_lft forever preferred_lft forever
37: lxca531dca7aee7@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:c3:16:b3:ab:54 brd ff:ff:ff:ff:ff:ff link-netns cni-439f3773-cf39-d479-85d4-91fb57607b2a
    inet6 fe80::8c3:16ff:feb3:ab54/64 scope link
       valid_lft forever preferred_lft forever
39: lxca3f6fb3d0dcd@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:6e:5c:73:04:07 brd ff:ff:ff:ff:ff:ff link-netns cni-8a30a43a-8d6e-06d6-7cbe-e169cc7e9566
    inet6 fe80::86e:5cff:fe73:407/64 scope link
       valid_lft forever preferred_lft forever
41: lxc0d77bc3a2806@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:02:36:a7:fd:ec brd ff:ff:ff:ff:ff:ff link-netns cni-f0369d79-e681-8d4b-e888-dd1f4deba314
    inet6 fe80::8c02:36ff:fea7:fdec/64 scope link
       valid_lft forever preferred_lft forever
43: lxcbc99547ad4e4@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:be:3f:11:c9:7a brd ff:ff:ff:ff:ff:ff link-netns cni-a1a57dbe-e883-f5d2-2925-fa4bdd659f25
    inet6 fe80::40be:3fff:fe11:c97a/64 scope link
       valid_lft forever preferred_lft forever
51: lxc0138b1204622@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:77:c0:65:54:9d brd ff:ff:ff:ff:ff:ff link-netns cni-90174479-c61f-d717-dde3-89303df6cdc7
    inet6 fe80::4077:c0ff:fe65:549d/64 scope link
       valid_lft forever preferred_lft forever
s
Hi @brainy-kilobyte-33711, could you check again whether your VM networks (VLAN 100/VLAN 99) can reach the storage network (VLAN 101)? That depends on your gateway. I tested locally, and it works when they are on the same VLAN.
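One way to check this from inside the guest VM (a sketch only: `10.150.8.1` is an assumed gateway address for the storage VLAN, substitute your actual one):

```shell
# Run inside the guest VM. 10.150.8.1 is a hypothetical storage-VLAN gateway.
ip route                 # is there any route covering 10.150.8.0/22?
ping -c 3 10.150.8.1     # can the guest reach the storage VLAN at all?
```

If there is no route covering the storage range and no gateway willing to route between the VLANs, traffic to the share-manager address will fail with exactly the "Network is unreachable" error above.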
b
Hi - as we have the storage network configured and the VMs are launching fine with disks attached, I assume there are no issues with the network setup? Is there additional connectivity we need to ensure for this compared to a standard setup?
s
Hi @brainy-kilobyte-33711, the VM and hot-plug volumes are not related to the storage network; only the CIDR matters there. Could you try adding an extra NIC on VLAN 101 to the guest VM and then provisioning the pod with the RWX volume?
b
I added another NIC, but it's the same error. The NIC did not get an IP assigned to it.
s
It needs a DHCP server on that VLAN.
Or you can try manually assigning an IP to the NIC; just make sure the IP does not conflict with anything else.
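For example (a sketch only: `eth1` and `10.150.8.250` are assumed names; pick an address you know is unused in the range, and run this inside the guest VM):

```shell
# Assumed: eth1 is the newly attached VLAN 101 NIC, 10.150.8.250 is unused.
sudo ip link set eth1 up
sudo ip addr add 10.150.8.250/22 dev eth1
# Important: do not add a default route via eth1. The existing default
# route over eth0 should stay in place so external connectivity survives.
ip route show
```

Adding only the address (and no new routes) should leave the VM's existing connectivity untouched while giving it a foothold on the storage VLAN.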
b
The Longhorn instance-manager pods in Harvester are all getting IPs from the storage network, so I assume there is already a DHCP server somewhere?
s
Those rely on the Whereabouts IPAM plugin rather than a real DHCP server. That's why the NIC cannot get an IP when you attach it to the VM.
b
I manually added an IP in the storage-network CIDR to the new NIC and lost all external connectivity to the VM. The RWX pod on that node now fails to start with "connect: no route to host" rather than "Network is unreachable".
s
I mean you can use the `ip` command to add a static IP inside the guest VM.
b
That's what I did: attached the NIC, then added an IP to it using the `ip` command in the VM.
s
Could you show the `ip a` output again?
It's weird, because just adding an extra NIC should not break anything. Or did you also change any routes?
b
Adding the extra NIC was fine; it was when I assigned an IP to it with the `ip` command that external connectivity went down. I cannot capture `ip a` with the IP assigned because I can no longer SSH to the VM. I did not change any routes.
Did it work in your local test setup with a storage network configured?
s
Yeah, I tested again yesterday, and it works.
Hmm, could you remove the extra NIC and confirm the network is accessible again?
b
Yep, NIC removed and after a reboot everything is back to normal. Are there any other connectivity differences between an RWO volume and an RWX volume?
s
An RWO volume relies on the KubeVirt hot-plug volume mechanism. An RWX volume depends on the CSI driver and an NFS endpoint served by the Longhorn share-manager.
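A way to see that second path (Longhorn defaults assumed; exact pod and service names vary by install): when an RWX volume is attached, Longhorn runs a share-manager pod in `longhorn-system` that exports the volume over NFS, and the CSI driver on each node mounts that export.

```shell
# Sketch: inspect the NFS-serving side of an RWX volume. Run against the
# cluster where Longhorn is installed; names below are Longhorn defaults.
kubectl -n longhorn-system get pods | grep share-manager
kubectl -n longhorn-system get svc  | grep share-manager
```

The share-manager endpoint is the address the failing node has to reach, which is why RWX is sensitive to the storage network in a way RWO is not.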
b
Appreciate the help so far with this, but I am going on annual leave for a while now. I will pick it up again in the new year.