# rke2
c
You need the storage to be accessible from any node where you want to mount volumes. The host mounts the volume, and the pod then accesses the filesystem.
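The flow described above (PVC provisioned by the CSI driver, mounted by the kubelet on the host, consumed as a filesystem by the pod) can be sketched with a minimal manifest. The StorageClass name `vsphere-csi-sc` here is an assumption; use whatever your cluster actually defines (`kubectl get storageclass`).

```yaml
# Sketch only: a PVC backed by a vSphere CSI StorageClass, and a pod mounting it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsphere-csi-sc   # assumed name; check your cluster
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # the kubelet on the host node performs the actual mount
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```

The point being made in the chat: it is the node hosting `demo-pod` that must reach the storage backend, since the mount happens on the host, not inside the container.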
l
@creamy-pencil-82913 thanks for your reply. It seems not. I ran a test in which my VM could only reach the vCenter node and could not access the ESXi nodes or vSAN IP addresses, yet the pod could still mount the PVC and read/write data in the volume. That was surprising. The more specific question is how to design the underlying networking between the VM nodes and the vSphere nodes. It's common to use a separate storage network for vSAN; should we set up a high-speed link between the vSAN network and the k8s VMs' network? I have no idea about that.
c
the CPI and CSI controller pods need to reach vSphere. Nodes that are only mounting storage need to reach the vSAN. Basically, things need to reach the things they talk to.
I would probably go check out the vSphere CPI and CSI docs if you have more questions.
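A quick way to sanity-check the reachability rules above is a small TCP connectivity probe run from each node. This is a sketch, not part of the vSphere tooling: the hostnames, IPs, and ports below are placeholders (vCenter's API is commonly on 443; substitute your real vCenter and vSAN/ESXi addresses).

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder endpoints -- substitute your own addresses.
    endpoints = [
        ("vcenter.example.local", 443),  # CPI/CSI controller pods need this
        ("10.0.20.11", 2233),            # example vSAN data-path address, for nodes mounting volumes
    ]
    for host, port in endpoints:
        status = "OK  " if reachable(host, port) else "FAIL"
        print(f"{status} {host}:{port}")
```

Running this on the CPI/CSI controller nodes should show the vCenter endpoint as reachable, and on any node that mounts volumes the storage-network endpoints should be reachable; a FAIL line tells you which network path is missing.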
l
@creamy-pencil-82913 great, thanks for your help. Do you mean we could have two kinds of nodes for vsphere-csi: 1) VMs running on top of vSphere, and 2) nodes that do not run on top of vSphere, like bare-metal ones? However, I have heard that vsphere-csi cannot be used in hybrid mode, meaning that if I try to add bare-metal nodes to an rke2 cluster with vsphere-csi enabled, it will throw an error. I will check the docs you pointed me to. Thanks again.