
late-needle-80860

02/02/2023, 9:49 AM
I’ve been debating dedicating an LVM2 volume group to containerd and kubelet data … in this issue: https://github.com/k3s-io/k3s/issues/2068 … I’ve followed Mr. @*mdrakiburrahman*’s suggestions in that issue and also symlinked /var/lib/kubelet to the dedicated disk >> this to get all Pod-related data onto that disk. Now I’m unfortunately seeing, for workloads using PVCs, the following error:
MountVolume.SetUp failed for volume "pvc-7dc11d73-3595-47a9-bb02-a95f23518ca5" : applyFSGroup failed for vol pvc-7dc11d73-3595-47a9-bb02-a95f23518ca5: lstat /k3s-worker-data/kubelet/pods/12bfddbb-a8fe-4edb-9620-6e96f40ce840/volumes/kubernetes.io~csi/pvc-7dc11d73-3595-47a9-bb02-a95f23518ca5/mount: no such file or directory
If I create the mount directory manually, files are created in it … However, comparing its permissions with those of another workload running on a cluster where I’m NOT symlinking to dedicate a disk to container data, the permissions are different. On the non-working node it’s the fsGroup on the files in the sub-dirs of mount that differs. I’m in doubt about how to set these permissions, if I need to at all. The stateful workload I’m troubleshooting does set fsGroup in the SecurityContext of its kind: StatefulSet manifest. Any ideas? Suggestions are VERY WELCOME
My reason for symlinking the kubelet dir from /var/lib/kubelet to myDir is that if I didn’t, I would get CSINode … driver.longhorn.io not found, as that driver would end up in /var/lib/kubelet instead of the new kubelet root-dir I set via --kubelet-arg root-dir=… And yes, I’ve also configured the kubeletRootDir variable of the Longhorn Helm chart on the CSI component.
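To be concrete, the combination of settings I’m describing looks roughly like the sketch below; the /k3s-worker-data/kubelet path is the one from the error above, and passing kubelet-arg via the k3s config file (rather than the CLI flag) is just one way of doing it:
```yaml
# /etc/rancher/k3s/config.yaml – point the kubelet at the dedicated disk
# (equivalent to --kubelet-arg root-dir=… on the k3s command line)
kubelet-arg:
  - "root-dir=/k3s-worker-data/kubelet"
---
# Longhorn Helm chart values – must match the kubelet root-dir above
csi:
  kubeletRootDir: /k3s-worker-data/kubelet
```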
I can also see that in /var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/SOME_VOLUME/ the globalmount dir has 1001 in the gid section when e.g. viewing permissions with ls -la … and that’s NOT the case on the node where I’ve symlinked the kubelet and containerd directories.
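For the comparison I’m simply looking at the group column of that dir on both nodes, something along these lines (SOME_VOLUME being a real volume dir under the plugins path):
```sh
# Compare group ownership of the CSI global mount on the two nodes
# (SOME_VOLUME is a placeholder – substitute an actual volume dir)
ls -ld /var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/SOME_VOLUME/globalmount
```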
@creamy-pencil-82913 would you be so kind, if you have any input on this one … me again, from the good ol’ #2068 on the K3s repo … trying in the Longhorn channel here, as Longhorn is the final thing not fully working after doing what I describe above. Any help is highly appreciated.
The node-driver-registrar does not throw any errors … however, it seems to be registering the driver to /var/lib/kubelet and not reflecting the kubeletRootDir value.
Hmm, diving deeper I see that the Longhorn project uses https://github.com/kubernetes-csi/node-driver-registrar … and there’s a --kubelet-registration-path parameter … However, if one changes the --kubelet-arg root-dir parameter, this is not reflected in the node-driver-registrar parameters …
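From what I understand of typical CSI deployments (this is a generic sketch, not Longhorn’s actual manifest), the registrar container in the CSI plugin DaemonSet is the piece that has to learn about the new root-dir:
```yaml
# Generic sketch of a node-driver-registrar container in a CSI plugin DaemonSet –
# argument values are typical upstream defaults, not copied from the Longhorn chart
- name: node-driver-registrar
  image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
  args:
    - --v=2
    - --csi-address=$(ADDRESS)
    # must point at the driver socket under the kubelet's root-dir, so it has to
    # change together with --kubelet-arg root-dir
    - --kubelet-registration-path=/k3s-worker-data/kubelet/plugins/driver.longhorn.io/csi.sock
  env:
    - name: ADDRESS
      value: /csi/csi.sock   # socket exposed by the CSI plugin container via a shared volume
```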
So setting csi.kubeletRootDir in the values.yaml file of the Longhorn Helm chart sets the KUBELET_ROOT_DIR env var in the longhorn-driver-deployer Deployment … and that is reflected in the longhorn-csi-plugin DaemonSet workload … However, if one has a cluster already running Longhorn and wants to change the kubelet and containerd data paths … then apparently it’s not enough to set csi.kubeletRootDir and then delete the longhorn-csi-plugin DaemonSet, as it doesn’t come back… I ended up having to uninstall Longhorn >> install it again >> the newly created longhorn-csi-plugin DaemonSet then reflected the csi.kubeletRootDir setting. --- For people wanting to change the kubelet and containerd paths who are running Longhorn: is there a better way than uninstalling Longhorn, as that has a potentially prolonged downtime effect? ---
Thank you very much.
Ey yo 😄 - anyone
Anyone with an idea on this? Thanks. What’s the process for updating and applying the csi.kubeletRootDir value when Longhorn is deployed via the Helm chart?

narrow-egg-98197

02/16/2023, 11:06 AM
I guess it should also include re-creating longhorn-manager, since the CSI node server and controller server interfaces are implemented by longhorn-manager. https://github.com/longhorn/longhorn-manager/tree/master/csi
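In command form, that recreation step might look something like the sketch below; the namespace and resource names are the chart defaults rather than anything taken from this thread, so verify them against your own installation first:
```sh
# Recreate the Longhorn components that hold the CSI server implementations
# (names assume the default "longhorn-system" namespace and chart resource names)
kubectl -n longhorn-system rollout restart daemonset/longhorn-manager
kubectl -n longhorn-system rollout restart deployment/longhorn-driver-deployer

# Watch the CSI plugin pods come back and check they use the new kubelet root dir
kubectl -n longhorn-system get pods -l app=longhorn-csi-plugin -w
```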

late-needle-80860

02/17/2023, 7:41 AM
Alright, cool. I’ll try the “recreate the longhorn-manager” approach.
😀 1