# rke2

handsome-monitor-68857

02/15/2023, 8:43 AM
I need to know if RKE2 supports CephFS like it does RBD.

agreeable-oil-87482

02/15/2023, 8:49 AM
RKE2 will support any storage/CSI driver that upstream k8s supports.
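For context, a minimal sketch of what that looks like in practice, assuming the upstream ceph-csi CephFS driver is installed on the cluster; the cluster ID, filesystem name, and secret names below are placeholders, not values from this thread:
```yaml
# Hypothetical StorageClass for the upstream ceph-csi CephFS driver
# (provisioner cephfs.csi.ceph.com); all IDs and names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-id>          # typically the output of `ceph fsid`
  fsName: <cephfs-filesystem-name>
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: cephfs
reclaimPolicy: Delete
```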

handsome-monitor-68857

02/15/2023, 8:56 AM
I've managed to install CephFS, but got this 'selfLink was empty' error, as shown below:
```
an@MyDesktop:~/MyWorks/RKE/Ceph/CephFS$ kubectl -n cephfs logs deployments/cephfs-provisioner
I0215 08:11:48.513447       1 cephfs-provisioner.go:411] Creating CephFS provisioner ceph.com/cephfs with identity: cephfs-provisioner-1, secret namespace: cephfs
I0215 08:11:48.514233       1 leaderelection.go:185] attempting to acquire leader lease  cephfs/ceph.com-cephfs...
E0215 08:11:48.631097       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ceph.com-cephfs", GenerateName:"", Namespace:"cephfs", SelfLink:"", UID:"bbe3dc06-7842-4a9e-9cdf-2376afcb6b33", ResourceVersion:"47819351", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63812045508, loc:(*time.Location)(0x19b4b00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"cephfs-provisioner-7dd49fc7-mrh6w_650e6409-ad08-11ed-8f64-d601a5529059\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2023-02-15T08:11:48Z\",\"renewTime\":\"2023-02-15T08:11:48Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'cephfs-provisioner-7dd49fc7-mrh6w_650e6409-ad08-11ed-8f64-d601a5529059 became leader'
I0215 08:11:48.631218       1 leaderelection.go:194] successfully acquired lease cephfs/ceph.com-cephfs
I0215 08:11:48.631320       1 controller.go:631] Starting provisioner controller ceph.com/cephfs_cephfs-provisioner-7dd49fc7-mrh6w_650e6409-ad08-11ed-8f64-d601a5529059!
I0215 08:11:48.731602       1 controller.go:680] Started provisioner controller ceph.com/cephfs_cephfs-provisioner-7dd49fc7-mrh6w_650e6409-ad08-11ed-8f64-d601a5529059!
I0215 08:12:20.821324       1 controller.go:987] provision "cephfs/cephfs-claim1" class "cephfs": started
E0215 08:12:20.831245       1 controller.go:1004] provision "cephfs/cephfs-claim1" class "cephfs": unexpected error getting claim reference: selfLink was empty, can't make reference
```

agreeable-oil-87482

02/15/2023, 8:59 AM
Which version of k8s and the ceph chart are you using?
`selfLink` was removed in upstream k8s in 1.20 iirc

handsome-monitor-68857

02/15/2023, 9:03 AM
K8s: v1.24.8+rke2r1, Ceph: ceph-csi-rbd-3.7.2

agreeable-oil-87482

02/15/2023, 9:29 AM
Which method did you use to install CephFS?

handsome-monitor-68857

02/16/2023, 5:05 AM
I used this CephFS provisioner, which seems to be retired: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs
I changed my approach to the following:
• create a PV with a spec for the CephFS config (monitors, secret)
• create a PVC to claim storage from that PV
• mount pods from that PVC
This way I don't need a CephFS provisioner, but the volumes must be created manually.
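For illustration, a minimal sketch of that manual approach using the in-tree `cephfs` volume type; the monitor address, secret name, and size are placeholder values, not the ones from this cluster:
```yaml
# Hypothetical static CephFS PV/PVC pair; monitor, secret, and size are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  cephfs:
    monitors:
      - 10.0.0.1:6789          # Ceph monitor address
    user: admin
    secretRef:
      name: cephfs-secret      # Secret holding the Ceph user key
      namespace: cephfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-claim1
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind statically, skip dynamic provisioning
  volumeName: cephfs-pv
  resources:
    requests:
      storage: 10Gi
```
A pod can then mount `cephfs-claim1` like any other PVC; no provisioner is involved, but each volume has to be defined by hand, as noted above.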

agreeable-oil-87482

02/16/2023, 11:05 AM
There's a more recent CSI driver you should consider instead of that retired project