
shy-tent-66642

05/08/2023, 5:39 AM
Hi all, I have a single-node k3s cluster. k3s ships local-path (https://github.com/rancher/local-path-provisioner) as the default StorageClass, which provisions dynamic volumes on the node's local storage. We have a StatefulSet whose volume-related parts (volumeMounts, volumes and the PVC template) look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: sceptre-themes-manager
  name: sceptre-themes-manager
  namespace: enterprise
spec:
  replicas: 1
  serviceName: "sceptre-themes-manager"
  selector:
    matchLabels:
      app: sceptre-themes-manager
  template:
    metadata:
      labels:
        app: sceptre-themes-manager
    spec:
      containers:
        .
        .
        .
        volumeMounts:
        - mountPath: /storage
          name: sceptre-repo-storage
      volumes:
      - name: sceptre-repo-storage
        #persistentVolumeClaim:
          #claimName: data
      imagePullSecrets:
      - name: xxxxxx
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      volumeMode: Filesystem
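For reference, a quick way to double-check which StorageClass and provisioner the generated claim actually ends up on (plain kubectl; the claim name is the one the StatefulSet generates from the template, shown further down):

# confirm local-path is the (default) StorageClass and which provisioner backs it
kubectl get storageclass
kubectl describe storageclass local-path

# confirm which class the generated claim requested
kubectl get pvc data-sceptre-themes-manager-0 -n enterprise -o jsonpath='{.spec.storageClassName}'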
Once I create the StatefulSet, I can see the PVC and PV being created and going to Bound state:
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                      STORAGECLASS   REASON   AGE
pvc-aa65bc02-d0bf-432d-a943-dc1223e03687   20Gi       RWO            Delete           Bound    enterprise/data-sceptre-themes-manager-0   local-path              2d8h
kubectl get pvc -n enterprise
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-sceptre-themes-manager-0   Bound    pvc-aa65bc02-d0bf-432d-a943-dc1223e03687   20Gi       RWO            local-path     2d8h
I exec into the StatefulSet's container and can see that /storage is mounted:
root@sceptre-themes-manager-0:/usr/src/app# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         547G   69G  450G  14% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sda4       547G   69G  450G  14% /storage
shm              64M     0   64M   0% /dev/shm
tmpfs            63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
I cd into /storage and touch a file called test. Then I delete the StatefulSet; the PVC and PV remain intact, still Bound. I recreate the StatefulSet, exec into the new container, and the test file is gone. On the node I can see a directory with 777 permissions being created under /var/lib/rancher/k3s/storage/ for this volume:
root@ent-edge-chennai-metal:/var/lib/rancher/k3s/storage# ls -lat
drwxrwxrwx 2 root root 4096 May  8 04:32 pvc-11089e28-a17b-4aee-8d6e-7429a0b1f752_enterprise_data-sceptre-themes-manager-0
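To make the repro steps concrete, this is roughly the sequence (I actually ran touch/ls from a shell inside the container, and the manifest file name here is just illustrative):

# write a marker file into the mounted volume
kubectl exec -n enterprise sceptre-themes-manager-0 -- touch /storage/test

# delete only the StatefulSet; the PVC and PV stay Bound
kubectl delete statefulset sceptre-themes-manager -n enterprise

# recreate it from the same manifest (file name is illustrative)
kubectl apply -f sceptre-themes-manager.yaml

# the marker file is gone in the new pod
kubectl exec -n enterprise sceptre-themes-manager-0 -- ls -la /storage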
Any idea what might be wrong? Is there something here I am missing?
BTW, I have created the PVC and Pod from the project's official repo (https://github.com/rancher/local-path-provisioner#usage), and that example pod does retain data across pod deletions. I am more confused now.
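For comparison, the upstream example I tested is essentially a standalone PVC plus a Pod that references the claim by name under spec.volumes, along these lines (image and storage size are approximate; see the linked README for the exact manifests):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    volumeMounts:
    # mount the claimed volume into the container
    - name: volv
      mountPath: /data
  volumes:
  # the Pod wires the claim in explicitly by name
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc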