# k3s
Hi @bored-farmer-36655, Thank you for your reply! I found this guide: https://mrkandreev.name/snippets/how_to_move_k3s_data_to_another_location/ As I understand it, it's possible to move the data after the cluster has been installed. Second thing: if I don't want to use external storage/DB and I don't want to disable local-storage (--disable local-storage), but instead use the VM's storage (the sdb LVM disk), is it possible to specify the storage path (for example, /data in place of /var and /data/tmp in place of /var/tmp) during installation? Perhaps @creamy-pencil-82913 could help with that as well, please? Br,
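For context, the k3s flags that relocate its own state and the default local-path volume directory (they move what would otherwise live under /var/lib/rancher/k3s, not all of /var) look roughly like this with the install script used later in this thread; paths are illustrative:
# Sketch only: relocate the k3s data dir and the bundled local-path storage dir to /data.
INSTALL_K3S_EXEC="--data-dir=/data --default-local-storage-path=/data" sh install.sh server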
@breezy-autumn-81048 I'm assuming that's all covered in the chart options: https://github.com/rancher/local-path-provisioner/blob/master/deploy/chart/local-path-provisioner/README.md
Hi @bored-farmer-36655, Thank you! I have installed local-path-provisioner in the namespace where the pods will be running, then created pvc.yaml and ran my deployment. pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: my-runners
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 500Gi
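Once a consumer pod is scheduled, the claim can be checked with something like this (the storage class further down uses volumeBindingMode: WaitForFirstConsumer, so the PVC stays Pending until then):
# Verify the PVC binds; describe shows events if it does not.
kubectl -n my-runners get pvc local-path-pvc
kubectl -n my-runners describe pvc local-path-pvc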
deployment.yaml:
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: my-runners
  namespace: my-runners
spec:
  template:
    spec:
      organization: main
      labels:
        - x-medium
        - medium
      group: Default
      resources:
        limits:
          cpu: "0.25"
          memory: "0.5Gi"
          ephemeral-storage: "2Gi"
        requests:
          cpu: "0.125"
          memory: "0.5Gi"
          ephemeral-storage: "1Gi"
      volumeMounts:
        - name: local-path-storage
          mountPath: /data
      volumes:
        - name: local-path-storage
          persistentVolumeClaim:
            claimName: local-path-pvc
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: my-runners-autoscaler
  namespace: my-runners
spec:
  scaleDownDelaySecondsAfterScaleOut: 300
  scaleTargetRef:
    kind: RunnerDeployment
    name: my-runners
  minReplicas: 6
  maxReplicas: 16
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'
    scaleDownFactor: '0.5'
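After applying, the runner resources and the provisioned volume can be listed with something like this (resource names taken from the manifests above):
# List the ARC resources, the runner pods, and any local-path PVs.
kubectl -n my-runners get runnerdeployments,horizontalrunnerautoscalers
kubectl -n my-runners get pods -o wide
kubectl get pv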
df -h before I run deployment.yaml:
df -h                                                        yc9611.danskenet.net: Tue Nov 14 10:37:41 2023

Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                             12G     0   12G   0% /dev
tmpfs                                12G     0   12G   0% /dev/shm
tmpfs                                12G  2.7M   12G   1% /run
tmpfs                                12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/systemvg-rootlv         2.0G  217M  1.8G  11% /
/dev/mapper/systemvg-usrlv           10G  2.4G  7.7G  24% /usr
/dev/sda2                          1014M  227M  788M  23% /boot
/dev/sda1                           200M  5.8M  195M   3% /boot/efi
/dev/mapper/systemvg-tmplv          5.0G   69M  5.0G   2% /tmp
/dev/mapper/systemvg-varlv          5.0G  1.3G  3.8G  25% /var
/dev/mapper/systemvg-homelv         2.0G   47M  2.0G   3% /home
/dev/mapper/systemvg-optlv          5.0G  676M  4.4G  14% /opt
/dev/mapper/systemvg-varloglv       5.0G  249M  4.8G   5% /var/log
/dev/mapper/systemvg-kdumplv        2.0G   47M  2.0G   3% /var/crash
/dev/mapper/systemvg-varlogauditlv   10G  327M  9.7G   4% /var/log/audit
tmpfs                               2.4G     0  2.4G   0% /run/user/988
/dev/mapper/datavg-datalv1          610G   11G  600G   2% /data
df -h after I run deployment.yaml:
df -h                                                        yc9611.danskenet.net: Tue Nov 14 10:38:39 2023

Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                             12G     0   12G   0% /dev
tmpfs                                12G     0   12G   0% /dev/shm
tmpfs                                12G  3.3M   12G   1% /run
tmpfs                                12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/systemvg-rootlv         2.0G  217M  1.8G  11% /
/dev/mapper/systemvg-usrlv           10G  2.4G  7.7G  24% /usr
/dev/sda2                          1014M  227M  788M  23% /boot
/dev/sda1                           200M  5.8M  195M   3% /boot/efi
/dev/mapper/systemvg-tmplv          5.0G   69M  5.0G   2% /tmp
/dev/mapper/systemvg-varlv          5.0G  1.3G  3.8G  25% /var
/dev/mapper/systemvg-homelv         2.0G   47M  2.0G   3% /home
/dev/mapper/systemvg-optlv          5.0G  676M  4.4G  14% /opt
/dev/mapper/systemvg-varloglv       5.0G  252M  4.8G   5% /var/log
/dev/mapper/systemvg-kdumplv        2.0G   47M  2.0G   3% /var/crash
/dev/mapper/systemvg-varlogauditlv   10G  327M  9.7G   4% /var/log/audit
tmpfs                               2.4G     0  2.4G   0% /run/user/988
/dev/mapper/datavg-datalv1          610G   13G  598G   3% /data
It uses the /data dir. However, only 4 of the 6 pods are being created:
my-runners   my-runners-l82rl-5rfdp                   2/2   Running   0   73s   10.42.0.51   vm1.host.com   <none>   <none>
my-runners   my-runners-l82rl-76b7b                   2/2   Running   0   73s   10.42.0.50   vm1.host.com   <none>   <none>
my-runners   my-runners-l82rl-gbqvj                   0/2   Pending   0   73s   <none>       <none>         <none>   <none>
my-runners   my-runners-l82rl-l4tm8                   0/2   Pending   0   73s   <none>       <none>         <none>   <none>
my-runners   my-runners-l82rl-qplmp                   2/2   Running   0   73s   10.42.0.49   vm1.host.com   <none>   <none>
my-runners   my-runners-l82rl-wz8sl                   2/2   Running   0   73s   10.42.0.52   vm1.host.com   <none>   <none>
my-runners   local-path-provisioner-d4bbbbbc4-zdkr2   1/1   Running   0   13h   10.42.0.14   vm1.host.com   <none>   <none>
If I request even more ephemeral storage, the status of all the pods changes to "Pending". Describe output of a "Pending" pod:
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  5m2s  default-scheduler  0/1 nodes are available: 1 Insufficient ephemeral-storage. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
The scheduler says "Insufficient ephemeral-storage", while in fact the sdb disk has 600Gi. Am I missing something? Br,
Containers:
  docker:
    Image:      repo.artifactory.host.com/db/docker:dind
    Port:       <none>
    Host Port:  <none>
    Args:
      dockerd
      --host=unix:///run/docker.sock
      --group=$(DOCKER_GROUP_GID)
      --registry-mirror=https://repo.artifactory.host.com
    Environment:
      DOCKER_GROUP_GID:  1001
    Mounts:
      /run from var-run (rw)
      /runner from runner (rw)
      /runner/_work from work (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9cvjc (ro)
  runner:
    Image:      repo.artifactory.host.com/github-runners:v1
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:                250m
      ephemeral-storage:  2Gi
      memory:             512Mi
    Requests:
      cpu:                125m
      ephemeral-storage:  1Gi
      memory:             512Mi
    Environment:
      GIT_SSL_CAINFO:                          /etc/ssl/certs/ca-certificates.crt
      RUNNER_ORG:                              main
      RUNNER_REPO:
      RUNNER_ENTERPRISE:
      RUNNER_LABELS:                           x-medium,medium
      RUNNER_GROUP:                            Default
      DOCKER_ENABLED:                          true
      DOCKERD_IN_RUNNER:                       false
      GITHUB_URL:                              https://my-github.host.com/
      RUNNER_WORKDIR:                          /runner/_work
      RUNNER_EPHEMERAL:                        true
      RUNNER_STATUS_UPDATE_HOOK:               false
      GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT:  actions-runner-controller/v0.27.5
      DOCKER_HOST:                             unix:///run/docker.sock
      RUNNER_NAME:                             my-runners-l82rl-l4tm8
      RUNNER_TOKEN:                            OOOORHERGSEEEW3FKNGJ3AVPNFXHSGERGBERGIASCAWDALHOJQXI2LPNZEW443UMFWGYYLUNFXW4
    Mounts:
      /data from local-path-storage (rw)
      /run from var-run (rw)
      /runner from runner (rw)
      /runner/_work from work (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9cvjc (ro)
local-path-provisioner.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: my-runners

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: my-runners

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "<http://storage.k8s.io|storage.k8s.io>" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: my-runners

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: my-runners
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: repo.artifactory.host.com/rancher/local-path-provisioner:v0.0.24
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: my-runners
data:
  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/data"]
            }
            ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: repo.artifactory.host.com/busybox
        imagePullPolicy: IfNotPresent
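For completeness, the manifest above would be applied and sanity-checked with something like:
kubectl apply -f local-path-provisioner.yaml
kubectl -n my-runners logs deploy/local-path-provisioner     # provisioner logs show volume create/delete under /data
kubectl get storageclass local-path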
It's a single-node test cluster, so here is the node describe output:
k3s kubectl describe node vm1.host.com
Name:               vm1.host.com
Roles:              control-plane,etcd,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vm1.host.com
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/etcd=true
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        etcd.k3s.cattle.io/node-address: 10.154.106.42
                    etcd.k3s.cattle.io/node-name: vm1.host.com-59b00e85
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"2a:ba:81:db:24:4c"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.154.106.42
                    k3s.io/hostname: vm1.host.com
                    k3s.io/internal-ip: 10.154.106.42
                    k3s.io/node-args:
                      ["server","--default-local-storage-path","/data","--data-dir","/data","--resolv-conf","/etc/rancher/k3s/resolv.conf","server","--cluster-i...
                    k3s.io/node-config-hash: 2BWELM2YTLTE5BP6WIQ5GAHGVDK66LCZPMKKL2KUL2NZEY4CVH3Q====
                    k3s.io/node-env:
                      {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/d1373f31227cf763459011fa1123224f72798511c86a56d61a9d4e3c0fa8a0c9","K3S_TOKEN":"********"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 13 Nov 2023 13:10:22 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  vm1.host.com
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Nov 2023 12:17:27 +0100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Nov 2023 12:16:16 +0100   Mon, 13 Nov 2023 13:10:22 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Nov 2023 12:16:16 +0100   Mon, 13 Nov 2023 13:10:22 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Nov 2023 12:16:16 +0100   Mon, 13 Nov 2023 13:10:22 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Nov 2023 12:16:16 +0100   Mon, 13 Nov 2023 15:10:01 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.154.106.42
  Hostname:    vm1.host.com
Capacity:
  cpu:                8
  ephemeral-storage:  5110Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24395328Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  5090312189
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24395328Ki
  pods:               110
System Info:
  Machine ID:                    de701b83c2a54b118c8026665ac8343e
  System UUID:                   61300642-96c7-0c04-eac6-c75f1dc0e165
  Boot ID:                       7dd25424-783e-4267-bf33-dd339e32e296
  Kernel Version:                4.18.0-477.27.1.el8_8.x86_64
  OS Image:                      Red Hat Enterprise Linux 8.8 (Ootpa)
  Operating System:              linux
  Architecture:                  amd64
  Container Runtime Version:     containerd://1.6.15-k3s1
  Kubelet Version:               v1.26.2+k3s1
  Kube-Proxy Version:            v1.26.2+k3s1
PodCIDR:                         10.42.0.0/24
PodCIDRs:                        10.42.0.0/24
ProviderID:                      k3s://vm1.host.com
Non-terminated Pods:             (15 in total)
  Namespace                      Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                      ----                                          ------------  ----------  ---------------  -------------  ---
  actions-runner-system          actions-runner-controller-75957b4bf5-jvnt9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  actions-runner-system          actions-runner-controller-75957b4bf5-jvx5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  cert-manager                   cert-manager-7596bfbf8b-jz4l9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  cert-manager                   cert-manager-cainjector-8545d4c7d4-z2m2c      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  cert-manager                   cert-manager-webhook-746cbcf764-w8lcj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  my-runners  my-runners-l82rl-5rfdp     125m (1%)     250m (3%)   512Mi (2%)       512Mi (2%)     99m
  my-runners  my-runners-l82rl-76b7b     125m (1%)     250m (3%)   512Mi (2%)       512Mi (2%)     99m
  my-runners  my-runners-l82rl-qplmp     125m (1%)     250m (3%)   512Mi (2%)       512Mi (2%)     99m
  my-runners  my-runners-l82rl-wz8sl     125m (1%)     250m (3%)   512Mi (2%)       512Mi (2%)     99m
  my-runners  local-path-provisioner-d4bbbbbc4-zdkr2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
  kube-system                    coredns-5c6b6c5476-sjmkr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23h
  kube-system                    local-path-provisioner-5d56847996-wkqm4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  kube-system                    metrics-server-7b67f64457-pz9vn               100m (1%)     0 (0%)      70Mi (0%)        0 (0%)         23h
  kube-system                    svclb-traefik-b7b5b96e-47fwg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  kube-system                    traefik-56b8c5fb5c-k48vh                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                700m (8%)    1 (12%)
  memory             2188Mi (9%)  2218Mi (9%)
  ephemeral-storage  4Gi (84%)    8Gi (168%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>
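A likely explanation, given the node output above: the node's allocatable ephemeral-storage (~5Gi) is derived from the filesystem backing the kubelet's root dir (/var/lib/kubelet, i.e. the 5.0G /var LV), not from the 600G /data volume, and the four running runner pods already request 4Gi (84%), so a fifth 1Gi request doesn't fit. A quick way to compare what the scheduler sees with what df reports:
# Allocatable ephemeral-storage in bytes vs. the filesystems behind it (hostname as above).
kubectl get node vm1.host.com -o jsonpath='{.status.allocatable.ephemeral-storage}'
df -h /var/lib/kubelet /data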
Should this not be /data rather than "K3S_DATA_DIR":"/var/lib/rancher/k3s/data/..."? Maybe use "K3S_DATA_DIR=" on install...
I didn't specify K3S_DATA_DIR:
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=TOKEN INSTALL_K3S_EXEC="--default-local-storage-path=/data --data-dir=/data --resolv-conf=/etc/rancher/k3s/resolv.conf" sh install.sh server --cluster-init
So, do you suggest using this instead?
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=TOKEN "K3S_DATA_DIR=" INSTALL_K3S_EXEC="--default-local-storage-path=/data --data-dir=/data --resolv-conf=/etc/rancher/k3s/resolv.conf" sh install.sh server --cluster-init
Or K3S_DATA_DIR=/data, not really sure...
I stopped the k3s systemd service and killed all k3s processes, then ran:
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=TOKEN K3S_DATA_DIR=/data INSTALL_K3S_EXEC="--default-local-storage-path=/data --data-dir=/data --resolv-conf=/etc/rancher/k3s/resolv.conf" sh install.sh server --cluster-init
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=TOKEN K3S_DATA_DIR="/data" INSTALL_K3S_EXEC="--default-local-storage-path=/data --data-dir=/data --resolv-conf=/etc/rancher/k3s/resolv.conf" sh install.sh server --cluster-init
But neither command changed anything. I believe K3S_DATA_DIR is equivalent to --data-dir=/data, so when we use --data-dir the K3S_DATA_DIR default value should change, but it doesn't.
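One way to check what the installed service actually ended up with (paths per the default systemd install; sketch only):
systemctl cat k3s                              # ExecStart shows the flags baked into the unit
cat /etc/systemd/system/k3s.service.env        # env vars written by install.sh
ls -ld /data/server /data/agent 2>/dev/null    # created by k3s when --data-dir=/data is in effect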
I'm not really sure TBH...
Ok, no worries, thanks. @creamy-pencil-82913 maybe you could check, please?
I'm using MicroOS with k3s along with SELinux. I do note that when monitoring is installed, it also creates its info in /var/lib/kubelet, so I'm not sure that would wind up in your /data dir?
Hi @bored-farmer-36655, I think I resolved the issue. The problem was that I had only symlinked:
ln -s /data/k3s/ /run/k3s
ln -s /data/k3s-pods/ /var/lib/kubelet/pods
ln -s /data/k3s-rancher/ /var/lib/rancher
But /var/lib/kubelet was still on the sda disk. So I stopped the k3s service, killed all its processes with k3s-killall.sh, and moved /var/lib/kubelet to /data/k3s-kubelet:
mv /var/lib/kubelet/ /data/k3s-kubelet/
and symlinked it:
ln -s /data/k3s-kubelet/ /var/lib/kubelet
Then ephemeral storage for the cluster node changed to:
Capacity:
  cpu:                8
  ephemeral-storage:  639570948Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24395328Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  622174617727
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             24395328Ki
  pods:               110
After that, I created the PVC and edited my deployment to use it. That's basically it. 😊
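An alternative to symlinking /var/lib/kubelet, untested here, would be to relocate the kubelet's root dir at install time via k3s's --kubelet-arg pass-through (same flags as before, path illustrative):
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=TOKEN INSTALL_K3S_EXEC="--default-local-storage-path=/data --data-dir=/data --kubelet-arg=root-dir=/data/kubelet --resolv-conf=/etc/rancher/k3s/resolv.conf" sh install.sh server --cluster-init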
Good job! You might want to look at creating actual partitions and adding them as mount points next time?
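For example (illustrative LV name on the existing datavg; a separate partition or bind mount works the same way):
# Stop k3s and move any existing contents before mounting over /var/lib/kubelet.
lvcreate -L 200G -n kubeletlv datavg
mkfs.xfs /dev/datavg/kubeletlv
echo '/dev/mapper/datavg-kubeletlv /var/lib/kubelet xfs defaults 0 0' >> /etc/fstab
mount /var/lib/kubelet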
Yes, good point, I'll have to give it a try πŸ˜„