quiet-area-89381
08/07/2023, 5:06 PM
Failed to save detach error to "csi-cf8e26e84124ff4cbbc0598e7949327d8d5f8fbbe83996f006043e22eb70360b": volumeattachments.storage.k8s.io "csi-cf8e26e84124ff4cbbc0598e7949327d8d5f8fbbe83996f006043e22eb70360b" not found
I can't figure out:
• whether it's the root cause of other, later attachment failures on the same PVC/PV
• why this error shows up
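(A quick cross-check sketch: list the VolumeAttachment objects and inspect the one named in the error; if it has already been cleaned up, the second command will simply repeat the "not found".)
kubectl get volumeattachments.storage.k8s.io
kubectl get volumeattachments.storage.k8s.io csi-cf8e26e84124ff4cbbc0598e7949327d8d5f8fbbe83996f006043e22eb70360b -o yaml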
quiet-area-89381
08/10/2023, 4:46 PM
lost+found
and when I check on the kubelet, that folder shows an input/output error, like in the Kubernetes event.
sudo ls -al "/var/snap/microk8s/common/var/lib/kubelet/pods/8721351a-d14e-4da3-81a9-b3e06a701d04/volumes/kubernetes.io~csi/pvc-2e1be23b-4021-45d2-878b-d37cd71679dc/mount/lost+found"
ls: reading directory '/var/snap/microk8s/common/var/lib/kubelet/pods/8721351a-d14e-4da3-81a9-b3e06a701d04/volumes/kubernetes.io~csi/pvc-2e1be23b-4021-45d2-878b-d37cd71679dc/mount/lost+found': Input/output error
The related Longhorn volume and replicas are not showing any errors.
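(The Input/output error on lost+found points at filesystem corruption inside the volume rather than at the replicas themselves. A repair sketch, assuming an ext4 filesystem: scale the workload down so the volume detaches, re-attach it in maintenance mode, then run fsck against the block device on the node it attaches to, read-only first and then preen mode.)
sudo fsck.ext4 -fn /dev/longhorn/pvc-2e1be23b-4021-45d2-878b-d37cd71679dc   # report only, no changes
sudo fsck.ext4 -fp /dev/longhorn/pvc-2e1be23b-4021-45d2-878b-d37cd71679dc   # auto-fix safe problems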
quiet-area-89381
08/10/2023, 5:31 PM
- name: KUBELET_ROOT_DIR
value: /var/snap/microk8s/common/var/lib/kubelet
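(In the Helm chart this path is usually driven by the csi.kubeletRootDir value; a quick check of what the driver deployer was actually given, sketched:)
kubectl -n longhorn-system get deploy longhorn-driver-deployer -o yaml | grep -A1 KUBELET_ROOT_DIR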
I'd like to at least use the last 1.4.x version.
quiet-area-89381
08/11/2023, 3:03 AM
helm upgrade longhorn ./longhorn/chart/ \
--namespace longhorn-system \
--values values.yaml
I have the Chart.yaml, questions.yaml and values.yaml in the chart directory, from the 1.5.1 tag on GitHub.
In my values.yaml I only customize the kubeletRootDir and the log level.
Helm didn't complain and executed. Now I have no Longhorn UI, all containers have the same version, and one provisioner is stuck in a crash loop.
I definitely need help here...
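(One possible recovery path, offered as a hedged sketch rather than a confirmed fix: upgrade from the published chart repository, which ships the full set of templates, instead of a partial local chart directory. The version and the MicroK8s kubelet path below are taken from this thread; everything else is standard Helm usage.)
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --version 1.5.1 \
  --set csi.kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet
kubectl -n longhorn-system get pods   # then check whether the provisioner recovers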
bitter-diamond-88656
08/19/2023, 1:15 AM
$ kubectl get deployment -n longhorn-system
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
longhorn-ui                1/1     1            1           54d
longhorn-driver-deployer   1/1     1            1           54d
csi-attacher               3/3     3            3           3h27m
csi-provisioner            3/3     3            3           3h26m
csi-resizer                3/3     3            3           3h26m
csi-snapshotter            3/3     3            3           3h26m
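(For completeness, longhorn-manager and the CSI plugin run as DaemonSets rather than Deployments, so they will not appear in a Deployment listing; a companion check, sketched:)
kubectl get daemonset -n longhorn-system
kubectl get pods -n longhorn-system -o wide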
late-needle-80860
08/23/2023, 5:11 PM
Volume/Volume's is attached.
Is it the same thing for the Create Backup option in the Volume tab of the Longhorn UI? If I click on a Detached Volume, that button is greyed out, whereas for an Attached Volume it’s available.
Thank you very much
nice-tent-65195
08/31/2023, 11:23 AM
time="2023-08-31T09:22:42Z" level=info msg="Found 22 backups in the backup target that do not exist in the cluster and need to be pulled" backupVolume=pvc-609e82a6-853e-43c7-b2d4-cabad74304a6 controller=longhorn-backup-volume node=tst-agent3
time="2023-08-31T09:22:42Z" level=info msg="Listening on 10.42.5.213:9500" node=tst-agent3
time="2023-08-31T09:22:42Z" level=info msg="Debug Server listening on 127.0.0.1:6060" node=tst-agent3
time="2023-08-31T09:22:42Z" level=info msg="Cron is changed from to 0 0 */7 * *. Next snapshot check job will be executed at 2023-09-01 00:00:00 +0000 UTC" controller=longhorn-node monitor="snapshot monitor" node=tst-agent3
E0831 09:22:43.354415 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 905 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1d7d280, 0x3566c30})
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000fa6c90})
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x1d7d280, 0x3566c30})
/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/longhorn/longhorn-manager/controller.(*BackupVolumeController).reconcile(0xc000621f80, {0xc000ad08d0, 0x28})
/go/src/github.com/longhorn/longhorn-manager/controller/backup_volume_controller.go:326 +0x1f2d
github.com/longhorn/longhorn-manager/controller.(*BackupVolumeController).syncHandler(0xc000621f80, {0xc000ad08c0, 0x8})
/go/src/github.com/longhorn/longhorn-manager/controller/backup_volume_controller.go:146 +0x118
github.com/longhorn/longhorn-manager/controller.(*BackupVolumeController).processNextWorkItem(0xc000621f80)
/go/src/github.com/longhorn/longhorn-manager/controller/backup_volume_controller.go:128 +0xdb
github.com/longhorn/longhorn-manager/controller.(*BackupVolumeController).worker(...)
/go/src/github.com/longhorn/longhorn-manager/controller/backup_volume_controller.go:118
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fc3a3ea7e18)
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005aa180, {0x234da20, 0xc002c19d70}, 0x1, 0xc0005aa180)
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bab7b0, 0x3b9aca00, 0x0, 0x0, 0xc000bab7d0)
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x8506aa, 0xc000125d40, 0x0)
/go/src/github.com/longhorn/longhorn-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by github.com/longhorn/longhorn-manager/controller.(*BackupVolumeController).Run
/go/src/github.com/longhorn/longhorn-manager/controller/backup_volume_controller.go:112 +0x208
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x18d99cd]
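(The panic happens inside BackupVolumeController.reconcile at backup_volume_controller.go:326, i.e. while reconciling a BackupVolume object. A diagnostic sketch, not a confirmed fix: list the backup-related custom resources the controller is pulling in and inspect the volume named in the log above for missing or partial status.)
kubectl -n longhorn-system get backupvolumes.longhorn.io
kubectl -n longhorn-system get backups.longhorn.io
kubectl -n longhorn-system get backupvolumes.longhorn.io pvc-609e82a6-853e-43c7-b2d4-cabad74304a6 -o yaml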
millions-engine-33820
09/06/2023, 12:23 PM
/dev/longhorn/pvc-9a6a41b0-72f3-4c69-baa0-08b4ddf79fa0 is mounted; will not make a filesystem here!
We are running on Longhorn 1.4.3.
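(A quick check for that "is mounted; will not make a filesystem here" refusal, sketched: confirm where the device is still mounted before anything tries to format it.)
findmnt --source /dev/longhorn/pvc-9a6a41b0-72f3-4c69-baa0-08b4ddf79fa0
lsblk /dev/longhorn/pvc-9a6a41b0-72f3-4c69-baa0-08b4ddf79fa0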
quiet-area-89381
09/07/2023, 6:56 PM
docker run \
-v /dev:/host/dev \
-v /proc:/host/proc \
-v ${REPLICA_PATH}:/volume \
--privileged \
longhornio/longhorn-engine:v1.5.1 \
launch-simple-longhorn ${VOLUME_ID} ${VOLUME_SIZE}
2 questions:
• Is there an equivalent for crictl? (see the rough ctr sketch below)
• To avoid --privileged, can it be done in a totally different folder than /dev, or does it need devfs?
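(On the first question: crictl has no one-shot equivalent of docker run; it expects pod and container JSON specs. On a containerd-based node such as MicroK8s, ctr is the closest thing; the flags below are an untested sketch, not a documented Longhorn procedure, and it still needs --privileged for the /dev access.)
sudo microk8s ctr image pull docker.io/longhornio/longhorn-engine:v1.5.1
sudo microk8s ctr run --rm --privileged \
  --mount type=bind,src=/dev,dst=/host/dev,options=rbind:rw \
  --mount type=bind,src=/proc,dst=/host/proc,options=rbind:ro \
  --mount type=bind,src=${REPLICA_PATH},dst=/volume,options=rbind:rw \
  docker.io/longhornio/longhorn-engine:v1.5.1 longhorn-engine-debug \
  launch-simple-longhorn ${VOLUME_ID} ${VOLUME_SIZE}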
ambitious-dream-97559
09/14/2023, 10:31 AM
Terminating state and new pods are provisioned; however, Longhorn is unable to attach the volume replica to the newly created pod because Kubernetes reports that it is still attached to the Terminating pod.
Any suggestions for how we might overcome this are welcome 🙂
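(A commonly used, if heavy-handed, workaround sketch; the names in angle brackets are placeholders: confirm which VolumeAttachment still references the PV, then force-delete the stuck Terminating pod so the attachment can be released.)
kubectl get volumeattachments.storage.k8s.io | grep <pv-name>
kubectl -n <namespace> delete pod <stuck-pod> --grace-period=0 --force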