quaint-alarm-7893 (02/10/2023, 6:19 AM)
note, it shows it's attached to two instance managers. i've tried rebooting the nodes and deleting the instance manager pods, no luck...
State: Attached
Health: Degraded (no node redundancy: all the healthy replicas are running on the same node)
Ready for workload: Not Ready
Conditions: restore scheduled
Frontend: Block Device
Attached Node & Endpoint:
Size: 100 Gi
Actual Size: Unknown
Data Locality: disabled
Access Mode: ReadWriteMany
Backing Image: vdi-image-hzwdh
Backing Image Size: 50 Gi
Engine Image: longhornio/longhorn-engine:v1.3.2
Created: a month ago
Encrypted: False
Node Tags:
Disk Tags:
Last Backup:
Last Backup At:
Replicas Auto Balance: ignored
Instance Manager: instance-manager-e-c2f010c0, instance-manager-e-50d28d2d
Last time used by Pod: 10 hours ago
Namespace: vdi
PVC Name: vmname-rootdisk-ossjw
PV Name: pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a
PV Status: Bound
Revision Counter Disabled: False
Last Pod Name: virt-launcher-vmname-mpbn2
Last Workload Name: vmname
Last Workload Type: VirtualMachineInstance
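The same details shown in the UI can be pulled from the CLI. A minimal sketch, assuming Longhorn runs in the default `longhorn-system` namespace and that kubectl has access to the cluster; the `longhornvolume` label is how Longhorn tags engine and replica objects, but it's worth verifying with `kubectl get engines --show-labels` if unsure:

```shell
# Inspect the Longhorn volume CR backing this PV
# (namespace assumed: longhorn-system)
kubectl -n longhorn-system get volume \
  pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a -o yaml

# List the engine and replica CRs for this volume to see which
# instance managers and nodes they are scheduled on
kubectl -n longhorn-system get engines,replicas \
  -l longhornvolume=pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a
```

These commands only read state; nothing here mutates the volume.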
narrow-egg-981970 (02/11/2023, 3:00 AM)
quaint-alarm-7893 (02/11/2023, 6:39 PM)
narrow-egg-981970 (02/14/2023, 4:18 PM)
can i just delete the pvc without losing the data? i guess my worry is that if i delete the pvc, the data would be lost. sounds like that's the workaround, though.

Is it workable to create a backup for this volume and then redeploy the related workloads, e.g., the Longhorn volume, PVC, PV, and Pods?
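If the backup-and-redeploy route is taken, one possible sequence looks like the sketch below. This assumes a backup target is already configured in Longhorn's settings, and uses the names from this thread; do not delete anything until the backup is visible and verified under Backup in the Longhorn UI:

```shell
# 1. Take a backup of the volume first (via the Longhorn UI for v1.3.x:
#    Volume -> Create Backup). Verify it appears under Backup.

# 2. Stop the workload so nothing holds the volume open
kubectl -n vdi delete pod virt-launcher-vmname-mpbn2

# 3. Delete the PVC and PV. The data then lives only in the backup,
#    which is why step 1 must be verified first.
kubectl -n vdi delete pvc vmname-rootdisk-ossjw
kubectl delete pv pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a

# 4. Restore the backup to a new volume from the Longhorn UI, create a
#    PV/PVC from it (the UI has a "Create PV/PVC" action), and restart
#    the VM so it binds to the restored claim.
```

This is an outline under those assumptions, not a verified procedure; the ordering matters because deleting the PVC/PV before the backup is confirmed would lose the data.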
quaint-alarm-7893 (02/14/2023, 4:20 PM)
i see the two engine instances

gauravm [6:13 PM]
i am just checking if one can be deleted safely
(⎈|default:longhorn-system)➜ harvester-758d56647d-wlgj2 k get engine | grep pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a
pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a-e-4596c93c   error   harvester-05   instance-manager-e-c2f010c0   4d7h
pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a-e-6591f94e   error   harvester-03   instance-manager-e-50d28d2d   5d