bright-fireman-42144
12/27/2022, 6:26 PM
sticky-summer-13450
12/28/2022, 4:17 PM
resource "harvester_virtualmachine" "ubuntu2204" {
  name                 = "ubuntu2204"
  namespace            = "default"
  restart_after_update = true
  description          = "test ubuntu2203 raw image terraform"
  tags = {
    ssh-user = "ubuntu"
  }
  cpu          = 2
  memory       = "2Gi"
  efi          = false
  secure_boot  = false
  run_strategy = "RerunOnFailure"
  hostname     = "ubuntu2204"
  machine_type = "q35"
  network_interface {
    name           = "nic-1"
    network_name   = "vlan648"
    model          = "virtio"
    type           = "bridge"
    wait_for_lease = false
  }
  disk {
    name        = "rootdisk"
    size        = "10Gi"
    type        = "disk"
    bus         = "virtio"
    boot_order  = 1
    image       = "jammy-server-cloudimg-amd64.img"
    auto_delete = true
  }
  cloudinit {
    user_data = <<-EOF
      #cloud-config
    EOF
    ...
    network_data = <<-EOF
      network:
        version: 1
        config:
          ...
    EOF
  }
}
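As a quick sanity check after terraform apply, the VM created by this resource can be inspected through the underlying KubeVirt objects (a hedged sketch, assuming kubectl access to the Harvester cluster; names match the example above):
kubectl -n default get vm ubuntu2204    # VirtualMachine object backing the Terraform resource
kubectl -n default get vmi ubuntu2204   # running VirtualMachineInstance once the VM has started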
best-hair-19862
12/28/2022, 9:53 PM
quaint-alarm-7893
12/30/2022, 6:05 PM
salmon-afternoon-72196
12/31/2022, 6:16 PM
billions-diamond-53716
01/01/2023, 3:58 AM
flat-evening-58664
01/02/2023, 12:25 PM
bright-fireman-42144
01/03/2023, 1:18 AM
little-dress-13576
01/03/2023, 4:01 PM
salmon-afternoon-72196
01/03/2023, 6:27 PM
wonderful-pizza-30919
01/03/2023, 9:05 PM
quaint-alarm-7893
01/03/2023, 9:25 PM
Node harvester-01 is ready 2.2 mins ago
Node harvester-01 is ready 2.2 mins ago
Node harvester-01 is ready 2.2 mins ago
Node harvester-01 is ready 2.2 mins ago
Node harvester-01 is down: the manager pod longhorn-manager-fdc28 is not running 2.2 mins ago
Node harvester-01 is down: the manager pod longhorn-manager-fdc28 is not running 2.2 mins ago
Node harvester-01 is down: the manager pod longhorn-manager-fdc28 is not running 2.2 mins ago
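When an alert like the one above fires, a first step is usually to look at the manager pod itself (the pod name is taken from the alert; the label selector assumes a standard Longhorn install in the longhorn-system namespace):
kubectl -n longhorn-system get pods -l app=longhorn-manager -o wide   # one manager pod per node
kubectl -n longhorn-system describe pod longhorn-manager-fdc28        # events explaining why it is not running
kubectl -n longhorn-system logs longhorn-manager-fdc28 --previous     # logs from the last crashed container, if any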
quaint-alarm-7893
01/03/2023, 9:25 PM
big-judge-33880
01/04/2023, 1:45 PM
harvesterhci.io/storageClassName=fast-replicas-3 (provider Longhorn, selects disks in Longhorn with the “fast” tag) and one new volume with the same storage class. It gets scheduled to node 5, while the volumes get attached to node 5 and node 2 respectively, and the VM then fails to mount the volume that is attached to a different node. Is this a bug, or am I doing something obviously stupid?
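One way to see where each piece actually landed, i.e. which node the VM pod was scheduled to versus which node each volume is attached to (namespaces and selectors below are the Kubernetes/Longhorn defaults, not taken from this thread):
kubectl get volumeattachments.storage.k8s.io          # which node each PersistentVolume is attached to
kubectl -n longhorn-system get volumes.longhorn.io    # Longhorn's view: state, robustness, attached node
kubectl get pods -A -o wide | grep virt-launcher      # node the VM's launcher pod was scheduled to, for comparison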
stale-painting-80203
01/05/2023, 5:02 PM
kubectl describe sc
Name:                  harvester
IsDefaultClass:        Yes
Annotations:           meta.helm.sh/release-name=harvester-csi-driver,meta.helm.sh/release-namespace=kube-system,storageclass.kubernetes.io/is-default-class=true
Provisioner:           driver.harvesterhci.io
Parameters:            <none>
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
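VolumeBindingMode: Immediate means a PVC is bound, and the backing volume placed, before the consuming pod is scheduled, which is how a volume and its workload can end up on different nodes. A minimal sketch of a StorageClass that defers binding until the pod has a node; the class name is hypothetical, and mapping it to a specific Harvester-side storage class such as fast-replicas-3 would need a driver parameter that is not shown in this thread:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harvester-wffc                      # hypothetical name
provisioner: driver.harvesterhci.io
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer     # bind/attach only once the pod is scheduled to a node
EOF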
cuddly-vegetable-29975
01/07/2023, 11:15 AM
salmon-bear-45866
01/10/2023, 4:27 AM
salmon-bear-45866
01/10/2023, 4:42 AM
adorable-exabyte-35533
01/11/2023, 4:40 AM
full-crayon-745
01/11/2023, 10:36 AM
lively-zebra-61132
01/12/2023, 2:52 PM
post-draining status. At some point during that process a VM was spawned which got stuck in ContainerCreating because of a problem with the Longhorn volume it was trying to mount. I deleted the VM launcher, the PVC and the post-drain job (because it was making it impossible to delete the VM + PVC). Now I want to restart the upgrade, but when trying
kubectl delete upgrade.harvesterhci.io/hvst-upgrade-gxc4t -n harvester-system
it fails with
Error from server (BadRequest): admission webhook "validator.harvesterhci.io" denied the request: cluster fleet-local/local status is provisioning, please wait for it to be provisioned
Is there anything I can do to fix this?
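The webhook refuses the delete because Rancher still reports the local cluster as provisioning. A possible way to see what it is waiting on and to retry once it settles (object names below are the standard Rancher/Harvester ones; only hvst-upgrade-gxc4t comes from this thread):
kubectl -n fleet-local get clusters.provisioning.cattle.io local -o yaml   # inspect conditions to see why it is not yet provisioned
kubectl -n harvester-system get upgrades.harvesterhci.io                   # list upgrade objects once the cluster settles
kubectl -n harvester-system delete upgrades.harvesterhci.io hvst-upgrade-gxc4t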
quaint-alarm-7893
01/12/2023, 3:36 PM
quaint-alarm-7893
01/12/2023, 3:42 PM
magnificent-vr-88571
01/16/2023, 3:13 AM
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-38d9351f-8f48-435e-bc92-e796031a92ef   800Gi      RWO            Delete           Bound    kubeflow/minio-pvc   longhorn                108d
In actual usage, 800Gi is allocated but only 415Gi is used, as seen on the server where the volume is mounted:
>> lsblk
sdc 8:32 0 800G 0 disk /var/lib/kubelet/pods/d4ea8ad2-6cc9-4e00-bd29-25ff5e8678bc/volume-subpaths/pvc-38d9351f-8f48-435e-bc92-e796031a92ef/minio/0
>> /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-38d9351f-8f48-435e-bc92-e796031a92ef/globalmount/minio# du -sh
415G .
On the server holding the replica, however, the replica directory uses 658G:
>> /var/lib/longhorn/replicas/pvc-38d9351f-8f48-435e-bc92-e796031a92ef-2dbe6c38# du -sh
658G .
I would like to know how we can reduce the 658G on the replica server down to the actual usage of 415Gi.
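Longhorn replicas are sparse files that grow with every block ever written and do not shrink when files are deleted inside the guest filesystem, so the 658G reflects historical writes rather than current usage. A hedged sketch of reclaiming the space, assuming a Longhorn version with filesystem trim support (1.4 or later); the pod name and mount path are placeholders:
kubectl -n kubeflow exec <minio-pod> -- fstrim -v <mount-path>   # <minio-pod> and <mount-path> are placeholders for the pod mounting the PVC and its mount point
# On older Longhorn versions without trim support, space is generally only reclaimed by copying the data onto a freshly created volume.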
sparse-apple-20577
01/16/2023, 8:35 PM
important-cat-93045
01/17/2023, 2:16 AM
full-crayon-745
01/17/2023, 11:06 AM
rhythmic-jelly-81455
01/17/2023, 12:42 PM
melodic-receptionist-94532
01/18/2023, 5:47 PM
wonderful-pizza-30919
01/19/2023, 2:34 AM