harvester
  • a

    adventurous-engine-93026

    11/22/2022, 1:34 AM
    How do I restore my 1.0.3 VM backups to my 1.1.0 cluster? (I tried upgrading 1.0.3 and it had issues, so I rebuilt it). I've configured the S3 backup target, and it sounds like the list should sync, but I don't see anything.
    • 2
    • 8
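    A minimal sketch for checking whether the backup metadata has synced, assuming kubeconfig access to the new cluster (the resource names below are the stock Harvester/Longhorn CRDs):
    # confirm the backup target setting took effect
    $ kubectl get settings.harvesterhci.io backup-target -o yaml
    # VM backups that have been synced from the S3 target
    $ kubectl get virtualmachinebackups.harvesterhci.io -A
    # the underlying Longhorn view of backups on the target
    $ kubectl get backups.longhorn.io -n longhorn-system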
  • m

    magnificent-vr-88571

    11/22/2022, 8:54 AM
    Is it possible to install Harvester on a physical server managed by MAAS (https://maas.io/)?
    👀 1
    • 1
    • 1
  • j

    jolly-electrician-10061

    11/22/2022, 3:34 PM
    Hey all, I'm trying to do a "Rancher in Harvester" install, but I'm having trouble getting Harvester to register to Rancher. I've set the cluster_registration_url, but I get the error "Client.Timeout exceeded while awaiting headers". If I log into the Harvester node I can curl the URL just fine, albeit with a self-signed cert. I'm running Harvester 1.1.0 and Rancher 2.7.0. Has anyone got any hints or advice on how to debug? I'm very new to Harvester.
    • 2
    • 2
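    A hedged debugging sketch for this kind of registration timeout: the request is made by the cattle-cluster-agent pod, so curl working from the node does not guarantee the pod can reach Rancher. Assuming kubeconfig access on the Harvester cluster; rancher.example.com is a placeholder:
    # the agent that performs the registration (if it has been deployed yet)
    $ kubectl -n cattle-system get pods
    $ kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=100
    # reproduce the request the agent makes; -k tolerates the self-signed cert
    $ curl -vk https://rancher.example.com/ping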
  • j

    jolly-camera-40096

    11/23/2022, 10:34 PM
    Total newbie here. I've been installing a 1.1.0 cluster for a couple of days now (just familiarizing myself) and have a couple of VMs going, etc. However, I realized that Secure Boot and TPM were not enabled on my Supermicro servers (SYS-E300-9D). When I enabled Secure Boot I got an error (expected), but when I added the bootx64.efi file to the DB, the nodes just boot to GRUB and I can't seem to get any of them to boot after that. Has anyone encountered this issue? I thought it might be an openSUSE MicroOS issue, so I blew away one of the nodes to test; I only get this GRUB issue with Harvester, not with Fedora, openSUSE, ESXi v8, etc. Any guidance much appreciated.
    • 2
    • 5
  • c

    chilly-jewelry-23407

    11/24/2022, 12:45 AM
    My group provisions VMs with PXE (just like bare metal). When creating a VM, I don't see this as an option. Example boot order: Network, CD, Disk. Is PXE not an option? Thank you!
    • 2
    • 4
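    The UI may not expose PXE, but the underlying KubeVirt VirtualMachine spec accepts a bootOrder on interfaces as well as disks, so network-first boot can be tried by editing the VM YAML directly. A minimal sketch (names and values are placeholders):
    # excerpt of spec.template.spec.domain.devices in a VirtualMachine
    interfaces:
      - name: nic-0
        bridge: {}
        bootOrder: 1      # try network (PXE) boot first
    disks:
      - name: rootdisk
        disk:
          bus: virtio
        bootOrder: 2      # fall back to the disk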
  • b

    big-judge-33880

    11/25/2022, 12:15 PM
    Is there a preferred way to seed the certificate for the VIP during install?
    • 1
    • 1
  • w

    wonderful-pizza-30919

    11/27/2022, 1:04 PM
    Hi, can Harvester be used for VDI, like KVM or Horizon View? Thank you.
    • 2
    • 6
  • b

    big-judge-33880

    11/27/2022, 4:25 PM
    Just stood up a new 1.1.0 node, and the harvester-cluster-repo pod fails due to:
    Failed to pull image "rancher/harvester-cluster-repo:v1.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/harvester-cluster-repo:v1.1.0": failed to resolve reference "docker.io/rancher/harvester-cluster-repo:v1.1.0": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
    Did someone just nuke the repo?
    • 2
    • 4
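    For reference, the harvester-cluster-repo image is normally preloaded from the install ISO, so a pull error like this usually means the node is falling back to Docker Hub, where that tag is not published. A quick check, assuming SSH access to the node (the namespace and pod name below are looked up rather than assumed):
    # find the failing pod and read its events for the exact image reference
    $ kubectl get pods -A | grep harvester-cluster-repo
    $ kubectl -n <namespace> describe pod <pod-name>
    # is the image already present in containerd on the node?
    $ sudo crictl images | grep harvester-cluster-repo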
  • b

    big-judge-33880

    11/28/2022, 2:03 PM
    Does anyone here run a cluster with MTU > 1500?
    👀 1
    • 3
    • 2
  • l

    lively-translator-30710

    11/29/2022, 3:40 PM
    We are trying to spin up clusters from Rancher on a three-node Harvester cluster, but they never seem to finish starting up; they end up in the endless loop below. Is there anything specific we should be looking for to troubleshoot this? Are there any timeouts we can increase, if it is just taking too long? Anything else?
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for agent to check in and apply initial plan
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: calico
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler, kubelet
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-controller-manager, kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-scheduler
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for probes: kube-apiserver
    [INFO ] configuring bootstrap node(s) pax-pool1-cb9c56f68-l9fss: waiting for cluster agent to connect
    • 4
    • 25
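    A hedged sketch of where to dig when the probes loop like this: the messages come from Rancher's provisioning engine, but the detail lives on the guest VM in the rancher-system-agent and rke2-server journals. Assuming SSH access to the pax-pool1 node and kubeconfig access to the managing Rancher:
    # from the Rancher (local) cluster: provisioning and machine status
    $ kubectl get clusters.provisioning.cattle.io -A
    $ kubectl get machines.cluster.x-k8s.io -A
    # from inside the guest node that is bootstrapping
    $ journalctl -u rancher-system-agent -f
    $ journalctl -u rke2-server -f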
  • a

    alert-leather-47297

    12/01/2022, 12:42 PM
    Hi, I'd like to know if there is a known issue installing Harvester 1.1.0 from .iso via virtual media on a server. I've been trying to install 1.1.0 and it keeps failing at this stage (see screenshot); I tried on multiple servers. I don't see that issue when installing on a virtual machine. Would there be a way to see more logs from the install process to know why it stalls there?
    • 6
    • 16
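    A generic way to get more detail out of a stalled install, assuming the installer environment exposes additional virtual consoles (this is a sketch, not Harvester-specific guidance):
    # switch to another virtual console (e.g. Ctrl+Alt+F2), then:
    $ dmesg | tail -n 50      # kernel-level errors: disk, virtual-media resets, etc.
    $ journalctl -f           # follow service logs if systemd-journald is running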
  • p

    prehistoric-balloon-31801

    12/02/2022, 6:28 AM
    Hi channel, we've released Harvester v1.1.1. Please check https://github.com/harvester/harvester/releases/tag/v1.1.1 for the release notes. If you have a running v1.0.3 cluster, we'd suggest upgrading it to v1.1.1 directly (skipping v1.1.0).
    👍 3
  • k

    kind-waitress-15815

    12/02/2022, 9:24 AM
    I'm trying to get a "test production environment" up and running. The issue is bootstrapping. I have a "clean" Harvester environment up and running, far exceeding the recommended system requirements. The goal is simple: a Rancher environment running on Harvester. There's a catch-22: you need a Rancher environment to deploy RKE2 on Harvester, and once you deploy RKE2 on Harvester, if you attempt to deploy Rancher on that RKE2, the Rancher you used to deploy RKE2 loses connectivity and the Rancher fails to deploy.
    • 2
    • 2
  • k

    kind-waitress-15815

    12/02/2022, 9:25 AM
    Is there a good recommended method of deploying Rancher on Harvester... which would in turn manage the Harvester cluster?
    • 3
    • 7
  • l

    limited-magazine-85005

    12/03/2022, 12:03 AM
    Hi there, I've been running Harvester in a home lab (4 machines, each with 8 cores, 32 GB RAM, and a 1 TB SSD) since 1.0.3. The upgrade to 1.1.0 was fine, but I attempted the upgrade to 1.1.1 and the UI is now just giving me a "Handler Disconnected" error, even after waiting and letting it sit overnight. Has anybody else run into this? Any place I can start troubleshooting?
  • l

    limited-magazine-85005

    12/03/2022, 12:06 AM
    Checking dmesg, there are OOM errors:
    [ 1061.946668] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=cri-containerd-c3aa5aaa98e869f97a404b8a1bc27cbdfbdcd85960dcb021930bceffe70c6061.scope,mems_allowed=0,oom_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568dc473_4171_4b3f_9e8f_61ad43186fa1.slice/cri-containerd-c3aa5aaa98e869f97a404b8a1bc27cbdfbdcd85960dcb021930bceffe70c6061.scope,task_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568dc473_4171_4b3f_9e8f_61ad43186fa1.slice/cri-containerd-c3aa5aaa98e869f97a404b8a1bc27cbdfbdcd85960dcb021930bceffe70c6061.scope,task=ruby,pid=11345,uid=100
    [ 1061.946678] Memory cgroup out of memory: Killed process 11345 (ruby) total-vm:955300kB, anon-rss:771480kB, file-rss:6780kB, shmem-rss:0kB
  • l

    limited-magazine-85005

    12/03/2022, 12:14 AM
    Any suggestions on which pods/logs to check?
    • 2
    • 1
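    Given the memcg OOM kills, a reasonable first pass is to see which pods sit closest to their memory limits and whether the killed "ruby" process belongs to the logging stack (fluentd is ruby-based). A sketch, assuming metrics-server is available:
    # current memory use, highest first
    $ kubectl top pods -A --sort-by=memory | head -n 20
    # the logging stack is the usual ruby workload on a stock install
    $ kubectl -n cattle-logging-system get pods
    # "Handler Disconnected" in the UI usually surfaces from the embedded Rancher pods
    $ kubectl -n cattle-system logs deploy/rancher --tail=100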
  • f

    fast-oyster-13791

    12/03/2022, 8:44 PM
    When restoring a backup from another cluster (1.1.0) to a new clean-install cluster (1.1.1), the disk restore completes but no VM is created.
  • f

    fast-oyster-13791

    12/03/2022, 8:44 PM
    Any suggestions?
  • f

    fast-oyster-13791

    12/03/2022, 8:55 PM
    Issue found: the available networks don't match the network name in the backup.
    🙌 2
    • 2
    • 1
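    For reference, a mismatch like this can be spotted by comparing the networks defined on the target cluster against the network name recorded in the backup's saved VM spec. A sketch:
    # networks available on the target cluster (Multus NetworkAttachmentDefinitions)
    $ kubectl get network-attachment-definitions.k8s.cni.cncf.io -A
    # network names recorded inside the synced backups
    $ kubectl get virtualmachinebackups.harvesterhci.io -A -o yaml | grep -i networkName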
  • e

    echoing-easter-77539

    12/04/2022, 3:12 AM
    Has anyone had success running Harvester on an underpowered system? I just tried on a 4-core (i5) machine with 16 GB RAM without success: CPU stays at 100% and RAM usage at 11 GB. What's the minimum system you've got Harvester to function on?
    • 3
    • 4
  • w

    wonderful-pizza-30919

    12/04/2022, 4:52 AM
    General question: I have four servers and was thinking of dedicating three of them to Harvester. However, do I need an external DNS and DHCP server in order to deploy the Harvester cluster?
    • 3
    • 5
  • s

    sticky-summer-13450

    12/04/2022, 12:56 PM
    When upgrading a Harvester cluster from v1.1.0 to v1.1.1, how long should I wait while watching logs like this:
    $ kubectl logs deployment/rancher -n cattle-system --context=harvester --since=1m --follow
    ...
    evicting pod longhorn-system/instance-manager-r-1503169c
    evicting pod longhorn-system/instance-manager-e-029f5eba
    error when evicting pods/"instance-manager-r-1503169c" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
    error when evicting pods/"instance-manager-e-029f5eba" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
    evicting pod longhorn-system/instance-manager-r-1503169c
    evicting pod longhorn-system/instance-manager-e-029f5eba
    error when evicting pods/"instance-manager-e-029f5eba" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
    error when evicting pods/"instance-manager-r-1503169c" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
    evicting pod longhorn-system/instance-manager-r-1503169c
    evicting pod longhorn-system/instance-manager-e-029f5eba
    FWIW I got bored after watching them for around 15 minutes and killed the pods myself:
    $ kubectl delete pod instance-manager-r-1503169c -n "longhorn-system" --context=harvester; kubectl delete pod instance-manager-e-029f5eba -n "longhorn-system" --context=harvester
    That got the upgrade moving again. This happened at the post-drain phase of the 2nd and 3rd nodes of a 3-node Harvester cluster.
    • 2
    • 4
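    Before deleting instance-manager pods by hand, it can be worth checking which PodDisruptionBudgets are blocking the eviction and whether the Longhorn volumes they serve are still degraded; a sketch:
    $ kubectl -n longhorn-system get pdb
    $ kubectl -n longhorn-system get volumes.longhorn.io        # robustness should be healthy before a drain can proceed
    $ kubectl -n longhorn-system get pods -o wide | grep instance-manager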
  • w

    wonderful-pizza-30919

    12/04/2022, 11:36 PM
    Would you mind sharing the steps and script for Terraform? Thanks
  • w

    wonderful-pizza-30919

    12/04/2022, 11:39 PM
    Hi, Is the VMDK import capability available now in Harvester or still on the roadmap?
    • 2
    • 2
  • q

    quaint-alarm-7893

    12/05/2022, 6:16 AM
    Hello everyone, I want to upgrade my cluster from v1.0.3 to v1.1.1 to prevent the corruption issues I've been having (which are fixed in the Longhorn upgrade). Are there any pre-upgrade tasks I can run to ensure a smooth upgrade? This is a production cluster, so I don't want to have issues with it hanging up.
    • 2
    • 22
  • w

    wonderful-pizza-30919

    12/06/2022, 3:11 PM
    What is the maximum number of VMs I can provision in a three-node cluster, not considering memory and CPU limitations? Does Harvester have a limit on the number of running VMs? What is the max number of replicas for a VM?
    • 3
    • 4
  • q

    quaint-alarm-7893

    12/06/2022, 9:46 PM
    Is there a way to force Harvester to re-scan an NFS backup target? I have two clusters, and if I back up in one cluster and want to restore into the second cluster, I have to wait a while for the backup to show up once it's done.
    • 2
    • 7
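    There is no explicit rescan action as far as I know, but re-saving the backup-target setting generally prompts the controller to resync metadata from the target. A sketch of doing that from the CLI on the restoring cluster:
    # inspect the current value
    $ kubectl get settings.harvesterhci.io backup-target -o yaml
    # re-applying the same value (or toggling it off and back on) triggers a resync
    $ kubectl edit settings.harvesterhci.io backup-target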
  • c

    crooked-scooter-58172

    12/07/2022, 7:47 PM
    Team, it seems that Harvester doesn't consider the requests.memory param when assigning memory to a VM. It always uses the limit memory (minus some overhead) and assigns that much memory to the VM. I'm just wondering whether others have faced a similar issue. If yes, any suggestions?
    • 2
    • 4
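    If the observation holds, the guest size comes from the limit (minus overhead) and requests.memory mainly affects scheduling and overcommit; one way to control the guest-visible size explicitly is KubeVirt's domain.memory.guest field, set by editing the VM YAML. A sketch with placeholder values:
    # excerpt of spec.template.spec.domain in a VirtualMachine
    memory:
      guest: 4Gi              # what the guest OS actually sees
    resources:
      requests:
        memory: 2Gi           # what the scheduler reserves
      limits:
        memory: 4Gi           # hard cap for the virt-launcher pod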
  • c

    crooked-scooter-58172

    12/08/2022, 2:33 PM
    Hello team, I have downloaded the harvesterhci source code. Just wondering whether anyone can direct me to the function that gets called when the Harvester UI triggers a create-VM call. Thanks in advance.