# harvester
a
Hi everyone! I’ve run into an issue during the upgrade process and would really appreciate any insights or suggestions: [BUG] Harvester upgrade to v1.5.0 succeeded but host version remains 1.4.2, harvester-operator missing #8801 (https://github.com/harvester/harvester/issues/8801). Thanks in advance for any help!
supportbundle_e28a4a19-5076-46fc-a187-df0b9123326c_2025-08-04T08-02-27Z.zip
the new one:
supportbundle_e28a4a19-5076-46fc-a187-df0b9123326c_2025-08-14T11-53-59Z.zip
c
Hi @astonishing-gpu-76609, there is no yamls folder in the support bundle; do you mind generating the support bundle again? Thanks! To push this forward, could you check the resources directly?
# Use this command to get the latest upgrade
kubectl get upgrades -A

# Then paste the output here.
kubectl get upgrades hvst-upgrade-xxx -n harvester-system -o yaml

# Then, get the managedChart again
kubectl get managedCharts -A

# Then, get the Harvester version; I'd like to check this again
kubectl get managedCharts harvester -n fleet-local -o=jsonpath='{.spec.version}'
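Since the question is whether the hosts themselves are still on v1.4.2, one more cross-check may help (a sketch; on Harvester hosts the kubelet's `osImage` field should report the Harvester OS build, but verify on your cluster):

```shell
# List each node with the OS image reported by the kubelet;
# on Harvester hosts this should name the Harvester OS version.
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS-IMAGE:.status.nodeInfo.osImage
```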
a
[11:45][ HARVESTER ]~/DevOps/deployments ➭ kubectl get upgrades -A
NAMESPACE          NAME                 AGE
harvester-system   hvst-upgrade-hr4zm   96d
harvester-system   hvst-upgrade-lq7pc   17d
[11:45][ HARVESTER ]~/DevOps/deployments ➭ kubectl get upgrades hvst-upgrade-lq7pc -n harvester-system -o yaml
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  creationTimestamp: "2025-07-28T11:55:01Z"
  finalizers:
  - wrangler.cattle.io/harvester-upgrade-controller
  generateName: hvst-upgrade-
  generation: 2
  labels:
    harvesterhci.io/latestUpgrade: "true"
    harvesterhci.io/upgradeState: PreparingLoggingInfra
  name: hvst-upgrade-lq7pc
  namespace: harvester-system
  resourceVersion: "2058143092"
  uid: 358c07e8-a45d-4687-9714-8cac56208b61
spec:
  image: ""
  logEnabled: true
  version: v1.5.1
status:
  conditions:
  - status: Unknown
    type: Completed
  - status: Unknown
    type: LogReady
  previousVersion: v1.5.0
  upgradeLog: hvst-upgrade-lq7pc-upgradelog
[11:46][ HARVESTER ]~/DevOps/deployments ➭ kubectl get upgrades hvst-upgrade-hr4zm -n harvester-system -o yaml
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  annotations:
    harvesterhci.io/auto-cleanup-system-generated-snapshot: "true"
    harvesterhci.io/image-cleanup-plan-completed: "true"
  creationTimestamp: "2025-05-10T10:21:02Z"
  finalizers:
  - wrangler.cattle.io/harvester-upgrade-controller
  generateName: hvst-upgrade-
  generation: 217
  labels:
    harvesterhci.io/read-message: "true"
    harvesterhci.io/upgradeState: Succeeded
  name: hvst-upgrade-hr4zm
  namespace: harvester-system
  resourceVersion: "1891572032"
  uid: 9259e436-ae41-4cf9-9329-dafc056ccdfd
spec:
  image: ""
  logEnabled: true
  version: v1.4.2
status:
  conditions:
  - status: "True"
    type: Completed
  - status: "True"
    type: LogReady
  - status: "True"
    type: ImageReady
  - status: "True"
    type: RepoReady
  - status: "True"
    type: NodesPrepared
  - status: "True"
    type: SystemServicesUpgraded
  - status: "True"
    type: NodesUpgraded
  imageID: harvester-system/hvst-upgrade-hr4zm
  nodeStatuses:
    kotygoroshko:
      state: Succeeded
    kotygoroshko02:
      state: Succeeded
    kotygoroshko03:
      state: Succeeded
    node02:
      state: Succeeded
    node03:
      state: Succeeded
    node04:
      state: Succeeded
    node05:
      state: Succeeded
    node06:
      state: Succeeded
    node07:
      state: Succeeded
    node08:
      state: Succeeded
    node09:
      state: Succeeded
    node10:
      state: Succeeded
    node11:
      state: Succeeded
    node12:
      state: Succeeded
    node13:
      state: Succeeded
    node14:
      state: Succeeded
    node15:
      state: Succeeded
    node16:
      state: Succeeded
    node17:
      state: Succeeded
    node18:
      state: Succeeded
    node19:
      state: Succeeded
    node20:
      state: Succeeded
    node21:
      state: Succeeded
    node22:
      state: Succeeded
    node23:
      state: Succeeded
    node24:
      state: Succeeded
    node25:
      state: Succeeded
    node26:
      state: Succeeded
    node27:
      state: Succeeded
  previousVersion: v1.4.1
  repoInfo: |
    release:
      harvester: v1.4.2
      harvesterChart: 1.4.2
      os: Harvester v1.4.2
      kubernetes: v1.31.4+rke2r1
      rancher: v2.10.1
      monitoringChart: 103.1.1+up45.31.1
      minUpgradableVersion: v1.4.1
[11:46][ HARVESTER ]~/DevOps/deployments ➭ kubectl get managedCharts -A
NAMESPACE     NAME                                      AGE
fleet-local   harvester                                 462d
fleet-local   harvester-crd                             462d
fleet-local   hvst-upgrade-lq7pc-upgradelog-operator    17d
fleet-local   local-managed-system-upgrade-controller   462d
fleet-local   rancher-logging-crd                       462d
fleet-local   rancher-monitoring-crd                    462d
[11:46][ HARVESTER ]~/DevOps/deployments ➭ kubectl get managedCharts harvester -n fleet-local -o=jsonpath='{.spec.version}'
1.5.0%
but it looks like that on each node
c
still v1.4.2? I thought you’ve upgraded to v1.5.0
a
That’s exactly the point — in some places it shows 1.4.2, the cluster is 1.5.0, and after attempting to upgrade further, the logs say 1.5.1.
c
I think you could try to delete the hvst-upgrade-lq7pc upgrade resource and the hvst-upgrade-lq7pc-upgradelog-operator managedChart resource. Since it’s still in the log preparation phase, it should be fine.
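Spelled out as commands (a sketch using the resource names from the outputs above; double-check each name and namespace before deleting anything):

```shell
# Remove the stuck v1.5.0 -> v1.5.1 upgrade resource
kubectl delete upgrade hvst-upgrade-lq7pc -n harvester-system

# Remove the managedChart that was created for its upgrade-log operator
kubectl delete managedchart hvst-upgrade-lq7pc-upgradelog-operator -n fleet-local
```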
Is only node09 still in v1.4.2?
a
no, all of them
c
May I know the details of the v1.4.2 to v1.5.0 upgrade? Because, like I mentioned on GitHub, I couldn’t see any upgrade resource related to the upgrade from v1.4.2 to v1.5.0. Did you try to stop the upgrade while it was running?
The latest upgrade (hvst-upgrade-lq7pc) is from v1.5.0 to v1.5.1, so I’m thinking there might have been an error during the v1.4.2 to v1.5.0 upgrade, like deleting that upgrade resource before it finished.
a
Look, I don’t know all the details of what happened. As my colleague told me, there was an error with the Longhorn volumes, and after that — according to him — he “nudged” the upgrade and it started moving, but then it got completely stuck again. That’s basically the state I’m seeing now, and that’s what I’m sharing with you
but yes, you’re absolutely right — the error occurred during the 1.4.2 → 1.5.0 upgrade
c
Thanks for that. I’m trying to understand the details because it seems weird to me that there is no v1.4.2 -> v1.5.0 upgrade resource, only v1.5.0 to v1.5.1. That’s why I’m curious about how your colleague “nudged” the upgrade.
a
If I only knew, I would tell you everything, but unfortunately I can’t, since he resigned right after several incidents
c
@red-king-19196 Do you have any ideas about this? They tried to upgrade from v1.4.2 to v1.5.0; however, I found the upgrade resources don’t match. The latest upgrade resource is from v1.5.0 to v1.5.1, and it stays in the LogReady phase. The previous upgrade is from v1.4.1 to v1.4.2. All of the nodes still remain on v1.4.2, but the image of the deployment is v1.5.0. Hence, I’m thinking the v1.4.2 to v1.5.0 upgrade resource might have been deleted. Do you mind pointing us to a possible workaround? You can use this support bundle to check the cluster.
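One way to confirm the deployment image mentioned above (a sketch; the deployment name and container index are assumptions, adjust to match the cluster):

```shell
# Print the image of the first container in the harvester deployment;
# its tag should reveal which version the control plane is actually running.
kubectl get deployment harvester -n harvester-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```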
a
just-generated support bundle
@clean-cpu-90380 @red-king-19196 Hi, do you have any ideas? What can be done with the cluster in this situation?
c
Zespre left a comment on GitHub; you can check it out.
a
added the new support bundle