# harvester
prehistoric-balloon-31801
In fact, this might cause the upgrade to fail. Can you list the bundle status?
```
kubectl get bundles -A
```
quaint-alarm-7893 (albert)
@prehistoric-balloon-31801 sorry, I didn't see your reply.
```
NAMESPACE     NAME                                          BUNDLEDEPLOYMENTS-READY   STATUS
fleet-local   fleet-agent-local                             1/1
fleet-local   local-managed-system-agent                    1/1
fleet-local   mcc-harvester                                 0/1                       Modified(1) [Cluster fleet-local/local]; storageclass.storage.k8s.io harvester-longhorn modified {"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}; storageclass.storage.k8s.io single-replica extra
fleet-local   mcc-harvester-crd                             1/1
fleet-local   mcc-local-managed-system-upgrade-controller   1/1
fleet-local   mcc-rancher-logging                           1/1
fleet-local   mcc-rancher-logging-crd                       1/1
fleet-local   mcc-rancher-monitoring                        0/1                       ErrApplied(1) [Cluster fleet-local/local: another operation (install/upgrade/rollback) is in progress]
fleet-local   mcc-rancher-monitoring-crd                    1/1
```
prehistoric-balloon-31801
Thanks. The status of the mcc-rancher-monitoring-crd chart is a known issue; please check this link for how to fix it: https://github.com/harvester/harvester/issues/1983#issuecomment-1260122413
And for mcc-harvester, can you capture the output of these two commands:
```
kubectl get bundles -n fleet-local -o yaml
kubectl get sc harvester-longhorn -o yaml
```
I don't expect sensitive info in these command outputs, but feel free to DM me if you feel anything in them is sensitive.
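For context, the usual way to clear Helm's "another operation (install/upgrade/rollback) is in progress" error, and what ends up being done later in this thread, is to roll back the stuck release to its last good revision. A minimal sketch, assuming the rancher-monitoring release lives in the cattle-monitoring-system namespace and <REV> is a placeholder for the last revision that reached "deployed":
```
# Show release state; a stuck release is typically pending-install/pending-upgrade/pending-rollback
helm -n cattle-monitoring-system list --all
helm -n cattle-monitoring-system history rancher-monitoring

# Roll back to the last revision that completed successfully
helm -n cattle-monitoring-system rollback rancher-monitoring <REV>
```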
quaint-alarm-7893
Thanks! I'll fix the mcc-rancher-monitoring issue, and then get you the mcc-harvester outputs. Standby.
Now I have this:
DM'd you those bundles.
Fixed Grafana by attaching the PVC to the node Grafana was on; that seemed to bring it right up. As for the harvester issue, I also see this in the Deployments in Harvester's RKE:
@prehistoric-balloon-31801 ^
@prehistoric-balloon-31801 any follow-up on the Harvester deployment?
Hi @prehistoric-balloon-31801, any feedback on getting Harvester back working? DM'd you those bundles the other day. Let me know what you think.
prehistoric-balloon-31801
Hi Albert, I received the bundle, but I'm on PTO today; I'll check it as soon as possible!
quaint-alarm-7893
No worries! Assuming PTO is paid time off, enjoy! 🙂 Thanks for at least hitting me back and letting me know.
Hi Kiefer, any chance you've looked at why my installed Harvester chart failed? I'd like to get the apps cleaned up so I can upgrade to 1.1.2 soon.
prehistoric-balloon-31801
Sure, can you check the deployment rancher-monitoring-grafana in the cattle-monitoring-system namespace? The bundle says it's not ready.
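A quick way to check that from the CLI; a sketch using the deployment and namespace named above (the grafana pod label selector is an assumption based on common chart conventions):
```
# Deployment status and recent events
kubectl -n cattle-monitoring-system get deployment rancher-monitoring-grafana
kubectl -n cattle-monitoring-system describe deployment rancher-monitoring-grafana

# Pod state (label selector assumed, adjust if the pods are labeled differently)
kubectl -n cattle-monitoring-system get pods -l app.kubernetes.io/name=grafana
```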
quaint-alarm-7893
Hi @prehistoric-balloon-31801, any chance you can help me with my bundle issue now? We keep missing each other. Got this from Zespre a while ago, but missed his response too: albert [4:18 PM] Hi everyone. I'd like to prep for an update from v1.1.1 to v1.1.2, but I noticed some bundles have errors. Can anyone help me sort this out:
```
NAMESPACE     NAME                                          BUNDLEDEPLOYMENTS-READY   STATUS
fleet-local   fleet-agent-local                             1/1
fleet-local   local-managed-system-agent                    1/1
fleet-local   mcc-harvester                                 0/1                       Modified(1) [Cluster fleet-local/local]; storageclass.storage.k8s.io harvester-longhorn modified {"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}; storageclass.storage.k8s.io single-replica extra
```
(edited) Zespre Chang [1:58 AM] Hi albert, do you have a support bundle at hand? We can look into it further. It seems something has changed in the harvester-longhorn StorageClass and the controller is trying to reconcile it.
@red-king-19196 any chance you got a sec to look at this? ^
red-king-19196
I was just trying to find where the support bundle file is. Thanks for bringing me here.
quaint-alarm-7893
@red-king-19196 any love?
prehistoric-balloon-31801
@quaint-alarm-7893 It's a known issue in 1.1.1. Can you make harvester-longhorn the default storage class and check if the message is gone?
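A sketch of how to do that with kubectl; the annotation is the same one shown in the bundle diff earlier, and the second command only applies if another class (e.g. single-replica) is currently marked as default:
```
# Mark harvester-longhorn as the default StorageClass
kubectl patch storageclass harvester-longhorn \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Unset the default flag on the other class, if it has one
kubectl patch storageclass single-replica \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```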
quaint-alarm-7893
So now I see this with kubectl get bundles -A:
```
fleet-local   mcc-harvester                                 0/1                       Modified(1) [Cluster fleet-local/local]; storageclass.storage.k8s.io single-replica extra

fleet-local   mcc-rancher-monitoring                        0/1                       Modified(1) [Cluster fleet-local/local]; prometheus.monitoring.coreos.com cattle-monitoring-system/rancher-monitoring-prometheus missing
```
I did the helm rollback on both rancher-monitoring and rancher-monitoring-crd; it seems to have worked for the crd.
Should I do the same for mcc-harvester (helm rollback), or should I do something else? Also, maybe worth noting, the only thing showing as pending now is harvester:
I tried to generate the support bundle but I keep getting a timeout via the UI, so I ran this:
```
k get bundle -o yaml > /mnt/c/Users/albert.kohl/bundle.yaml
```
@prehistoric-balloon-31801 ^^
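When the UI support bundle times out like this, the raw objects requested earlier in the thread can be collected directly; a sketch (output file names are arbitrary, and BundleDeployment is the Fleet CRD backing the bundle status):
```
# Fleet bundle and bundle-deployment state, plus the StorageClass under discussion
kubectl get bundles -n fleet-local -o yaml > bundles.yaml
kubectl get bundledeployments -A -o yaml > bundledeployments.yaml
kubectl get sc harvester-longhorn -o yaml > harvester-longhorn-sc.yaml
```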
red-king-19196
```
summary:
      desiredReady: 1
      modified: 1
      nonReadyResources:
      - bundleState: Modified
        modifiedStatus:
        - apiVersion: storage.k8s.io/v1
          delete: true
          kind: StorageClass
          name: single-replica
        name: fleet-local/local
      ready: 0
```
Is the StorageClass single-replica being used? If not, getting rid of it may bring the bundle status back.
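A quick sketch of checking whether anything still references single-replica before removing it (simple grep on the STORAGECLASS column; adjust to taste):
```
# Any PVCs or PVs still using the single-replica StorageClass?
kubectl get pvc -A | grep single-replica
kubectl get pv | grep single-replica

# If nothing turns up, deleting the extra class should let the bundle reconcile
kubectl delete storageclass single-replica
```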
I noticed that almost all the metadata of single-replica is the same as that of harvester-longhorn. How did you create it?
quaint-alarm-7893
Yeah, I created it. Not sure if I'm using it any more; I'm checking.
👍 1