# harvester
b
Coworker from $previousJob is looking at switching. Β―\_(ツ)_/Β―
πŸ‘€ 1
p
The company I'm at got screwed over by VMware (as did many people), so we're replacing everything. We're SUSE partners, so we naturally ended up looking into Harvester.
πŸ™ 1
t
@powerful-easter-15334 how has that been in your experience?
p
@thousands-flower-1918 all issues I encountered were human error. Now, in my opinion, the docs could be much better, and better docs would help reduce the risk of human error. But tbh nothing can beat my network guy plugging the ethernet cable into the iLO port and me wasting a few days wondering why I wasn't getting internet.
πŸ‘ 2
So yeah! Very positive, though I wasn't around to know what VMware is like, so I don't have a good point of reference if you want me to compare.
c
Small question to you guys: would your company, before migrating, have been interested in testing Harvester on a dedicated bare-metal cloud? In other words, would you have done a sort of PoC, or even run a production environment, where Harvester is fully managed?
t
@cool-thailand-26552 there aren't too many offerings of this kind yet, as far as I've seen. But doing a lab build isn't a lot of work for a PoC
It's actually an area where I'm interested in starting a company: managed Harvester
p
@cool-thailand-26552 We had old servers lying around which we started messing around with, and that's what kicked off our Harvester cluster. We also have a local CSP we could've asked for a week of fun on some of their spare bare metal. That said, trying out Harvester "live" would definitely be interesting for many companies over here who are looking to escape from VMware
c
@thousands-flower-1918 I also thought about that, but I fear the market is too small: most companies fleeing VMware have on-premise infrastructure. Most companies beginning a virtualization journey might start with Proxmox. Most of the others, who rely on the cloud, would not really see the benefit of a managed Harvester except maybe on price. But I might be failing to see the potential here.
@powerful-easter-15334 Thanks for the insights!
t
Yeah, price I'd say. I mean, we're talking 2-3 dollars a core a month and nearly 10x lower storage prices
But great points, I also see that. Does seem like something that could be market tested fairly quickly
m
thanks all for the replies. Reading the Harvester troubleshooting today, I can't see my VMware guy having the patience. I think we stay orthogonal and go Proxmox...
p
Huh. What kinda troubleshooting?
If that's something you can share, slightly
m
just the latest in this channel
p
From Albert?
m
the one with the 3-node cluster
p
Okay
b
yeah... in my experience things like that have hit us, but it's always been hardware issues or config stuff on our end.
Can't throw in 10-year-old spinning rust and expect it to perform like NVMe disks.
m
well if they're up for the challenge πŸ™‚ I will prepare to be dazzled
b
Honestly, if it's that important to you, make sure you're doing supported Rancher through SUSE.
m
We are
b
We did PoCs with a few places before landing with them, and they were great.
Reminded me of pre-IBM Red Hat.
m
For some reason we demoed OpenShift first, which left a bad impression on the whole situation
b
I used OpenShift with VMware at my previous job and it was great. Years later, we demoed it at my current job with bare metal and it took months to stand up the cluster. Then we saw the bill and it was a very quick "no". Suse had things up and running with us in like a week and we had meetings with some of the engineering teams we had issues with. It was night and day difference.
πŸ€” 1
m
Ah I know, it was the Ansible Tower consultant that pushed the demo
b
There are still things I like about OKD/OpenShift, but not at that price point, and not after the ordeal of getting it running on bare metal.
m
interesting...
b
Their UI stuff for letting devs quickly launch a container image without knowing a lot of YAML, and their visualizations for deployments, can be very useful. Just not $300,000 useful. Not when support can't help with getting nodes bootstrapped properly in over a month.
m
Right. Well, to justify it, the devs would also have to be moved off Rancher onto OpenShift, and we'd have to drop Rancher and migrate production... a little bit of disruption there 🀯
b
More than a little and not at all worth it imho.
Dealing with vendors will always come with some frustration, but my experience with Harvester and Rancher is that we still get to the teams/engineers who actually know their stuff much more quickly than with others.
πŸ‘ 1
t
Devtron on top of Rancher Kubernetes does a ton of that work as well and costs significantly less
Not as enterprisey, but still a great tool
p
Btw @miniature-notebook-6405 I'll see if I can help your colleague tomorrow. We're also running on old hardware but haven't had this sort of issue.
(I just gotta figure out how to read the support bundles)
m
Just a follow-up article. Seems like it's almost a dereliction of duty now for me to use an on-prem Rancher server to stand up RKE2 clusters. Going forward I'm going to have to take another try at command-line RKE2 plus import, to minimize disruption. https://arstechnica.com/information-technology/2025/01/a-long-costly-road-ahead-for-customers-abandoning-broadcoms-vmware/
Wouldn't it all make sense if Broadcom were getting kickbacks from Oracle to pull all this crap, so customers running Oracle on-prem would finally move to Oracle's cloud...
t
Haha, yeah, might be a take. But last I heard it's just partners suffering, not Broadcom/VMware itself. https://www.crn.com/news/virtualization/2024/broadcom-is-making-shareholders-rich-rivals-happy-and-vmware-partners-bitter
c
@miniature-notebook-6405 you can always use Cluster API, that's the future of Rancher cluster provisioning
m
Wouldn't RKE2 command-line provisioning sort of embed Cluster API, or do I need another client?
c
Well no, Cluster API is a Kubernetes-native, declarative way of provisioning clusters
It relies on the RKE2 CLI, not the other way around
m
Ok, sort of a kubeadm. I see there'd be a plugin model RKE2 would plug into
Is the client kubeadm? (I should do my own research... πŸ™‚
c
The client is a k8s cluster called the management cluster, where you create custom resources that describe the target clusters (called workload clusters)
πŸ™ 1
m
Sort of a Fleet for clusters?
c
You could say that, sort of. It can actually be combined with Fleet to manage clusters via GitOps
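e.g. you could point a Fleet GitRepo on the management cluster at a repo holding those CAPI manifests. Rough sketch, the repo URL and paths here are made up:
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: capi-clusters
  namespace: fleet-local        # Fleet's namespace for the local (management) cluster
spec:
  repo: https://github.com/example/cluster-definitions   # placeholder repo
  branch: main
  paths:
    - clusters/                 # directory of Cluster/RKE2ControlPlane manifests
```
Fleet then keeps the cluster definitions in sync with git, so creating or changing a workload cluster is just a commit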
m
Oh so Fleet can drive the process, that's cool!
@thousands-flower-1918 I read that article, and I don't see how Oracle is an ordinary partner; seems like they'd be waiting with open arms for their database customers to start using OCI to run some of the stuff that depends on the database.
They even get to raise their prices from the increased demand
@cool-thailand-26552 I really, really want in-place upgrades. I know the Rancher Rodeo newbie 101 training line since 2018 has been "clusters are ephemeral", but it's so disruptive because we aren't fully GitOps with our apps and supporting infra... https://www.reddit.com/r/kubernetes/comments/1gglaff/upstream_cluster_api_does_not_support_inplace/
I'm gonna assume I can muddle through for a while with command-line RKE2 and not go CAPI... https://www.siderolabs.com/platform/saas-for-kubernetes/
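(For my own notes: my understanding is that the CLI standup is basically one config file plus the install script. Sketch below; the token and hostname are placeholders:
```yaml
# /etc/rancher/rke2/config.yaml on the first server node.
# After writing it:
#   curl -sfL https://get.rke2.io | sh -
#   systemctl enable --now rke2-server
token: my-shared-secret          # placeholder; reuse on every node that joins
tls-san:
  - rke2.example.internal        # placeholder VIP/DNS for the API server
# Additional servers/agents join by also setting:
# server: https://rke2.example.internal:9345
```
then import the resulting cluster into Rancher from the UI.)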
Well, of course there's the dreaded "You should not import a cluster which has already been connected to another instance of Rancher as it will lead to data corruption." You know I'm aboutta just go k9s... This is the whole reason I said "hell with it" and just had Rancher server stand up everything, since it won't matter one way or the other. You cannot "reimport" an imported cluster into another DR Rancher manager; you only get one chance, is what it seems to be saying 😩. Clusters are ephemeral, the Rancher mgmt server is ephemeral. Need to contain this ephemerality. No important data in there, for sure.