# harvester
so like i thought i was super cool and installed rook-ceph on my harvester nodes. this might not have been a great idea.
Nothing wrong with that IMO. Just be aware Harvester ships with and is tested against Longhorn, and i'm not aware of any method of disabling Longhorn in Harvester (though it's probably not impossible). Ceph and Longhorn are designed for different use cases; at the end of the day both can be used for persistent storage, but they have totally different architectures and therefore different approaches. A SUSE engineer once said in a storage workshop that you probably do not need Ceph, but if your use case fits Ceph's design goals then nothing else will do :) Putting Ceph storage on Longhorn-provided volumes would be a double headache and a fraction of the performance (they both want physical disks).
So i booped rook-ceph onto the harvester nodes with 8x sata on each of the 4 nodes (8x14, 8x20, 8x22, and 8x22 tb). I use nvme for longhorn. I really only run like 8 VMs. I expose ceph via a samba operator for, you know, stuff like tv and photos and whatever. My problem was not realizing until 6 hours in that upgrading harvester was going to suck, because rook-ceph was all 'no you can't drain this node because osd, duh'. I started out with only 2 nodes originally and did dumb stuff with longhorn storage classes (like letting the same host hold more than 1 replica) trying to be clever. At least now i can back up stuff to ceph (nfs) and velero (s3).
The last time i had on-prem stuff it was ESXi on Cisco UCS with iSCSI.
Everything since has been AWS or Google, and now there are whole teams for running k8s and i'm basically an old janitor for stuff that isn't 'cloudy'.
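For anyone hitting the same drain wall: rook-ceph puts PodDisruptionBudgets on its OSDs, so a plain drain blocks until Ceph decides losing those OSDs is safe. A rough sketch of poking at that (node name is made up, and it assumes the standard rook-ceph-tools deployment is installed):

```shell
# List the PodDisruptionBudgets rook-ceph created for the OSDs;
# a drain is refused while evicting an OSD pod would violate one of these.
kubectl -n rook-ceph get pdb

# Hypothetical node name. --ignore-daemonsets is needed because drain
# cannot evict DaemonSet pods; --delete-emptydir-data covers pods using
# emptyDir volumes.
kubectl drain harvester-node-1 --ignore-daemonsets --delete-emptydir-data

# Watch Ceph health from the toolbox pod; the eviction goes through once
# the cluster can tolerate losing that node's OSDs.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```

Rough sketch only, since these commands obviously need a live cluster; the point is that the PDBs, not Harvester itself, are what hold the upgrade hostage.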