# general
b
They're probably already there. Are you trying to set up a LoadBalancer in the web UI, or somewhere else?
When you click on the VM it's there near the top:
b
yes, those exist, but I want something like cluster: rocky so I can preconfigure it regardless of what generated host names get assigned
b
You can add custom labels when you launch the cluster to take care of that part
b
not sure I get what you mean
b
When you create a cluster, add a custom label you want to use
b
I tried adding stuff there but they didn't appear to do anything
b
Does it show up in the VM in the Harvester view?
b
it didn't last night, let me recheck
b
Put screenshots of both the cluster view from rancher, and the VM view from harvester if you can
b
added a label cluster set to rocky and increased the number of nodes to 2; the new VM got created, but I can't see the expected label:
b
I screwed you up, sorry, that's the cluster labels, not labels for the machine pool
b
no worries
I still have my attempted label add from last night in the pool;
b
You probably need to do it fresh
or add them manually
Actually I'm realizing that the second one we talked about might be for inside the cluster and not the VM labels
b
the one label shows up here in the cluster admin, at least:
b
Yeah that's the cluster object inside of the rancher cluster
and the first one we tried/mentioned
if you were to do kubectl describe node <node> from inside your rocky cluster you'd see it there from the second one we did
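e.g., something like this from a shell pointed at the downstream cluster (the node name is just a placeholder):
```
# run against the rocky cluster's kubeconfig, not the Harvester one
kubectl get nodes --show-labels
kubectl describe node <node>
```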
b
can see them here
b
yeah that's the machine object but not the VM object inside of Harvester
b
cool, so back to my goal, how do I load balance a cluster on harvester? 🙂
b
If you want to LB in Harvester, that's the one you're gonna have to append/edit
It's the VM label
not these other ones
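If you'd rather do it from the CLI than the Harvester UI, it's just a normal label on the KubeVirt VirtualMachine object; the name and namespace below are placeholders, check kubectl get vm -A first:
```
# run against the Harvester cluster's kubeconfig
kubectl get vm -A --show-labels
kubectl label vm <generated-vm-name> -n <namespace> cluster=rocky
```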
b
but the vm label is randomized and changes over time?
b
Yes to randomized for the name, no to changes over time
b
it doesn't seem right that I need to reconfigure the LB each time the cluster definition changes?
b
Unless you have autoscaling it's not going to change
b
well, I could change some settings as part of a test, right?
b
Unless Rancher kills the node and re-deploys it, it shouldn't change
b
that depends on what changes I make, and I could very well rescale it
hard-locking the LB to these generated annotations just doesn't feel right to me
b
So append a different label.
b
manually to each created vm? that also seems just wrong, I should be able to add instance labels from the cluster definition, surely?
b
write a script?
b
I'd be more inclined to write a patch, if I knew where to look 🙂
b
Well I'm not a dev and I don't work for SUSE, but I'm guessing it's something in the provisioner section that's mentioned in that existing label.
Which you could use instead if you only have 1 downstream cluster.
b
maybe one of these advanced args would be able to do it?
b
but not if you have separate pools for CP and worker nodes
b
true, but if I was targeting that kind of infra, I'd be able to simply use different harvesters to separate them
I'm just going slightly overkill for a homelab here 🙂
b
Well, there are open tickets for LB with downstream clusters, and afaik the officially supported thing is to use nginx externally as an LB.
I can also tell you that I think Rancher generates all the VMs from the HarvesterConfig object it creates
kubectl get HarvesterConfig -A
and looking at it, I don't see anything that would indicate downstream node labels.
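If you want to poke at it yourself, run this against the Rancher management cluster; I think the configs land in the fleet-default namespace, but check the -A output to be sure:
```
kubectl get HarvesterConfig -A
kubectl -n fleet-default get HarvesterConfig <name> -o yaml
```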
b
that's a shame, I was hoping I could leverage the Harvester LB; I got pointed in that direction last time I was trying to set something up using kube-vip inside a VM and failing
b
You still can, just not that automagically
You can also write a simple bash script and run it on a cron to add the labels as a hacky workaround, or just manage it by hand.
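A rough sketch of that script, just as a starting point; the namespace, name prefix, label, and script path are all guesses you'd adjust to whatever your VMs actually look like:
```
#!/usr/bin/env bash
# Hacky label sync, run on a cron against the Harvester cluster's kubeconfig.
# Everything below (namespace, prefix, label) is an assumption; check
# "kubectl get vm -A --show-labels" and adjust to match your setup.
NAMESPACE="default"     # namespace the guest-cluster VMs live in
PREFIX="rocky-"         # generated machine-name prefix
LABEL="cluster=rocky"   # stable label for the Harvester LB to select on

kubectl get vm -n "$NAMESPACE" -o name \
  | grep "virtualmachine.kubevirt.io/${PREFIX}" \
  | xargs -r -I{} kubectl label --overwrite -n "$NAMESPACE" {} "$LABEL"

# example crontab entry, every 5 minutes:
# */5 * * * * /usr/local/bin/label-rocky-vms.sh
```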
b
I think I'll just leave it, not like I need this for my use case, just wanted to play around with it