# harvester
c
Hopefully that makes sense too
I've also got a workaround for this by changing the kube-vip daemonset to only deploy on the workers. I'm not sure why it's defaulting to the management nodes
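Roughly this slice of the kube-vip DaemonSet spec, as a sketch (the worker label is what I'd expect Rancher to set on agent nodes, not something I double-checked):
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: "true"   # only schedule kube-vip on worker nodes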
t
did you add the harvester cloud provider?
correction are you running it from VMS or on the harvester kube itself?
you need to add the cloud provider
c
Thanks for the response, I'm running the Harvester cloud provider
t
So Rancher Vcluster deploying vms?
Side note v1.30 is stable.
c
Yep thats right
hmm really? I didn't realize Rancher was promoting the non-stable versions
t
Really. Try 1.30?
c
Yeah, I figured everything they showed in their UI was the stable version
t
Nope.
c
Good to know, thanks. I'm deploying another cluster with version 1.30.6.
t
Check dzver.rfed.io. I wrote an app that scrapes GitHub.
c
Looks like v1.30 cluster loadbalancers still use the master node.
The kube-vip daemonset runs on the master nodes. Then when I request a loadbalancer via the Harvester cloud provider, it appears to configure the IP address on that master node/vipHost
Also thanks for that. Bookmarked!
Hauler looks interesting too
For context, what I'm trying to do is add a LoadBalancer type to the rke2-ingress service so I can point a wildcard record at my loadbalancer/ingress.
t
you can actually skip the LBs and VIPs with better control of DNS and nginx ingress.
c
How? Got a video? I'm still newish to Kubernetes. There seem to be a lot of ways to do the same thing lol. Still trying to figure out all the options
t
I will make one for you tomorrow. šŸ˜„
c
Cool lol thanks. Are you a Harvester/Rancher Terraform user? I've got everything in Terraform currently
t
I am a harvester / bash kind of guy. šŸ˜„
c
Nice!
t
Hope this helps
Kubernetes Ingress/VIP/Load Balancer Conversation: to LB or not to LB -

https://youtu.be/-Qih8pLriFY

šŸ‘ 2
m
@creamy-crayon-86622 the vip is used by other worker nodes to join the cluster, hence it needs to be on the mgmt node
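e.g. a worker's rke2 agent config usually points at that vip to register, something like this (sketch only, the address and token are placeholders):
# /etc/rancher/rke2/config.yaml on a joining worker
server: https://<cluster-vip>:9345   # the kube-vip address on the mgmt nodes
token: <node-join-token>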
t
Oh you want to use it for rke2 control plane. The same idea works.
m
iiuc, you are trying to add a wildcard ingress in front of the kube-system/ingress-expose LB service, to route (application?) traffic through the rke2-ingress-controller, to the worker nodes
i am not sure if you want to use the mgmt vip for that purpose, since it's meant for cluster management
t
That is how I run my clusters. To the pods.
m
so your vip is exposed to north-south?
t
Yup.
Actually, I don't use a VIP. I use multiple DNS records. But I know people, I have customers, that use VIPs because they like how it works. And the gist is there are many ways to do it.
m
interesting
in Neil's case, assuming the rke2-ingress pods are running on all master and worker nodes (because daemonset), it should just work
just wondering why there is a need to change kube-vip to run only on workers
t
Honestly, there really isn’t.
c
Thanks @thousands-advantage-10804 for the video, I'm still watching it. Thanks for the shoutout lol. Also my rke2-ingress pods are only running on the worker nodes, not masters. That's the default out of the box, at least with the Rancher rke2-ingress
Here are the pics of my rke2-ingress controller. https://rancher-users.slack.com/archives/C01GKHKAG0K/p1733095611039869?thread_ts=1733092073.749129&cid=C01GKHKAG0K you can see the loadbalancer is using a manager node but the ingress pods are on the workers
I could try setting up the rke2 ingress to deploy to all nodes which may solve the issue.
The only configuration I'm doing with the rke2-ingress is:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      # https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
      config:
        use-forwarded-headers: "true"
        enable-real-ip: "true"
        proxy-buffer-size: "16k"
      publishService:
        enabled: true
      service:
        enabled: true
        type: LoadBalancer
        external:
          enabled: true
        externalTrafficPolicy: Local
So maybe not completely out of the box.
Thanks again for that video Andy, I wish I had that when I was learning about loadbalancers and ingress. That would have helped. To add a bit more clarity tho, I'm using Harvester to create a loadbalancer for my rke2-ingress service. So the data path would be:
user --> homepage.domain.net --> rke2-ingress loadbalancer --> ingress --> service --> homepage app (pod)
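A minimal ingress for that path would look something like this (the host, service name, and port are just placeholders for my homepage app, not the real manifest):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homepage
  namespace: default
spec:
  ingressClassName: nginx            # class registered by rke2-ingress-nginx
  rules:
    - host: homepage.domain.net      # the wildcard *.domain.net record points at the LB IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: homepage       # placeholder service name
                port:
                  number: 3000       # placeholder port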
The pic below shows my new app has a service of type LoadBalancer, which Harvester turns into a loadbalancer automagically for me, but it's showing on that master node while the app/rke2-ingress pods are on the worker nodes.
As far as I can tell, I'm not using the mgmt VIP for any of these loadbalancers. I'm grabbing a new IP address out of my Harvester pool
So this https://rancher-users.slack.com/archives/C01GKHKAG0K/p1733176157135369?thread_ts=1733092073.749129&cid=C01GKHKAG0K is not how things are currently set up. The daemonset only deploys on the workers
t
No worries. Let's think big picture. What are you trying to solve? What is the current setup doing or not doing?
c
My primary goal at the moment is an automated cluster deployment that gives me a single static IP address I can point a wildcard record at. I figured I'd use native services such as the Harvester loadbalancer behind the native RKE2 ingress controller. I've done this at work with OpenStack/Rancher, but Harvester is a bit different it seems
t
Ok. Would it make sense to get on a huddle to show me what you've got? Are you seeing the loadbalancer being created?
c
I'd be down for a huddle whenever you're free. I do see the loadbalancer is getting created. It does work as long as my ingress controller service and kube-vip are on the same nodes
They just don't get created on the same nodes, which maybe is by design
t
Ah, you might not need kube-vip then. In theory the LB will get attached to the nginx service.
c
Okay, so I just added a few tolerations to my rke2-ingress helm chart to deploy ingress controller pods to my control plane nodes, and everything is working as expected
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      # https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
      config:
        use-forwarded-headers: "true"
        enable-real-ip: "true"
        proxy-buffer-size: "16k"
      publishService:
        enabled: true
      service:
        enabled: true
        type: LoadBalancer
        external:
          enabled: true
        externalTrafficPolicy: Local
      kind: DaemonSet # Deploy the ingress controller as a DaemonSet
      tolerations: # Allow the ingress controller to run on control plane nodes
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/etcd"
          operator: "Exists"
          effect: "NoExecute"
      nodeSelector: {} # Allow the ingress controller to schedule on any node
This should give me flexibility if I ever need to deploy a standalone loadbalancer without ingress. Tho I prolly won't need that
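Something like this is all a standalone one would need, I think (sketch only, the names/ports are placeholders and I haven't actually deployed it):
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # placeholder name
  namespace: default
spec:
  type: LoadBalancer           # Harvester cloud provider should grab an IP from the pool
  selector:
    app: my-app                # placeholder selector
  ports:
    - port: 80
      targetPort: 8080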
Thanks for all the assistance guys. Networking is fun bleh
t
lol. glad to be able to point you in ā€œAā€ direction.
c
Yeah, too many directions available lol. This is how I have things set up at work (kinda) as well, so it's a bit better for me
r
so the key is externalTrafficPolicy: Local. Do you want to preserve the client IP address for incoming requests? If that's not your intention, you can change it to Cluster, and you don't have to worry about where the kube-vip pods are running.
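e.g. the same HelmChartConfig values from above, trimmed to the relevant bits, with just that one line flipped (sketch):
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: true
        type: LoadBalancer
        externalTrafficPolicy: Cluster   # any node can forward traffic; client IP is not preserved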
šŸŽ‰ 1
With that said, I do agree we don’t need kube-vip pods running on master nodes in downstream cluster deployments since there is no default VIP for cluster access. I guess we have the nodeAffinity rules merely because we inherited the config from the one we use in the underlying Harvester cluster.
BTW, great video! Thanks @thousands-advantage-10804
t
Thanks. Let me know if there are any video ideas you have.
šŸ‘ 1
a
@thousands-advantage-10804 I was actually hemming and hawing about bothering you via email for a best-practice suggestion from your brain vault. But I read all this, so why not ask here (partially off topic):
1. What do I do if I intend to avoid all in-cluster load balancing? We terminate all certs/TLS on an HA pair of F5 BIG-IPs. Every time I start to build a solid production cluster, I end up with a messy cert-manager, self-signed thing. (My lack of understanding.)
2. We're working with F5 this week to wrap our heads around CIS, but with all the CNI options, I'm not sure which I should be pursuing.
3. Eternal debate: if a STIG'd OS image is the goal, which do you suggest, Ubuntu or Rocky 9?
t
Hey Bryan. We should start a new thread. i’ll tag you.
a
Heck yeah, you got it!