# general
h
@melodic-account-62504 Rancher uses an ingress controller for load balancing the API server requests to the master nodes. When you run a kubectl command, it sends a request to the k8s API server. The context and credentials for connecting to the correct k8s cluster are stored in the kubeconfig file, which typically points to the URL of your Rancher server.
The request is then received by the ingress controller of the Rancher server, which is responsible for managing external access to the services in a cluster. The ingress controller, which typically uses a round-robin strategy, forwards the request to one of the API servers running on your master nodes. The master node's API server then processes the request, interacting with the etcd database or the k8s scheduler as needed. After the request is processed, the response travels back through the same path to kubectl.
In your case, if you want a deeper understanding I suggest inspecting the logs of the ingress controller and the API server pods. The logs would show incoming requests and to which master node they're forwarded.
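If it helps, here's how I'd check both ends of that path. The namespace and label below assume an nginx ingress controller in the ingress-nginx namespace, and the API server pod name assumes a kubeadm-style control plane, so adjust both to your setup:
```sh
# Confirm which server URL your kubeconfig points at
# (for Rancher-managed clusters this is usually the Rancher URL)
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Ingress controller logs -- namespace/label assume an ingress-nginx install
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50

# API server logs -- the pod name assumes kube-apiserver runs as a
# static pod in kube-system, as in kubeadm-style setups
kubectl logs -n kube-system kube-apiserver-<master-node-name> --tail=50
```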
m
Thanks @hundreds-battery-84841 for your answer. Sure, I will check the ingress controller logs, but I have already checked the API server pod logs, which show nothing. I will check again. Thanks
h
Sure
m
@hundreds-battery-84841 do you have any docs or link where i can find the similar scenarios for working env?
h
Can you please be more specific? Working env? What docs exactly do you want?
About ingress?
m
Yes about ingress working in multimaster setup
h
Most of my knowledge comes from experience and the resources I've read throughout my career. I mostly use GOOGLE + official docs. There's really no magic in understanding distros like k3s/rancher, etc. You just need a good understanding of k8s, its architecture, and how it works. In the end, all of them are just k8s cluster managers.
I would say try to dig more on k8s api objects.
In k8s, INGRESS is an API object that manages external access to the services in a cluster. Ingress can provide load balancing, SSL termination, and name-based virtual hosting. When you have a k8s setup with multiple master nodes, it means you have an HA k8s cluster. HA clusters improve the reliability of systems by adding redundancy to your infrastructure, preventing a single point of failure.
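To make that concrete, a minimal ingress resource looks roughly like this. The host and service names are made-up placeholders, not anything from your cluster:
```sh
# Create a minimal Ingress routing one host to one backend service.
# example.mycompany.com and my-service are hypothetical names.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.mycompany.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
EOF
```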
When setting up an ingress controller in a multi-master environment, it will generally only run on a single node at a time, but it can be scheduled on any of the master nodes as needed. If the node running the ingress controller fails, k8s will automatically reschedule the ingress controller on another node. You can use a Service of type LoadBalancer for the ingress controller, which creates an external load balancer in your cloud (if your cloud supports it, of course) and assigns a fixed external IP to the service. All master nodes are registered with the load balancer, and incoming traffic is distributed across them. For example, on AWS, the AWS Load Balancer Controller satisfies ingress resources by provisioning Application Load Balancers.
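A quick way to see whether your controller is exposed that way is to look at its Service. The service and namespace names below assume a standard ingress-nginx install, so swap in yours:
```sh
# If TYPE is LoadBalancer and EXTERNAL-IP is populated, your cloud
# provisioned a load balancer for the controller
kubectl get svc ingress-nginx-controller -n ingress-nginx
```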
In a multi-master setup, the load balancer distributes incoming traffic to all master nodes. The ingress controller on each master node then handles the traffic by forwarding it to the appropriate service in the cluster. For even higher availability, you can set up multiple replicas of the ingress controller. With multiple replicas, if one of the instances fails, k8s can automatically switch to another one.
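Scaling the controller is just scaling its Deployment, assuming it runs as one (names again assume an ingress-nginx style install):
```sh
# Run three controller replicas for higher availability
kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=3
```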
Now while it's typical to run the ingress controller on the master nodes, it's not strictly necessary. The ingress controller can run on any node within the cluster.
m
Thanks for the detailed answer. Yes, you are right; in my setup it is running on a worker node. The logs are not showing any activity for incoming traffic. I think you are right here, but if we have any docs, or can somehow get logs or something, that would make our claim 100% that ingress is doing the load balancing here for the API servers
h
Checking whether ingress is load balancing is not difficult. You can start by checking the ingress status: kubectl describe ingress <ingress-name>
m
Yes, Ingress is up and running, I already checked
h
You can also check the ingress controller pod logs: kubectl logs ingress_controller_pod_name -n namespace
So you checked and it's working. What is it that you want then? What exactly do you want to see?
m
Ingress is working, but its logs are not showing traffic flow for API server load balancing as you said
h
What's the output of: kubectl describe ingress the_ingress_name -n namespace?
I'm out for today, so later.
m
Sure Thanks
No ingress has been created in my cluster, which means maybe the load balancing is not related to ingress here
m
what is the output of
kubectl get ingress -A
I don't know if it was made clear. For ingress to be working, you need some ingress controller pods, and then you need an ingress resource for the host and path you want the traffic to be redirected to
In your case, somewhere in the cluster there should be an ingress resource for the kube API service
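If Rancher itself was installed with its Helm chart, its ingress usually lives in the cattle-system namespace. That's an assumption about your install, so adjust the namespace if yours differs:
```sh
# Look for Rancher's own ingress resource (assumes a Helm install of Rancher)
kubectl get ingress -n cattle-system
```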
m
"No ingress found" is the output of the above command
m
Well that would explain that
m
Yes, an ingress controller pod is available in kube-system
m
probably related to the default ingressClassName
there should be at least one IngressClass entry that is also the default class
if that is not available, no ingress resource could be created
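you can check whether any IngressClass exists and which one is the default like this. The class name nginx is just an example:
```sh
# List IngressClasses; the default is flagged via this well-known annotation
kubectl get ingressclass

# Mark a class as the cluster default -- "nginx" is an assumed class name
kubectl annotate ingressclass nginx \
  ingressclass.kubernetes.io/is-default-class="true"
```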
m
Thanks, sure I will check
b
I think Rancher uses an ingress controller to expose its own API, which `kubeconfig` can route through. But then each downstream cluster has a `cattle-agent` that the request flows through to the downstream API server
so the ingress controller won't show you which API server you're hitting. It'll show you which Rancher manager pod your request lands at
and from there onto the `cattle-agent` in the downstream cluster. I believe the `local` cluster (i.e., where Rancher is installed) works the same -- there's a `cattle-agent` pod that receives requests from the Rancher API
you can also use a cluster endpoint in your kubeconfig to point directly to the cluster's API servers (Rancher management cluster or downstream) to skip the Rancher API hop
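if you enable that (Rancher calls it the authorized cluster endpoint), the generated kubeconfig gets an extra context that talks to the API servers directly, and you can switch between them with kubectl. The context name below is just an example, not something from your cluster:
```sh
# List the contexts in the Rancher-generated kubeconfig
kubectl config get-contexts

# Switch to the direct (non-proxied) context -- name is an example
kubectl config use-context my-cluster-fqdn
```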