#general

bulky-sunset-52084

05/23/2023, 8:43 PM
In order for clusters to talk to Rancher - they need a port to talk to Rancher on. I'm afraid there isn't much of a way around that. Some hosting providers allow you to use tools like ipsec tunnels to create local routes between the hosted network and your own private one. But it completely depends on the provider.

red-waitress-37932

05/24/2023, 8:39 AM
the problem arises because I need connections both ways. If I only needed incoming connections, I could move rancher to the DMZ. If I only needed outgoing connections, I could leave it in the internal network.
actually nm that. rancher in the DMZ with full control over a bunch of clusters in the internal network would be bad™

rough-farmer-49135

05/25/2023, 2:56 PM
As a note, when debugging something else I ended up finding out that Rancher doesn't actually have any kind of hostname/IP info to connect to any of its managed clusters. The clusters all initiate conversations, and apparently the managed control plane host/IP is never actually persisted in Rancher. I found that somewhat counter-intuitive and wasted a fair amount of debugging time assuming it was the reverse.
(this was Rancher 2.6.2 if I recall, so maybe it's changed?)

red-waitress-37932

05/26/2023, 8:15 AM
idk when/if it changed, but rancher shows the node IPs in its interface under cluster management. of course it knows them
I also configured external-dns on my rancher cluster to give each node a DNS name 🙂
(that requires the unstructured-source PR, btw)

rough-farmer-49135

05/26/2023, 11:57 AM
I was having a problem where, after updates & reboots on a cluster, Rancher couldn't find the cluster again. It might've remembered the nodes' IPs, but when I looked for what address it used to talk to the control plane of the managed cluster, I couldn't find anything, and one of the Rancher folks here told me that Rancher doesn't initiate the connection to downstream clusters - it's the downstream clusters that initiate talking to Rancher. Note that the cluster I was using had three control plane/etcd nodes, and I was using a DNS hostname with three A records as a poor man's load balancer between them. That's why I was trying to find out whether it was always going to one IP instead of using the DNS name, and that's where I fell into that rabbit hole.
For knowing the node IPs, I was assuming it was just using what came from
kubectl get nodes
and was caching/displaying it. My understanding of Rancher v2 vs v1 is that in v2 it moved to doing everything in & with existing Kubernetes object types.
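If that's the case, the addresses Rancher shows should match what the Kubernetes API reports directly. A quick way to compare (a sketch, assuming a working kubeconfig for the downstream cluster):

```shell
# List nodes with their internal/external IPs straight from the Kubernetes API.
# If Rancher is just mirroring Node objects, its UI values should match these.
kubectl get nodes -o wide

# Or pull just the internal addresses out of the Node objects themselves:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```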

red-waitress-37932

05/26/2023, 3:12 PM
how would it run
kubectl get nodes
without knowing at least one node IP address?
I'm still looking for a solution to self-host a single rancher instance to control both internal and external clusters without creating an attack vector into our internal network

rough-farmer-49135

05/26/2023, 3:23 PM
I don't have a Rancher instance to check the name, but there's a Rancher service on the downstream clusters that initiates the connection. The Rancher server (2.6.2 at least) doesn't know how to find the cluster on its own otherwise. Once the connection's there and authenticated, it can query all it likes (until the connection goes away if you fully reboot your cluster, which is a bad idea but I was supposed to be testing & documenting things so was actively trying some bad ideas).

bulky-sunset-52084

05/26/2023, 5:07 PM
Bill is right - I just verified against the documentation, and it confirms how Rancher works. The downstream cluster nodes need access to Rancher because the cattle-cluster-agent initiates a connection to the upstream Rancher endpoint and creates a tunnel. Through that tunnel Rancher can talk to 10.43.0.1 (the internal Kubernetes service endpoint) on the cluster. In other words, Rancher talks to the downstream cluster through a proxy provided by the cattle-cluster-agent. So yes, cluster nodes need access to the Rancher server, but no, Rancher doesn't need access to the downstream nodes - except in the case of node provisioning, where it needs access to SSH or the hypervisor API or whatever.
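You can see this from the downstream side (a sketch, assuming the standard cattle-system namespace and agent deployment name):

```shell
# On the downstream cluster: the agent deployment that dials out to Rancher.
kubectl -n cattle-system get deploy cattle-cluster-agent

# The Rancher URL the agent connects to is passed in via the CATTLE_SERVER env var:
kubectl -n cattle-system get deploy cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CATTLE_SERVER")].value}'
```

Note the traffic direction: the agent holds the Rancher URL, not the other way around.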

acceptable-match-53099

05/28/2023, 2:38 PM
@red-waitress-37932 You should be able to tweak Rancher to run on a secondary network interface of the K8s master node (in each specific cloud you have to find out how to add one), and then it won't be exposed on your public IP.

red-waitress-37932

06/05/2023, 7:11 AM
@acceptable-match-53099 the machine that hosts rancher is of course not connected directly to the internet, so the decision to use the public IP does not happen in rancher, but in our public-facing firewall. @bulky-sunset-52084 so since everything goes through a tunnel anyway, couldn't you change rancher to only make outgoing connections?

bulky-sunset-52084

06/05/2023, 1:50 PM
No - the cattle-cluster-agent needs to be able to communicate with the upstream rancher server in order to establish a tunnel.

red-waitress-37932

06/06/2023, 8:04 AM
but can't the upstream rancher server connect to the cattle-cluster-agent (instead of the other way around) in order to establish that tunnel?

rough-farmer-49135

06/06/2023, 1:31 PM
That's where I was saying that Rancher doesn't keep any hostname/IP info to know where to connect. I was of the same mindset as you, since it seemed bizarre to me, but after I'd hunted everywhere I could think of and found no way that Rancher could possibly know where to connect to reach downstream clusters, I accepted the Rancher folks' statements.

red-waitress-37932

06/08/2023, 3:18 PM
It literally does, though. I see it in the data structures through the API. I see it in the kubernetes objects that rancher uses to manage its data:
$ kubectl --context <name of the context for the cluster that rancher runs on> get nodes.management.cattle.io -n c-<API id of the cluster> m-<API id of the node> -o yaml | grep -E ':$|172'
metadata:
  annotations:
  finalizers:
  labels:
spec:
  internalNodeSpec:
    podCIDRs:
  metadataUpdate:
status:
  conditions:
  dockerInfo:
    Labels:
    SecurityOptions:
  internalNodeStatus:
    addresses:
    - address: <control node IP>
    - address: <control node IP>
    allocatable:
    capacity:
    conditions:
    daemonEndpoints:
      kubeletEndpoint:
    nodeInfo:
  limits:
  nodeAnnotations:
    flannel.alpha.coreos.com/public-ip: <control node IP>
    projectcalico.org/IPv4Address: <control node IP>/24
    rke.cattle.io/external-ip: <control node IP>
    rke.cattle.io/internal-ip: <control node IP>
  nodeLabels:
  nodePlan:
    plan:
      files:
      processes:
        kube-proxy:
          binds:
          command:
          - --bind-address=<control node IP>
          healthCheck:
          labels:
          volumesFrom:
        kubelet:
          binds:
          command:
          env:
          healthCheck:
          labels:
          volumesFrom:
        nginx-proxy:
          args:
          - CP_HOSTS=<worker1 IP>,<worker2 ip>
          command:
          env:
          - CP_HOSTS=<worker1 IP>,<worker2 ip>
          labels:
        service-sidekick:
          command:
          labels:
        share-mnt:
          args:
          binds:
  nodeTemplateSpec:
  requested:
  rkeNode:
    address: <control node IP>
    labels:
    role:
I am using that data to generate DNS entries for all nodes, even.
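Roughly like this (a sketch, assuming jq is available; the cluster and node IDs are placeholders):

```shell
# On the cluster Rancher runs on: list the managed Node objects for one
# downstream cluster and print node name + first internal address, one per line,
# ready to feed into DNS record generation.
kubectl get nodes.management.cattle.io -n c-<API id of the cluster> -o json \
  | jq -r '.items[] | "\(.metadata.name) \(.status.internalNodeStatus.addresses[0].address)"'
```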

rough-farmer-49135

06/08/2023, 3:39 PM
I was poking around when my cluster couldn't connect, trying to find what hostname or IP it was using to reach the cluster. I had three control nodes and a DNS entry with three A records for it to use. I was trying to find out whether it was just using one IP, trying all three, or using the DNS hostname (which I don't think it had a chance to know, since I deployed via Rancher and never gave it that hostname). I found spots where it remembered control plane & worker IPs, but nothing about what IP it was using to connect to the cluster (except the cluster-internal IP of 10.43.0.1, which is just the default cluster-internal service IP for Kubernetes). After a few days of that and asking on here, I was told that it does what Trent V says above.

bulky-sunset-52084

06/08/2023, 8:18 PM
It's a client-initiated connection. Rancher fills in the metadata in the machine CRD AFTER the machine registers. The reason it's client-initiated is that Rancher doesn't know the node IP address until the node tells Rancher. IPAM is generally handled externally to Rancher, with something like a DHCP server.
r

red-waitress-37932

06/12/2023, 7:31 AM
rancher sets up that cluster via SSH. can the IP from that initial setup not be used for the connection?
b

bulky-sunset-52084

06/12/2023, 1:24 PM
Node IP != Cluster IP. Think about it like this: in order to talk from an external service (Rancher) to an internal one (cattle-cluster-agent), you need:
• a layer 4 load balancer for all the kubernetes nodes
• an ingress controller installed and an ingress configured
In the Rancher cluster we have both of those. In your downstream cluster, you may or may not - that's up to you. The address of the application would not be the same as the address of the node; it would likely need to be the address of the external load balancer. On the other hand, since Rancher is already set up, its address isn't really subject to change. This also means you'd need to maintain that endpoint inside each downstream cluster rather than just one endpoint (Rancher).

red-waitress-37932

06/12/2023, 1:29 PM
so you're saying this design choice was made because by having the clusters instead of rancher initiate the connection, you can cover the use case where there is a kubernetes cluster without any kubernetes API endpoint?
didn't even know that was possible
I mean you'd need some access in order to use kubectl
I'm not sure why you'd need an ingress controller btw