# harvester
b
I was looking at the port requirements for Harvester, but it's not clear which ports are required for the VIP/management address. From the bottom of the description it sounds like only `80`/`443`, but I imagine if you're using a proxy to get to kubectl you'd want `6443` and `9345` as well. Are there any other missing ports for that?
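As a point of reference, here is a minimal sketch of forwarding those two ports through an nginx `stream` proxy to the management VIP, assuming plain TCP passthrough; the VIP address is a placeholder, not something from the docs or this thread.

```nginx
# Minimal sketch with a placeholder VIP address (192.168.0.131).
stream {
    server {
        listen 6443;                     # kube-apiserver, used by kubectl
        proxy_pass 192.168.0.131:6443;   # placeholder Harvester management VIP
    }
    server {
        listen 9345;                     # RKE2 supervisor / node-registration port
        proxy_pass 192.168.0.131:9345;
    }
}
```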
b
There have to be more than are required for Rancher. I have an external ingress node (nginx and NAT) as the default gateway for my network, configured to load-balance across the RKE2 control nodes for Rancher and pointing at the first deployed node in my Harvester cluster. When I launch the web or serial console I get nothing through the ingress, while the local desktop can launch both just fine. Port 663? And/or others for this?
b
Web VNC is working fine for me.
I only opened those four ports.
b
Do you have nginx redirecting, or an SSH tunnel, to get from external to Harvester?
b
Neither. HAProxy forwards to a private subnet/address.
b
Can you share your config (with the external IP obfuscated)?
b
Sure.
```haproxy
frontend hydra
  bind 169.<publicIP>:80
  bind 169.<publicIP>:443
  bind 169.<publicIP>:6443
  bind 169.<publicIP>:9345
  default_backend hydra_backend

backend hydra_backend
  server hydra1170 10.6.<private VLAN VIP> check port 6443
```
b
tyvm.
oof. you must have done something for the 443 cert too.
b
?
Besides cert-manager and Let's Encrypt?
b
I've read that in the docs, but I was just pushing the RKE2 self-signed cert to the front in nginx. That worked for Rancher over RKE2. Will read again and attempt it.
In nginx I had copied these locally:

```nginx
ssl_certificate /etc/ssl/certs/serving-kube-apiserver.crt;
ssl_certificate_key /etc/ssl/certs/serving-kube-apiserver.key;
```

from

```
/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
```
bad idea?
b
I'm not sure the kube-apiserver uses the same cert that the web front end uses by default.
b
Given that I don't control when Harvester and Rancher rotate their internal keys, that approach probably has more than a few holes in it.
b
Plus if you're just proxying I'm not sure what that gains you.
b
thanks again
b
You don't want to re-encrypt Harvester VIP traffic as your Rancher instance.
That would break all kinds of things.
Some of that traffic, Rancher will proxy for you via its own GUI.
That's not to be confused with the Rancher that's built into Harvester.
b
Well, I did have nginx serving two front-ends in what I thought was a clean way. One cert/key pair was copied from the HA RKE2 cluster where Rancher was deployed, accessed by a URL whose CNAME points to the nginx instance; another cert/key pair was copied from the RKE2 Harvester cluster, accessed by a URL whose CNAME points to the Harvester VIP. While both front-ends worked for reaching the Rancher and Harvester web UIs internally and externally (with local external /etc/hosts entries mapping the CNAMEs to the externally visible IP), I couldn't get the Rancher Harvester UI extension past "pending", and as I started this reply, the web and serial consoles were not working. So, time for me to invest more energy in your proven, working path with HAProxy, cert-manager, and Let's Encrypt. Thanks again.
b
If you can tell nginx to proxy and not re-encrypt, I don't think you'll need to add HAProxy to your stack (unless you want HA and want it available via two different boxes). I think you can just add port 80 and let the clusters handle the certs.
Then you just install cert-manager on the downstream cluster, add the annotations to the ingresses there, and automagic... signed, encrypted traffic.
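For illustration, a minimal sketch of that "proxy without re-encrypting" idea using nginx's `stream` module; the upstream address is a placeholder, not something from this thread.

```nginx
# Minimal sketch, placeholder addresses only.
# nginx forwards raw TCP and never terminates TLS, so the cluster's own
# ingress presents its certificate and cert-manager can manage it there.
stream {
    server {
        listen 443;
        proxy_pass 10.6.0.20:443;   # placeholder Harvester VIP
    }
    server {
        listen 80;                  # plain HTTP, e.g. for HTTP-01 challenges
        proxy_pass 10.6.0.20:80;
    }
}
```

Because the TLS session is passed through end to end, whatever certificate the downstream ingress serves is exactly what the browser sees.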
b
Well, I'll be damned. You massively simplified my nginx config, solved my freaking web-console issue and got my rancher Harvester UI integration working. YOU ROCK.
b
🎉
b
In case it's helpful for others, here are the nginx config and CNAMEs used:
- Internally, `rancher.lab` is the CNAME for the nginx box doing load-balancing for the HA RKE2 cluster, and `harvester.lab` is the CNAME for the Harvester VIP. Both CNAMEs had to be in use at deployment time for the cluster-managed self-signed certs to work.
- Externally, I modified `/etc/hosts` so both hostnames point to the external IP address of the nginx box.
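The nginx config itself isn't reproduced above, so the following is only a rough sketch of what an SNI-based passthrough for those two hostnames could look like; the backend addresses and upstream names are placeholders, not taken from this thread.

```nginx
# Rough sketch only: hostnames come from the thread, addresses are placeholders.
stream {
    # Route by SNI so one box can front both clusters without terminating TLS.
    map $ssl_preread_server_name $backend {
        rancher.lab    rancher_nodes;
        harvester.lab  harvester_vip;
        default        harvester_vip;
    }

    upstream rancher_nodes {
        server 10.6.0.11:443;   # placeholder HA RKE2 control-plane nodes
        server 10.6.0.12:443;
        server 10.6.0.13:443;
    }

    upstream harvester_vip {
        server 10.6.0.20:443;   # placeholder Harvester VIP
    }

    server {
        listen 443;
        ssl_preread on;         # read the SNI name without decrypting
        proxy_pass $backend;
    }
}
```

Since nothing is decrypted in the middle, each cluster still presents its own certificate to the browser.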
The only remaining issue is with the "Working with Ingress" quick start. It's deployed over port 80 and uses the Rancher UI path with /hello. Chrome hates going back to port 80 after using 443 for the base of a URL, so you have to test that app with a separate browser 😄