
hundreds-evening-84071

05/18/2023, 10:27 PM
Hey all; I have NeuVector v5.1.3 running on an RKE2 cluster. The cluster has 3 nodes... The install was done via a helm command... I do have the web UI for NeuVector with a valid SSL cert, and that does work. I am trying to set up Enterprise multi-cluster management per this doc: https://open-docs.neuvector.com/navigation/multicluster But I do not see any external IPs from the command below... Trying to figure out what I missed?
# kubectl get svc -n cattle-neuvector-system
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
neuvector-service-webui           NodePort    10.43.84.36     <none>        8443:31179/TCP                  103m
neuvector-svc-admission-webhook   ClusterIP   10.43.135.174   <none>        443/TCP                         103m
neuvector-svc-controller          ClusterIP   None            <none>        18300/TCP,18301/TCP,18301/UDP   103m
neuvector-svc-crd-webhook         ClusterIP   10.43.142.225   <none>        443/TCP                         103m
I also do not have ports 11443 and 10443 up and listening... so clearly I missed something... But I am not sure what

quaint-candle-18606

05/19/2023, 8:47 PM
Two-for-one answer… 🙂
Set
controller.federation.mastersvc.type
and
controller.federation.managedsvc.type
to [NodePort | LoadBalancer | ClusterIP]
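(For reference, a sketch of applying those two settings without editing values.yaml, assuming the release name `neuvector` and namespace `cattle-neuvector-system` used elsewhere in this thread:)

```shell
# Sketch: enable both federation services on an existing install.
# Release name and namespace match the helm command used later in
# this thread; adjust for your cluster.
helm upgrade neuvector neuvector/core \
  --namespace cattle-neuvector-system \
  --reuse-values \
  --set controller.federation.mastersvc.type=NodePort \
  --set controller.federation.managedsvc.type=NodePort
```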

hundreds-evening-84071

05/19/2023, 8:53 PM
thank you! I will give it a try
Using the values.yaml does this look correct? Want to make sure federation part is under controller? tls and secret setup appears to be working under manager (with my custom cert).
k3s:
  enabled: true
controller:
  replicas: 1
  pvc:
    enabled: true
    storageClass: vsphere-csi
  federation:
    mastersvc:
      type: NodePort
cve:
  scanner:
    replicas: 1
manager:
  ingress:
    enabled: true
    host: neuvector.mydomain.org
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    tls: true
    secretName: neuvector-tls-cert-secret

quaint-candle-18606

05/19/2023, 9:14 PM
managedsvc:
      type: NodePort
too
I leave those on all the time. 😄

hundreds-evening-84071

05/22/2023, 2:12 PM
I am stuck at the moment... with the values.yaml below, I still do not see
neuvector-service-controller-fed-master
or
neuvector-service-controller-fed-worker
...
k3s:
  enabled: true
controller:
  replicas: 1
  pvc:
    enabled: true
    storageClass: vsphere-csi
  federation:
    mastersvc:
      type: NodePort
    managedsvc:
      type: NodePort
cve:
  scanner:
    replicas: 1
manager:
  ingress:
    enabled: true
    host: neuvector.mydomain.org
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    tls: true
    secretName: neuvector-tls-cert-secret
This is the exact helm command I am running:
helm install neuvector neuvector/core --namespace cattle-neuvector-system -f ./neuvector-values.yaml
I started with RKE2 v1.25... so I thought maybe that was too new? I uninstalled and created a new cluster with RKE2 v1.24.13. I am using the RPM method to install RKE2; could that be it? This is what I have:
# kubectl get po -n cattle-neuvector-system 
NAME                                        READY   STATUS    RESTARTS   AGE
neuvector-controller-pod-57f9b45dcc-5kmmt   1/1     Running   0          34m
neuvector-enforcer-pod-c6tvz                1/1     Running   0          34m
neuvector-enforcer-pod-p8nfl                1/1     Running   0          34m
neuvector-enforcer-pod-xn76j                1/1     Running   0          34m
neuvector-manager-pod-7b6d594bdd-8mdff      1/1     Running   0          34m
neuvector-scanner-pod-7f4556856f-nlxfm      1/1     Running   0          34m

quaint-candle-18606

05/22/2023, 2:14 PM
Those won’t show up as pods; they are services. Check for the services in that namespace.

hundreds-evening-84071

05/22/2023, 2:14 PM
# kubectl get svc -n cattle-neuvector-system
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
neuvector-service-webui                NodePort    10.43.201.156   <none>        8443:31695/TCP                  37m
neuvector-svc-admission-webhook        ClusterIP   10.43.147.128   <none>        443/TCP                         37m
neuvector-svc-controller               ClusterIP   None            <none>        18300/TCP,18301/TCP,18301/UDP   37m
neuvector-svc-controller-fed-managed   NodePort    10.43.161.107   <none>        10443:32590/TCP                 37m
neuvector-svc-controller-fed-master    NodePort    10.43.47.223    <none>        11443:32279/TCP                 37m
neuvector-svc-crd-webhook              ClusterIP   10.43.234.163   <none>        443/TCP                         37m

quaint-candle-18606

05/22/2023, 2:15 PM
That looks like good news to me
As long as the two clusters in question can communicate with each other at those ports, you should be in good shape to federate them with neuvector
Remember to jot down those two IP addresses for those services. You will need them as part of the setup. :-)

hundreds-evening-84071

05/22/2023, 2:19 PM
So from above, the 2 cluster IPs:
10.43.161.107
10.43.47.223
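(Worth noting: those 10.43.x.x addresses are internal cluster IPs. With NodePort services, the endpoint another cluster actually reaches is a node address plus the NodePort — a sketch of pulling those out, assuming the service names shown above:)

```shell
# Sketch: find the externally reachable endpoint for the fed-master
# service (NodePort), using the service name shown in this thread.
kubectl get svc -n cattle-neuvector-system \
  neuvector-svc-controller-fed-master \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
# Pair that port with a node address from:
kubectl get nodes -o wide   # see the INTERNAL-IP / EXTERNAL-IP columns
```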

acoustic-sugar-94270

05/22/2023, 5:52 PM
Here’s a working example from one of our clusters…
When I promote the master, I use the fed-master external IP on 11443,
so you may have some routing to do to get to the cluster IP…
Then, when you join a managed cluster to the primary (master) cluster: copy the token from the master and paste it into the token field first… it will populate the primary server and primary cluster port… Then add the controller-fed-worker IP address.
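(Before attempting the join, a quick connectivity check between the clusters can save time — a sketch with placeholder addresses, using the federation ports from this thread:)

```shell
# Sketch: from the managed cluster, confirm the primary's fed-master
# port is reachable (replace the placeholder with a real node/LB IP).
curl -kv https://<primary-node-or-lb-ip>:11443 --max-time 5
# And from the primary, confirm the managed cluster's fed-managed port:
curl -kv https://<managed-node-or-lb-ip>:10443 --max-time 5
```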