# neuvector-security
h
I also do not have ports 11443 and 10443 up and listening… so clearly I missed something, but I am not sure what
q
Two-for-one answer… 🙂
set
controller.federation.mastersvc.type
and
controller.federation.managedsvc.type
to [NodePort | LoadBalancer | ClusterIP]
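For reference, a minimal sketch of the same settings as Helm flags (value paths are those cited above; the release name and namespace are taken from later in this thread):
```sh
helm upgrade --install neuvector neuvector/core \
  --namespace cattle-neuvector-system \
  --set controller.federation.mastersvc.type=NodePort \
  --set controller.federation.managedsvc.type=NodePort
```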
h
thank you! I will give it a try
Using the values.yaml below, does this look correct? I want to make sure the federation part is under controller. The tls and secret setup appears to be working under manager (with my custom cert).
```yaml
k3s:
  enabled: true
controller:
  replicas: 1
  pvc:
    enabled: true
    storageClass: vsphere-csi
  federation:
    mastersvc:
      type: NodePort
cve:
  scanner:
    replicas: 1
manager:
  ingress:
    enabled: true
    host: neuvector.mydomain.org
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    tls: true
    secretName: neuvector-tls-cert-secret
```
q
```yaml
    managedsvc:
      type: NodePort
```
too
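Put together, the federation section of values.yaml would nest like this (a sketch assembled from the snippets above):
```yaml
controller:
  federation:
    mastersvc:
      type: NodePort
    managedsvc:
      type: NodePort
```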
I leave those on all the time. 😄
h
I am stuck at the moment… with the below in values.yaml, I still do not see
neuvector-service-controller-fed-master
or
neuvector-service-controller-fed-worker
...
```yaml
k3s:
  enabled: true
controller:
  replicas: 1
  pvc:
    enabled: true
    storageClass: vsphere-csi
  federation:
    mastersvc:
      type: NodePort
    managedsvc:
      type: NodePort
cve:
  scanner:
    replicas: 1
manager:
  ingress:
    enabled: true
    host: neuvector.mydomain.org
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    tls: true
    secretName: neuvector-tls-cert-secret
```
This is the exact helm command I am running:
```sh
helm install neuvector neuvector/core --namespace cattle-neuvector-system -f ./neuvector-values.yaml
```
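One thing worth checking (an assumption on my part, not confirmed in the thread): `helm install` only applies values when the release is first created, so if the federation values were added to values.yaml after the initial install, they need to be rolled out with an upgrade:
```sh
# Re-apply updated values to an existing release
helm upgrade neuvector neuvector/core \
  --namespace cattle-neuvector-system \
  -f ./neuvector-values.yaml
```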
I started with RKE2 v1.25… so I thought maybe that was too new? I uninstalled and created a new cluster with RKE2 v1.24.13. I am using the RPM method to install RKE2; could that be it? This is what I have:
```
# kubectl get po -n cattle-neuvector-system
NAME                                        READY   STATUS    RESTARTS   AGE
neuvector-controller-pod-57f9b45dcc-5kmmt   1/1     Running   0          34m
neuvector-enforcer-pod-c6tvz                1/1     Running   0          34m
neuvector-enforcer-pod-p8nfl                1/1     Running   0          34m
neuvector-enforcer-pod-xn76j                1/1     Running   0          34m
neuvector-manager-pod-7b6d594bdd-8mdff      1/1     Running   0          34m
neuvector-scanner-pod-7f4556856f-nlxfm      1/1     Running   0          34m
```
q
Those won’t show up as pods; they are services. Check for the services in that namespace.
h
```
# kubectl get svc -n cattle-neuvector-system
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
neuvector-service-webui                NodePort    10.43.201.156   <none>        8443:31695/TCP                  37m
neuvector-svc-admission-webhook        ClusterIP   10.43.147.128   <none>        443/TCP                         37m
neuvector-svc-controller               ClusterIP   None            <none>        18300/TCP,18301/TCP,18301/UDP   37m
neuvector-svc-controller-fed-managed   NodePort    10.43.161.107   <none>        10443:32590/TCP                 37m
neuvector-svc-controller-fed-master    NodePort    10.43.47.223    <none>        11443:32279/TCP                 37m
neuvector-svc-crd-webhook              ClusterIP   10.43.234.163   <none>        443/TCP                         37m
```
q
That looks like good news to me
As long as the two clusters in question can communicate with each other at those ports, you should be in good shape to federate them with neuvector
Remember to jot down those two IP addresses for those services. You will need them as part of the setup. :-)
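A quick way to capture them (a sketch; the service names are from the output above):
```sh
# Print name, cluster IP, and port for each federation service
kubectl get svc neuvector-svc-controller-fed-master neuvector-svc-controller-fed-managed \
  -n cattle-neuvector-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.clusterIP}{":"}{.spec.ports[0].port}{"\n"}{end}'
```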
h
so from above, the 2 cluster IPs:
```
10.43.161.107   # neuvector-svc-controller-fed-managed (port 10443)
10.43.47.223    # neuvector-svc-controller-fed-master  (port 11443)
```
a
Here’s a working example from one of our clusters..
when I promote the master, I use the fed-master external-IP on 11443
so you may have some routing to do to get to the cluster-ip…
then when you join a managed cluster to the primary (master) cluster… copy the token from the master and paste it into the token field first… it will populate the primary server and primary cluster port… then add the controller-fed-worker IP address.
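Before joining, it may be worth verifying that the managed cluster can actually reach the primary’s federation port (a sketch; `<master-ip>` and `<master-node-ip>` stand for whatever addresses route to the fed-master service, using the ports from the output above):
```sh
# TCP reachability check from the managed cluster to the primary
nc -zv <master-ip> 11443          # via the service port
nc -zv <master-node-ip> 32279     # via the NodePort shown earlier
```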