# neuvector-security
h
Thank you
q
did you make it??? 😄
h
I wish - too many fires to put out
q
can relate
h
Does the Rodeo get into the details of setting up a Federation Master and such? Or is that too advanced
or too much to do in 1 rodeo
q
It doesn’t go into that part; there’s only so much time
Because somebody asked about it, we did show it off a little bit in today’s rodeo.
It’s something that’s been on my to-do list for a very long time: make a video about how to do federation. It’s pretty easy, but only after you’ve seen it done once.
h
If I just have this in values.yaml, is that enough (for mastersvc)?
```yaml
controller:
  federation:
    mastersvc:
      type: NodePort
```
I need to just try that, and perhaps the other mastersvc parameters listed in the docs
q
they both need to be “turned on”
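For reference, "both" here means the two federation services in the NeuVector Helm chart values: `mastersvc` on the cluster you will promote to Fed-Master, and `managedsvc` on the cluster that will join as a worker. A minimal sketch, shown as one combined values fragment for illustration (in practice each cluster only needs its own side enabled; the service types are examples):

```yaml
controller:
  federation:
    mastersvc:
      type: NodePort     # enable on the Fed-Master cluster (port 11443)
    managedsvc:
      type: NodePort     # enable on the Fed-Worker cluster (port 10443)
```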
h
I am doing this on-prem, so I deployed MetalLB and added an IP pool range. I have 2 x 3-node RKE2 clusters, so one will be Fed-Master and the other Fed-Worker. On the Master, NV is running, and I added this yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-master
  namespace: cattle-neuvector-system
spec:
  ports:
  - port: 11443
    name: fed
    protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod
```
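As context for the "deployed metallb and added IP pool range" step mentioned above: with current MetalLB versions that is done with two CRDs. A minimal sketch assuming L2 mode; the pool name, namespace, and address range here are hypothetical placeholders to adjust for your network:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: nv-pool            # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # example range; use your own
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: nv-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - nv-pool
```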
On the Worker, same setup as above, except I added this yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-worker
  namespace: cattle-neuvector-system
spec:
  ports:
  - port: 10443
    name: fed
    protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod
```
After deploying the above yamls, I see:
- On Fed-Master, a LoadBalancer IP was added to the svc, port 11443
- On Fed-Worker, a LoadBalancer IP was added to the svc, port 10443
On the Master, in the UI, I promoted that cluster and generated a token. On the Worker, in the UI, I pasted the token and, for the controller server, entered the LB IP address I have on the Fed-Worker cluster. But this results in a timeout with an error:
q
That looks to me like your clusters cannot talk to each other via the IPs and service ports.
h
You are correct: when I use the nc command against the port from the worker, it does not connect. But it does connect locally from the same cluster.
let me do some digging
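The nc check described above can also be sketched in Python: this mimics `nc -zv <host> <port>` by attempting a full TCP handshake with a timeout. The demo listener below is a stand-in for the Fed-Master's LB IP and port 11443, which are not reachable from here:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough equivalent of `nc -zv host port`: try a full TCP handshake."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control (stand-in for the fed-master LB IP:11443).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port; imagine this is 11443
srv.listen(1)
demo_host, demo_port = srv.getsockname()

reachable = port_open(demo_host, demo_port)    # listener up -> handshake succeeds
srv.close()
unreachable = port_open(demo_host, demo_port)  # listener gone -> connection refused

print(reachable, unreachable)
```

On the actual clusters, calling `port_open("<master-lb-ip>", 11443)` from a worker node corresponds to the failing cross-cluster nc test.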