# neuvector-security
h
yes - that's correct (that's what I have)
do you have taints set up on CP/etcd nodes?
w
yes, the CP nodes have taints. The NV deployments also have the tolerations for the CP taint. I just didn't expect the toleration to be there by default
a little bit of a sanity check, a little bit of "did I do that?"
h
These are the taints I have on CP/etcd nodes; my cluster was created from the Rancher UI with 3 x CP/etcd nodes and 3 x worker (agent) nodes
```
kubectl taint nodes <master node1> node-role.kubernetes.io/controlplane=true:NoSchedule
kubectl taint nodes <master node1> node-role.kubernetes.io/etcd=true:NoExecute
```
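(roughly how those taints then look in the node object, a quick sketch using standard Kubernetes taint fields - the node name is whatever you passed above):
```yaml
# sketch: the taints above as they appear in the node spec
# (kubectl get node <master node1> -o yaml)
spec:
  taints:
  - key: node-role.kubernetes.io/controlplane
    value: "true"
    effect: NoSchedule
  - key: node-role.kubernetes.io/etcd
    value: "true"
    effect: NoExecute
```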
w
yep I have the same:
```
node-role.kubernetes.io/control-plane : true
node-role.kubernetes.io/etcd : true
node-role.kubernetes.io/master : true
```
etc
h
yeah that's odd then - if you see controller and manager pods on CP/etcd nodes
Or - it could be my setup is broken 😆
I also have 3 identical clusters and none of them have controller or manager pods on CP/etcd nodes
w
There's also a toleration on those deployments for
```
node-role.kubernetes.io/control-plane
```
so I was a bit confused about whether that was there by default
spun up a default install, and it looks like it's probably something I or another operator did
🤦
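(for reference, a sketch of how that toleration looks once rendered into the controller/manager pod spec - the deployment name is an assumption here, check your own with kubectl get deploy -o yaml):
```yaml
# sketch: a control-plane toleration as it would appear in the rendered
# Deployment (the name neuvector-controller-pod is an assumption)
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```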
h
I do not have that on my deployments for controller and manager
w
I can see it in the helm values
joy
rubber duck
Copy code
controller:
  apisvc:
    type: ClusterIP
  federation:
    mastersvc:
      type: ClusterIP
  nodeSelector:
    <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>: "true"
  pvc:
    capacity: 10Gi
    enabled: true
    storageClass: nfs-client
  tolerations:
  - effect: NoSchedule
    key: <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>
    operator: Exists
cve:
  scanner:
    nodeSelector:
      <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>: "true"
    tolerations:
    - effect: NoSchedule
      key: <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>
      operator: Exists
docker:
  enabled: false
enforcer:
  tolerations:
  - effect: NoSchedule
    key: <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>
    operator: Exists
  - effect: NoSchedule
    key: <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>
k3s:
  enabled: true
manager:
  nodeSelector:
    <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>: "true"
  svc:
    type: ClusterIP
  tolerations:
  - effect: NoSchedule
    key: <http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>
    operator: Exists
that's the culprit
thanks!
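for reference, a rough sketch of the same values with the control-plane pinning pulled out - keeping the enforcer toleration so the DaemonSet still covers the tainted CP/etcd nodes, and assuming the full file gets re-applied with helm upgrade rather than layered as a partial override:
```yaml
# sketch only: the values above minus the controller/manager/scanner
# nodeSelector and tolerations that pinned those pods to CP nodes
controller:
  apisvc:
    type: ClusterIP
  federation:
    mastersvc:
      type: ClusterIP
  pvc:
    capacity: 10Gi
    enabled: true
    storageClass: nfs-client
docker:
  enabled: false
enforcer:
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
k3s:
  enabled: true
manager:
  svc:
    type: ClusterIP
```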
h
NP - good luck
do you use federation?
w
yes, we do
is that related?
h
no - not related
I am curious... In the NV GUI (with SAML auth), is there a way to do access control based on clusters with federation? e.g. in a situation where we want users in group-a to access clusters a, b and c, and users in group-b to access clusters x, y and z - is that possible?
I have not found a solution for this and have debated opening a ticket - just have not had time to do it
w
There's still a little bit of a question about how to handle that, but you should be able to control it with cluster roles (in Rancher). There should be a Cluster Role for NeuVector or NeuVector Admin that was added
h
oh - let me check that - thanks!
w
Here is a long-open PR for that info in the docs: https://github.com/rancher/docs/pull/4280
TJ is no longer with the team, but here's a repo for it from him: https://github.com/horantj/rancher-nv-rbac
originally you had to do it manually, but I think the roles get added as part of the NeuVector install now
essentially, the proxied API request will fail if the user doesn't have permission to access the NV API service through the kube proxy (or some such), and that failure is treated as "user can't access that cluster, don't show it"
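(a rough sketch of the kind of RBAC object involved - the role name, namespace, and NV service name below are my assumptions, the real manifests are in the rancher-nv-rbac repo above):
```yaml
# sketch only: lets members of a group reach the NV UI/API service via the
# API server's services/proxy subresource; all names here are assumptions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: neuvector-ui-access           # assumed name
  namespace: cattle-neuvector-system  # assumed NV namespace
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["neuvector-service-webui"]  # assumed NV UI service name
  verbs: ["get", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: neuvector-ui-access-group-a
  namespace: cattle-neuvector-system
subjects:
- kind: Group
  name: group-a                       # the auth group being granted access
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: neuvector-ui-access
  apiGroup: rbac.authorization.k8s.io
```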
h
oh nice
w
(part of me, though, says please open a support ticket and advocate for the feature. you'll notice that you need cluster admin rights to do anything useful, and the security guys shouldn't always need that level of access)
(Grand Master @quaint-candle-18606 can correct me if I'm wrong or if that's changed)
q
I vote for firing off that ticket
h
So, I did open a ticket... I reviewed the GH pages above... If I understand correctly, that is RBAC when using ranchersso. One major issue is - what if there is no ranchersso, like when NV is not deployed via Rancher Apps? NV has auth options (LDAP/SAML/OIDC), so how can we set up access roles when users are authenticating with these options directly into the NV UI?
hopefully there is more than just me asking for this 😆 🤞