# neuvector-security
f
These are the only lines that I'm seeing in the NeuVector Manager log:
q
Just to check, is your user a Rancher admin or cluster owner? Those are the only Rancher roles that are allowed for SSO in this version.
The saml-g INFO message is normal
f
I'm using the default admin user that is generated when you install Rancher to access the Rancher UI. As additional information, I manually set Rancher's admin password during the installation of Rancher with the Helm chart, using the --set bootstrapPassword flag.
Even doing a port-forward to the NeuVector WebUI at the Kubernetes level, I cannot access the NeuVector WebUI with the default admin:admin credentials:
Even stranger: I can log in to the NeuVector Manager from the CLI with admin:admin, and I even created a new admin user, aguida, but I cannot log in to the Web UI:
r
the webui is still not allowing admin|admin? that is indeed peculiar šŸ¤”
q
I can't reproduce so far on k3s or rke2 using the Rancher chart and Longhorn. Does it work OK if you don't use a Longhorn PVC?
f
No, even if I don't use a Longhorn PVC the error is the same. Some additional information: my K3s deployment is without Traefik (I'm using NGINX as the Ingress controller) and without ServiceLB (I'm using kube-vip and MetalLB), and it is an HA cluster of 3 nodes that are masters and workers at the same time. K3s is installed on Ubuntu 22.04, with k3sup.
q
Definitely strange. Can you check the controller logs when trying to log in, vs the manager logs, and maybe enable debug mode if there is nothing special? Nothing about your arch seems like it would cause an issue. And since the CLI works, there is pretty clearly not a connectivity problem between the manager and controllers.
f
Hi. Here are the logs, with debug mode enabled. The logs from the Manager:
2022-06-28 14:54:39,170|INFO |MANAGER|com.neu.api.AuthenticationService(apply:824): post path auth
2022-06-28 14:54:39,256|WARN |MANAGER|com.neu.api.AuthenticationService(apply:828): Status: 401 Unauthorized
Body: {"code":3,"error":"Authentication failed","message":"Authentication failed"}
2022-06-28 14:54:39,262|INFO |MANAGER|com.neu.api.AuthenticationService(apply:147): saml-g: servername is empty
The logs from the Leader Controller:
2022-06-28T14:54:38.808|DEBU|CTL|cache.UpdateConnections: - agent=1d1b9febe7e0 app=1001 bytes=592 client=Host:neoappl clientIP=10.12.2.222 clientPort=0 extIP=false external=false first=1656428055 host=neoappliance-k8s-02:6dce4d56-1a13-a2b1-74e8-b16dd06f283e ingress=true ipproto=6 last=1656428055 local=true network= policyAction=1 policyID=10005 policyViolates=0 scope=global server=793ca6076462 serverIP=192.168.193.66 serverPort=80 sessions=0 threatID=0 threatSev=0 toSidecar=false xff=false
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexLock: Acquire ... - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexUnlock: Released - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.UpdateConnections: - agent=1d1b9febe7e0 app=0 bytes=9760 client=c16edcf22e89 clientIP=192.168.193.125 clientPort=0 extIP=false external=false first=1656428072 host=neoappliance-k8s-02:6dce4d56-1a13-a2b1-74e8-b16dd06f283e ingress=false ipproto=6 last=1656428072 local=false network= policyAction=0 policyID=0 policyViolates=0 scope=global server=nv.ip.kubern serverIP=10.43.0.1 serverPort=443 sessions=0 threatID=0 threatSev=0 toSidecar=false xff=false
2022-06-28T14:54:38.808|DEBU|CTL|cache.graphMutexUnlock: Released - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:38.808|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:42.706|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=1100956
2022-06-28T14:54:42.706|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=1100956
2022-06-28T14:54:42.707|DEBU|CTL|cache.graphMutexLock: Acquire ... - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexLock: Acquire ... - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexUnlock: Released - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.preProcessConnectPAI: Ignore ingress connection from nv device - client=192.168.67.168 server=192.168.164.101
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexRLock: Acquire ... - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexRUnlock: Released - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexLock: Acquire ... - goroutine=318
2022-06-28T14:54:42.707|DEBU|CTL|cache.cacheMutexUnlock: Released - goroutine=318
Additional tests that I did on another server (CentOS 7), always with the same result, "Authentication Failed", when I try to access NeuVector from the Rancher UI:
ā€¢ Installed K3s single node, without "cluster" mode (no etcd, using SQLite)
ā€¢ Installed K3s single node, using Traefik instead of the NGINX Ingress Controller
I don't know what other things I can try. Maybe not using Calico, but that is not an option in my production environment. I cannot try the original LoadBalancer from K3s (I'm using MetalLB), because it is not working on CentOS 7 (there is a confirmed bug on GitHub).
q
Do you see anything like this in your controller debug log? This is what a failed login looks like. I don't even see the login attempt (the first line) in what you pasted above, as if the auth connection does not even arrive.
2022-06-29T17:29:18.382|DEBU|CTL|rest.handlerAuthLogin: - URL=<https://neuvector-svc-controller.cattle-neuvector-system:10443/v1/auth>
2022-06-29T17:29:18.382|DEBU|CTL|rest.getAuthServersInOrder: - auth-order=[]
2022-06-29T17:29:18.387|DEBU|CTL|rest.handlerAuthLogin: - server=local
2022-06-29T17:29:18.391|INFO|CTL|rest.handlerAuthLogin: - error=Wrong password user=admin
2022-06-29T17:29:18.391|ERRO|CTL|rest.handlerAuthLogin: User login failed - msg=Wrong password user=admin
2022-06-29T17:29:18.392|DEBU|CTL|rest.writer.WriteHeader: 401 - Method=POST URL=<https://neuvector-svc-controller.cattle-neuvector-system:10443/v1/auth>
If you have more than one controller, you might need to look at all of them, as the connection "should" go through the controller service and wouldn't necessarily hit the leader.
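A minimal sketch of checking every replica for the login attempt, assuming the namespace and pod label a Rancher-chart install typically uses (cattle-neuvector-system, app=neuvector-controller-pod); the grep filter is demonstrated on a sample line from the failed-login trace below:

```shell
# Cluster-side commands (assumed names; adjust to your install):
#   for p in $(kubectl -n cattle-neuvector-system get pods \
#       -l app=neuvector-controller-pod -o name); do
#     kubectl -n cattle-neuvector-system logs "$p" | grep -i handlerAuthLogin
#   done
# The same filter applied to one sample controller log line:
line='2022-06-29T17:29:18.391|ERRO|CTL|rest.handlerAuthLogin: User login failed - msg=Wrong password user=admin'
echo "$line" | grep -io 'handlerauthlogin' | head -n1   # prints handlerAuthLogin
```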
r
Can you hit the NeuVector UI directly at https://<the.ui.service.ip>:8443 ?
f
@ripe-actor-83292, is the NeuVector UI internally HTTPS or HTTP on port 8443?
r
HTTPS
f
Ok, let me try with a kubectl port-forward
r
thanks for playing along. šŸ™‚ This is definitely a weird one.
šŸ‘ 1
Alejandro, you mentioned prod clusters. Are you perhaps doing a PoC for a future Rancher/NeuVector support contract? Just curious.
f
No @ripe-actor-83292, the word "prod" here is maybe an overstatement. It basically refers to the idea that Flannel is not an option as our network engine for a future prod environment. We are still testing what we are going to use for container workloads, and Rancher/K3s/NeuVector is clearly an option, but we still don't know for sure.
q
I'm looking into the behavior when the Rancher auth provider is enabled. It could be the culprit if the CLI is OK (I also verified that the CLI and UI should use the same API endpoints for authentication)
h
Curious, if you open your browser dev tools and try to log in, do you see any console errors? Or anything interesting in the returned responses?
ā˜ļø 1
a
Curious, can you post your values.yaml please?
f
@ripe-actor-83292, I cannot access the UI even doing a kubectl port-forward; I get a Connection Reset when I try to access it from the browser:
root@appliance-k8s-01:/home/aguida# kubectl port-forward service/neuvector-service-webui -n cattle-neuvector-system 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
Handling connection for 8443
Handling connection for 8443
E0629 18:36:17.344121 2335862 portforward.go:406] an error occurred forwarding 8443 -> 8443: error forwarding port 8443 to pod 4d0d9e20c0c56b4f6a26e9e3ebc997be715f11798662f801b60f6069900dadfb, uid : failed to execute portforward in network namespace "/var/run/netns/cni-e8f9fd3b-9834-d83b-0260-a767438f30e6": read tcp4 127.0.0.1:54932->127.0.0.1:8443: read: connection reset by peer
E0629 18:36:17.344927 2335862 portforward.go:234] lost connection to pod
@hallowed-ocean-20951, nothing interesting, only the 401 Unauthorized answer
q
This is due to a bug in kubectl 1.23+. If you have a 1.22 kubectl, it should work
f
@quiet-fountain-46593, do I need to have Kubernetes 1.22 too, or should just replacing the kubectl binary with the 1.22 version work?
q
Just using the different kubectl should work.
They changed the behavior, and "connection reset by peer" now triggers kubectl to halt the port-forward, even though in some cases it can be normal
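A small sketch of that version boundary (the >=1.23 cutoff is taken from the discussion above; the download URL follows the standard dl.k8s.io pattern):

```shell
# Workaround: fetch a 1.22 kubectl next to the system one, e.g.:
#   curl -LO https://dl.k8s.io/release/v1.22.17/bin/linux/amd64/kubectl
#   chmod +x kubectl && ./kubectl port-forward ...
# The "is my kubectl affected?" check, as pure shell:
ver="v1.23.4"                        # e.g. from: kubectl version --client
minor="${ver#v1.}"; minor="${minor%%.*}"
if [ "$minor" -ge 23 ]; then echo "affected"; else echo "not affected"; fi
```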
can you check the
- name: RANCHER_EP
value for the controller deployment, and make sure controller pods can resolve/connect to it?
this is injected by the rancher helm deploy.
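A sketch of that check, assuming the deployment name the NeuVector chart typically uses (neuvector-controller-pod); the last step just demonstrates peeling the value off a sample NAME=value env entry:

```shell
# Read the injected endpoint off the controller deployment:
#   kubectl -n cattle-neuvector-system get deploy neuvector-controller-pod \
#     -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="RANCHER_EP")].value}'
# And verify resolution from inside a controller pod:
#   kubectl -n cattle-neuvector-system exec deploy/neuvector-controller-pod -- \
#     nslookup <rancherFQDN>
# Stripping the name off a sample "NAME=value" env entry:
env_sample='RANCHER_EP=https://rancherFQDN'
echo "${env_sample#RANCHER_EP=}"
```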
f
@ripe-actor-83292, I can't log in with port-forward, with the default admin/admin credentials:
r
ok. check some of the other replies/suggestions here
šŸ‘ 1
q
will try and reproduce myself with an unreachable RANCHER_EP as well, and see if the behavior is the same
šŸ‘ 1
f
@quiet-fountain-46593, RANCHER_EP inside neuvector-controller resolves to https://rancherFQDN, where rancherFQDN is the FQDN that I configured to access Rancher when I installed it with the Helm chart. The resolution of that FQDN is only possible from my machine, where I added an "IP FQDN" line to my local hosts file. Running nslookup inside neuvector-controller, it cannot resolve the FQDN. What I don't understand, assuming that this may be the problem preventing SSO to NeuVector from Rancher, is why I cannot log in to NeuVector with the default admin/admin credentials via kubectl port-forward, but I can log in to neuvector-controller from the neuvector-manager pod using the CLI with the default credentials. Weird šŸ˜‘
q
yeah, this could be a poor failure mode of the SSO piece, or as designed
So, I can mostly reproduce. If I deploy NV via Rancher, then modify RANCHER_EP to something bogus and delete all 3 controller pods, I can then not log in via SSO or directly to the UI with created users. (admin does seem to work, but likely that's because on deploy the URL was working and some magic happens for that initial user.) I suspect that if I installed a test Rancher locally with a hosts entry like you did, the default admin user would also not work.
I'll bring it up internally. But in the short term, if you just install the neuvector chart outside of rancher, it should work OK and you can play around.
f
Ok @quiet-fountain-46593 , I'm going to try. Thanks all for your help.
a
Going all the way back to your first post - ā€œk3sā€ should be your selected runtime, not containerd. And what CNI are you using if not Flannel?
f
I'm using Calico @adventurous-battery-36116
r
Calico ā€¦which will present some issues with NeuVector until some time later this year. Notably, Protect mode will not work, if thatā€™s relevant to the use-case.
f
I am selecting K3s as the runtime when I install NeuVector, @adventurous-battery-36116; that was a mistake when I wrote that message
a
ok cool - just checking šŸ™‚
šŸ‘ 1
Is Calico eBPF in the picture at all?
f
@ripe-actor-83292 which is the recommended CNI to use with NeuVector, the most "compatible" one?
r
UGH! what I said was a typo; sorry. Itā€™s Cilium that has that limitation. Calico is fine
šŸ‘ 1
šŸŽÆ 1
NeuVector will use eBPF if eBPF has relevant data handy. But eBPF is not effective as a network protection tool.
a
What (if any) network policies are in place?
kubectl get NetworkPolicy -A
And from personal experience, I have noticedā€¦strangeā€¦behavior when using cgroupsv2, which is default when using Ubuntu 22.04. Any chance you can kick the tires on a 20.04 install and see if youā€™re hit with the same thing?
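One quick way to tell which cgroup version a node is on (the stat command is standard; the tiny helper just encodes the mapping):

```shell
# On the node itself:
#   stat -fc %T /sys/fs/cgroup   # "cgroup2fs" on cgroups v2, "tmpfs" on v1
cgroup_mode() { [ "$1" = "cgroup2fs" ] && echo v2 || echo v1; }
cgroup_mode cgroup2fs   # prints v2
```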
f
Network Policies @adventurous-battery-36116:
root@appliance-k8s-01:/home/aguida# kubectl get NetworkPolicy -A
NAMESPACE                   NAME                POD-SELECTOR     AGE
calico-apiserver            allow-apiserver     apiserver=true   13d
cattle-fleet-local-system   default-allow-all   <none>           13d
c
if you have CLI access, run
show user
is the admin user listed?
If so, then scale the controller to replicas=1 and enable cpath debugging; then attempt a login and look for lines matching [A|a]uthen at DEBUG level.
Also strange that you say you can authenticate with the CLI, as CLI auth is the same as WebUI auth: both are REST API calls against the controller's /v1/auth endpoint
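A sketch of exercising that shared endpoint directly; the request-body shape is an assumption based on NeuVector's public REST API docs, and the URL is the one from the debug log pasted earlier:

```shell
# Login body that (assumed, per the public API docs) both UI and CLI send:
body='{"password":{"username":"admin","password":"admin"}}'
echo "$body"
# Against a live controller (self-signed cert, hence -k):
#   curl -ks -X POST -H 'Content-Type: application/json' -d "$body" \
#     https://neuvector-svc-controller.cattle-neuvector-system:10443/v1/auth
```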
f
Hi @clean-magazine-25026. In one of my previous posts, I uploaded an image that shows the admin user and a user that I created for myself, aguida, from the CLI. So the authentication from the CLI is working right.