# k3s
l
MSSQL and K3s shouldn’t run on the same node in this way. MSSQL should be in a Pod. However, the last time I checked there was no good (Kubernetes-native) way of running MSSQL on Kubernetes. This sounds like a too harshly specced setup, resource-wise!
I would try default network debugging measures:
• Does the MSSQL endpoint DNS resolve?
• Is it possible to poke the 1433 MSSQL port?
And so forth.
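(A minimal sketch of those checks from inside the cluster, assuming a throwaway debug pod; the service name `mssql.default.svc.cluster.local` and the netshoot image are assumptions, not from the conversation:)

```sh
# Spin up a throwaway pod with common network tools (image choice is an assumption)
kubectl run netdebug --rm -it --image=nicolaka/netshoot -- bash

# Inside the pod: does the MSSQL endpoint resolve?
nslookup mssql.default.svc.cluster.local

# Can we reach port 1433 at the TCP level?
nc -zv mssql.default.svc.cluster.local 1433
```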
g
Thanks for the feedback! I understand what you're saying about MSSQL and K3s running on the same node. I ran MSSQL in a pod; I merely installed it natively alongside to try to rule out potential performance problems in the pod compared to a native installation. Native or as a pod unfortunately gives the same result. No DNS issues in either scenario, and making a connection is not an issue. I have managed to find out that when I port-forward my .NET pod and communicate with it directly, skipping my ingress, I have no issues.
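(For reference, the ingress-bypass test described here is a plain port-forward; the pod name, ports, and path below are placeholders:)

```sh
# Forward a local port straight to the .NET pod, skipping the ingress entirely
kubectl port-forward pod/my-dotnet-pod 8080:80

# In another shell: hit the API directly
curl -v http://localhost:8080/health
```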
l
Hmm, interesting. What ingress are you using? And you likely need to do it at the TCP level, because this is an SQL server on port 1433. Most ingresses concern themselves, in their default configuration, with HTTP traffic.
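(For context, ingress-nginx handles raw TCP through its `tcp-services` ConfigMap rather than an Ingress resource. A sketch, assuming the controller runs in the `ingress-nginx` namespace, was started with `--tcp-services-configmap` pointing at this ConfigMap, and the MSSQL Service is `default/mssql` — all of which are assumptions:)

```sh
# Map external port 1433 to the MSSQL Service via the tcp-services ConfigMap
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1433": "default/mssql:1433"
EOF

# The controller's Service also needs to expose port 1433
# (e.g. add it to the ports list of the ingress-nginx-controller Service).
```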
g
Nginx. My thought is that it's either my ingress or something in the networking of my K3s that is off; the pod seems to communicate well with MSSQL when bypassing the ingress. My next plan of attack is to try switching back to Traefik.
I don't regularly do 200 requests to my pods, but I noticed odd behaviour in my API tests: random 502s, which caused me to start investigating.
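(A quick way to reproduce that kind of intermittent failure is a plain request loop that tallies status codes; the URL is a placeholder:)

```sh
# Fire 200 requests and count the status codes to surface intermittent 502s
for i in $(seq 1 200); do
  curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/health
done | sort | uniq -c
```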
l
What’s the CNI you use in K3s?
g
Default, Flannel
l
200 requests is nothing …
g
My thought too
l
Flannel… hmm, the most basic (not that it isn't great)… I would go with Calico or Cilium.
I’m deeply into Cilium.
g
Is it an easy switch?
l
njaaa I wouldn’t say easy.
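(For the record, the switch on K3s roughly means disabling the bundled Flannel and network policy controller, then installing Cilium. A sketch assuming a single-node cluster and the upstream Helm chart; flag values may need tuning for a real setup:)

```sh
# Run K3s without its bundled CNI and network policy controller
# (these flags go into the K3s server invocation or its config file)
k3s server --flannel-backend=none --disable-network-policy

# Install Cilium via the official Helm chart
helm repo add cilium https://helm.cilium.io
helm repo update
helm install cilium cilium/cilium --namespace kube-system \
  --set operator.replicas=1   # single replica is enough on one node
```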
g
I honestly think it has to be my CNI. I can't see how my Nginx ingress is causing this, but then I'm not too experienced yet in this type of troubleshooting
I'll read up on Cilium and try to make the switch
👍 1
Initial testing shows improvement with Cilium
🦜 1
Ah, it seems that was just a moment of luck; I'm getting the same behaviour again. You might know this: it seems Cilium might be able to help me diagnose a little better than traditional tooling like tcpdump?
l
Use Hubble
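(A few Hubble commands that tend to be useful for this kind of chase; the pod selector below is a placeholder:)

```sh
# Enable Hubble (if not already on) and expose the relay locally
cilium hubble enable
cilium hubble port-forward &

# Watch flows that get dropped anywhere in the cluster
hubble observe --verdict DROPPED --follow

# Narrow down to traffic headed for the .NET pod
hubble observe --to-pod default/my-dotnet-pod --follow
```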
g
Thanks for all your input, much appreciated! Going to try to make sense of the Hubble data now. It looks like I can see ingress requests being forwarded, but I haven't quite figured out yet where it gets stuck
👍 1
👏 1