# rke2
How did you enable IPSec in Cilium?
```yaml
apiVersion: <|>
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    k8sServiceHost: rke2-server1
    k8sServicePort: 6443
    operator:
      replicas: 1
    hubble:
      enabled: true
    encryption:
      type: ipsec
      enabled: true
```
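Worth checking alongside that config: Cilium's IPsec mode also expects a `cilium-ipsec-keys` secret in `kube-system`. A sketch of creating one, following the Cilium documentation (the SPI `3` and 20-byte key are the documented example values, not taken from this chat):

```shell
# Generate a random 20-byte AES-GCM key in the format Cilium's IPsec mode reads:
# "<spi> rfc4106(gcm(aes)) <hex-key> 128"
echo "3 rfc4106(gcm(aes)) $(dd if=/dev/urandom count=20 bs=1 2>/dev/null | xxd -p -c 64) 128" > ipsec.keys

# Store it where the Cilium agents look for it
kubectl create -n kube-system secret generic cilium-ipsec-keys \
  --from-file=keys=./ipsec.keys
```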
I applied this
So you first deployed rke2+cilium, and after that applied this config?
Now Cilium reports this:
```
root@rke2-server1:/home/cilium# cilium status
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.24 (v1.24.4+rke2r1) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "<|>"]
KubeProxyReplacement:    Probe
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.12.0 (v1.12.0-9447cd1)
NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 3/254 allocated from,
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       25/25 healthy
Proxy Status:            OK, ip, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 6.88   Metrics: Disabled
Encryption:              IPsec
Cluster health:          3/3 reachable   (2022-11-14T15:54:59Z)
```
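A couple of hypothetical follow-up checks (commands assumed, not from the chat) to confirm IPsec is actually active on the wire, not just configured:

```shell
# Ask an agent directly (exec into the cilium DaemonSet)
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i encryption

# On a node: Cilium programs IPsec through the kernel's XFRM framework,
# so non-empty state/policy tables mean ESP tunnels are really set up
sudo ip -s xfrm state
sudo ip xfrm policy
```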
Hubble is active as well and I'm able to observe traffic using it
So Cilium was deployed with one config and then "redeployed" using ipsec. I wonder if that's supported 🤔. Any reason why you did not deploy directly with ipsec?
What error do you see when accessing the UI?
I didn't have a secret setup on the original deployment
I just get a Failed to Connect. Connection Refused.
```shell
$ kctl get pods -n cattle-system
NAME                               READY   STATUS    RESTARTS   AGE
rancher-7c676f75c-fc4dj            1/1     Running   0          16h
rancher-webhook-66dcd7db66-75cxj   1/1     Running   0          15h

$ kctl get svc -n cattle-system
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
rancher           ClusterIP   <none>        80/TCP,443/TCP   16h
rancher-webhook   ClusterIP   <none>        443/TCP          15h
webhook-service   ClusterIP   <none>        443/TCP          15h
```
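Given that output, a hypothetical way to narrow down where the refusal happens (service and namespace names taken from the paste above):

```shell
# 1) Does the rancher Service actually have ready backends?
kubectl -n cattle-system get endpoints rancher

# 2) Skip ingress/node networking entirely and talk to the Service directly
kubectl -n cattle-system port-forward svc/rancher 8443:443

# 3) From a second terminal on the same machine — if this works,
#    the pod is fine and the problem is on the ingress/node path
curl -vk https://localhost:8443/
```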
I'm still relatively new to this so even basic debugging suggestions could be helpful. Rancher doesn't appear to report any errors.
Were you connecting to the ClusterIP?
No, not to the clusterip. I was opening a browser to the node that the pod was running on.
Could you try using the clusterIP
in the browser?
Let's try to dissect the problem 😛
👍 1
I should describe my environment. I'm using Vagrant on an Ubuntu host. My Vagrant cluster stands up 3 nodes for RKE2: one server and two agents. I used to be able to open a browser on my Ubuntu host to the Rancher UI, whose pod was on agent1. Then, with IPsec enabled, opening a browser on the host with the same URL gives me the Failed to Connect error. And now, trying to open a browser on my host to the ClusterIP, I get 'Unable to Connect'.
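One caveat: a ClusterIP is only routable from inside the cluster, so 'Unable to Connect' from the host is expected there. A hypothetical check from the Ubuntu host against the node path that used to work (node name assumed from the description above):

```shell
# "Connection refused" means the node answered but nothing is listening on
# that port; a timeout would point at routing or firewalling instead.
curl -vk https://agent1/    # node name assumed; substitute the node's IP
curl -v  http://agent1/
```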
ok, can you show me
`kubectl get endpoints -A`