# rke2
g
ok, I might have found this thread that could help https://rancher-users.slack.com/archives/C896QJRPE/p1647462355183109
this looks like it's working, so I'm putting this (I don't have or want a VIP yet):
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "127.0.0.1"
  KUBERNETES_SERVICE_PORT: "6443"
to
/var/lib/rancher/rke2/server/manifests
well, it sort of worked, as
k -n calico-system logs calico-kube-controllers-64bfc95dc8-4t7qf
2022-05-19 11:38:10.602 [INFO][1] main.go 94: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"kubernetes"}
W0519 11:38:10.607051       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2022-05-19 11:38:10.609 [INFO][1] main.go 115: Ensuring Calico datastore is initialized
2022-05-19 11:38:10.613 [ERROR][1] client.go 272: Error getting cluster information config ClusterInformation="default" error=Get "https://127.0.0.1:6443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 127.0.0.1:6443: connect: connection refused
2022-05-19 11:38:10.613 [FATAL][1] main.go 120: Failed to initialize Calico datastore error=Get "https://127.0.0.1:6443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 127.0.0.1:6443: connect: connection refused
but the install went through
it would be easier to add one of the control plane's IP addresses here, but I can't predict which one will come up first 🙂
yeah, as the tigera-operator is running with hostNetwork: true, it obviously went through, talking to the local API server, but calico-kube-controllers is a normal pod
calico-kube-controllers should talk to the kubernetes svc, which is only available once tigera was able to configure eBPF
or the Downward API status.hostIP 😞
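For reference, a minimal sketch of what the Downward API route would look like — fieldRef can only populate env vars in a pod spec, not a ConfigMap, so it would mean patching a pod template rather than the kubernetes-services-endpoint ConfigMap (the pod name and image below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
    - name: example
      image: busybox
      command: ["sh", "-c", "echo $KUBERNETES_SERVICE_HOST && sleep 3600"]
      env:
        # the Downward API injects the IP of the node this pod is scheduled on
        - name: KUBERNETES_SERVICE_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP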
v
you can query the endpoints for the kubernetes service in the default namespace and use one of those for the kubernetes-services-endpoint KUBERNETES_SERVICE_HOST
# kubectl get endpoints -n default kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}'
10.10.10.10
that should work for everything in the tigera-operator namespace, but you will run into the same issue with anything in any other namespace that relies on the actual clusterIP of the default/kubernetes service
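For illustration, the same ConfigMap as above with one of the queried endpoint IPs (the example 10.10.10.10 from the output above) in place of 127.0.0.1:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  # one of the IPs returned by the endpoints query above (example value)
  KUBERNETES_SERVICE_HOST: "10.10.10.10"
  KUBERNETES_SERVICE_PORT: "6443"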
we eventually moved to having felix ignore the kube-proxy rules and left it installed to require the smallest possible number of “hacks” for dynamic namespaces and application deployments
g
@victorious-analyst-3332 adding bpfKubeProxyIptablesCleanupEnabled to the felix config, right?
v
yup yup
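A minimal sketch of that FelixConfiguration change, assuming the cluster-wide default resource is the one being patched and that the goal is to stop Felix from cleaning up kube-proxy's iptables chains:
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # false = Felix leaves kube-proxy's iptables chains in place in eBPF mode
  bpfKubeProxyIptablesCleanupEnabled: false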
g
yeah, I tend to do the same. Just wanted to make it clean, but it's impossible with the current setup
with clusters having a VIP (using kube-vip) though, it should be possible
v
yeah, I didn’t find a good way to update that kubernetes service with node IPs and ports instead of the clusterIP
g
thx!
well, still, with that patch approach the next helm run on rke2-calico will remove bpfKubeProxyIptablesCleanupEnabled
catch 22
v
that overwrites the felixConfiguration? apologies, we’re still deploying calico outside of RKE2
g
yeah, looks like I'll also need to do that
v
I think the alternative is to put the felixconfig under the manifests dir for the helm modifications
g
yeah, or prepare the rke2-calico helm chart for such actions, so maybe I'll raise an upstream PR