# rke2
p
Here are the events of the controller pod:
```
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       55m                   default-scheduler  Successfully assigned metallb-system/controller-54b4fd6944-rnjrz to lou1sspkubew3.corp.aperturecvo.com
  Normal   Pulling         55m                   kubelet            Pulling image "quay.io/metallb/controller:v0.13.7"
  Normal   Pulled          55m                   kubelet            Successfully pulled image "quay.io/metallb/controller:v0.13.7" in 1.958823904s
  Warning  Failed          55m                   kubelet            Error: failed to get sandbox container task: no running task found: task 88a7a6f0425e2b3ee86b1961e64ab3c59eb4f08a6303af18518eb87ace467c46 not found: not found
  Warning  Failed          55m                   kubelet            Error: sandbox container "01b77967f047796cd5b5f6c4d065ec1b4dc5b304e35b7a0e2fae01e3fb72597d" is not running
  Normal   Pulled          55m (x2 over 55m)     kubelet            Container image "quay.io/metallb/controller:v0.13.7" already present on machine
  Normal   Created         55m (x2 over 55m)     kubelet            Created container controller
  Warning  Failed          55m                   kubelet            Error: sandbox container "cfe98b37c678e34045274a67097cd59a0be8a952bc30d8cd21377e053e4511e3" is not running
  Normal   SandboxChanged  25m (x721 over 55m)   kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         16s (x1290 over 55m)  kubelet            Back-off restarting failed container
```
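(Aside: repeated sandbox failures like these usually point at the container runtime rather than the workload. A hedged sketch of where to look on an RKE2 node — the `crictl` and log paths below are assumptions based on RKE2's default layout, and the `component=controller` label is taken from MetalLB's manifest:)

```shell
# Full event history and status for the controller pod:
kubectl -n metallb-system describe pod -l component=controller

# On the node itself, ask containerd (via RKE2's bundled crictl) what it sees:
sudo /var/lib/rancher/rke2/bin/crictl ps -a

# Tail the containerd log for the sandbox errors:
sudo tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log
```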
All the articles about installing MetalLB install a ConfigMap with the network protocol and IP address pool outlined, but in the official documentation I don’t see anything about a ConfigMap, so maybe that is where I’ve messed up? I just don’t know.
c
Yes, that ConfigMap is imperative or the deployment will not do anything.
p
This is the ConfigMap I applied after applying the `metallb-native.yaml` file from their GitHub repo:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.100.15
      - 10.0.100.18-10.0.100.20
      - 10.0.100.157-10.0.100.162
      - 10.0.100.229-10.0.100.233
```
That link points to a different kind of file. This one is what I found in tutorials on how to do it.
c
So you are just doing layer2?
p
Yes. Our routers don’t support BGP
c
Well, honestly, if you are having issues with the controller pod, then that has nothing to do with the configuration of the IP pool.
Do you have a link to the article you used? Did you try just installing with the helm chart?
p
I followed the official docs using the manifest method.
Once it broke I went looking elsewhere.
I didn’t do the “Preparation” step because I wasn’t sure it applied, and there isn’t a `kube-proxy` ConfigMap, so I wasn’t sure if RKE2 did it differently.
p
So how do I “uninstall” a manifest? Just delete all of the pods?
c
```
kubectl delete -f /path/to/manifest
```
p
Oh great!
Thanks for that tip.
c
And then if you created the ConfigMap, I would delete that too, as it will not be removed by the above command (nor will anything else you created manually). One caveat: if you installed it into its own namespace, you can just delete the namespace and recreate it, and everything in it will be gone.
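To sketch that cleanup (the ConfigMap name and namespace come from the snippet earlier in the thread; the manifest filename is whatever you originally applied):

```shell
# Delete everything the manifest created:
kubectl delete -f metallb-native.yaml

# The ConfigMap was applied separately, so remove it explicitly:
kubectl delete configmap config -n metallb-system

# Or, since MetalLB lives in its own namespace, just drop the whole namespace:
kubectl delete namespace metallb-system
```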
p
Okay. I’ll try using that tutorial and come back if I find myself in the same place. Thanks @cuddly-restaurant-47972!
c
Anytime! Let me know how it goes.
p
Looks like the latest version of MetalLB doesn’t use ConfigMaps anymore. They use CRs; I’m guessing those are custom resources. I’m not sure how to apply those, however. Back to documentation hopping to find the answer.
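(That guess is right: since v0.13, MetalLB is configured with custom resources instead of a ConfigMap. A sketch of the CR equivalent of the pool earlier in the thread — the resource names are arbitrary, and the address lists mirror that ConfigMap:)

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.0.100.15/32          # a single IP is written in /32 CIDR form
  - 10.0.100.18-10.0.100.20
  - 10.0.100.157-10.0.100.162
  - 10.0.100.229-10.0.100.233
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement       # announces the pool over layer 2 (no BGP needed)
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```

Apply it with `kubectl apply -f` once the CRDs from `metallb-native.yaml` are installed.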
New problems. I see they have a Slack channel. I’ll go bug them. Thanks for the help.