# k3s
a
hi.. i've got 3 control planes and 3 worker nodes joined to the control planes. when i installed the control planes i used k3sup and passed it arguments so it would use IPVS. do I need to do that when I join the worker nodes as well? this was my install command:
```
k3sup install \
  --ip 10.0.2.2 \
  --user mfreeman \
  --k3s-extra-args "\
    --disable traefik \
    --disable servicelb \
    --disable-cloud-controller \
    --kube-proxy-arg proxy-mode=ipvs \
    --cluster-cidr=10.0.3.0/24,2602:f678:0:101::/64 \
    --service-cidr=10.0.5.0/24,2602:f678:0:104::/108 \
    --disable-network-policy \
    --flannel-backend=none \
    --datastore-endpoint=https://10.0.1.2:2379,https://10.0.1.3:2379,https://10.0.1.4:2379 \
    --datastore-cafile=/etc/ssl/etcd/ssl/ca.pem \
    --datastore-certfile=/etc/ssl/etcd/ssl/k3s-client.pem \
    --datastore-keyfile=/etc/ssl/etcd/ssl/k3s-client-key.pem"
```
followed by
```
k3sup join --ip 10.0.2.3 --server-ip 10.0.2.2 --user mfreeman --ssh-key ~/.ssh/id_ed25519
```
for the worker nodes, was I supposed to add all those additional arguments or what?
c
yes. kube-proxy args are part of the agent config, which means they're per-node. They need to be set individually on every node.
It is totally possible to run different nodes in different modes.
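so your worker joins would want the kube-proxy arg too, something like this (just a sketch reusing your IPs/user from above, and assuming k3sup join forwards --k3s-extra-args to the agent the same way install does for the server):
```
k3sup join \
  --ip 10.0.2.3 \
  --server-ip 10.0.2.2 \
  --user mfreeman \
  --ssh-key ~/.ssh/id_ed25519 \
  --k3s-extra-args "--kube-proxy-arg proxy-mode=ipvs"
```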
You’re disabling both CNI and cloud-controller, I assume you’re aware you need to replace those with something?
a
yeah i'm using Calico
and MetalLB
c
and what are you using for your cloud provider?
a
hm, it's been a while since I originally came up with that command, can't remember now
i'm running this on-prem
i believe that's why i chose that option
a
i'm not seeing any options there, other than "use an external CCM"?
can I go back and enable the default?
isn't MetalLB the load balancer in my case?
c
it is a load balancer, yes, but you need all the other bits (the cloud controller also handles node lifecycle/address stuff, not just load balancers)
you can just remove that flag from the server args
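i.e. the same install as before, just with --disable-cloud-controller dropped and everything else kept as-is:
```
k3sup install \
  --ip 10.0.2.2 \
  --user mfreeman \
  --k3s-extra-args "\
    --disable traefik \
    --disable servicelb \
    --kube-proxy-arg proxy-mode=ipvs \
    --cluster-cidr=10.0.3.0/24,2602:f678:0:101::/64 \
    --service-cidr=10.0.5.0/24,2602:f678:0:104::/108 \
    --disable-network-policy \
    --flannel-backend=none \
    --datastore-endpoint=https://10.0.1.2:2379,https://10.0.1.3:2379,https://10.0.1.4:2379 \
    --datastore-cafile=/etc/ssl/etcd/ssl/ca.pem \
    --datastore-certfile=/etc/ssl/etcd/ssl/k3s-client.pem \
    --datastore-keyfile=/etc/ssl/etcd/ssl/k3s-client-key.pem"
```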
a
ok lemme try that
ok thanks for the help.. gotta troubleshoot some routing issue now
s
aren't the `cluster-cidr` and `service-cidr` non-standard CIDR sizes?
c
yeah the ipv4 cluster-cidr won’t allow for more than a single node in the cluster if you’re using the built-in Kubernetes node IPAM since each node gets a /24… you’d need to reduce the node cidr mask or increase the total size of that cidr.
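for example, as server flags (just a sketch, the ranges here are placeholders, pick whatever fits your network):
```
# option 1: widen the IPv4 pod CIDR so it holds more than one /24 node slice
# (a /24 pod CIDR fits exactly one node at the default /24 node mask; a /16 fits 256)
--cluster-cidr=10.42.0.0/16,2602:f678:0:101::/64

# option 2: keep 10.0.3.0/24 but shrink each node's slice, e.g. to /27
# (8 slices of ~30 pod IPs each, enough for 3 servers + 3 agents)
--kube-controller-manager-arg=node-cidr-mask-size-ipv4=27
```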
a
ok thank you.. going to rebuild
s
This is what I did in my IPv4+IPv6 test cluster
```
cluster-cidr: "10.1.0.0/16,fd30:cafe:1234:1::/56"
service-cidr: "10.2.0.0/18,fd30:cafe:1234:2::/112"
```
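same values in the flag form you're already passing through k3sup, if that's handier:
```
--cluster-cidr=10.1.0.0/16,fd30:cafe:1234:1::/56 \
--service-cidr=10.2.0.0/18,fd30:cafe:1234:2::/112
```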
a
thank you