# rke2
b
You start rke2 from scratch with the dual-stack config and things don't work?
m
Yeah, uninstall/reinstall, the only difference being the dual-stack config
b
That error seems to point at kube-apiserver not starting for some reason
can you search in the logs "Running kube-apiserver"?
m
Which logs are you thinking? Journalctl/running rke2-server with --debug?
b
journalctl without debug should show it too
sudo journalctl | grep "Running kube-apiserver"
m
```
Oct 17 17:31:00 rke-01-evrtwaxa rke2[1097056]: time="2023-10-17T17:31:00Z" level=info msg="Running kube-apiserver --advertise-address=10.115.1.11 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.k8s-evrtwaxa.dev.as20055.net,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.k8s-evrtwaxa.dev.as20055.net --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.115.96.0/20,2605:21c0:fc02:100::/64 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key"
```
b
that looks fine
could you check the kube-api logs? They should be under
/var/log/container/$SOMETHING-KUBE-API
not sure if the path is
container
or
containers
m
containers, interesting, that was the log I was looking for!
```
2023-10-17T17:36:47.787539514Z stderr F E1017 17:36:47.787319 1 run.go:74] "command failed" err="specified --service-cluster-ip-range[1] is too large; for 128-bit addresses, the mask must be >= 108"
```
b
ah! True!
the previous log didn't look fine in the end 😛
m
ah, darn, the nonstandard v6 allocation sizes that drive me nuts 😛
b
yeah, I agree
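The error above means the IPv6 half of `--service-cluster-ip-range` (a `/64`) is too large: kube-apiserver's service IP allocator caps the range at 2^20 addresses, so the IPv6 service prefix must be `/108` or longer. A minimal sketch of a corrected dual-stack `/etc/rancher/rke2/config.yaml`, keeping the IPv4 ranges and IPv6 prefix from the log (the `cluster-cidr` pod range is hypothetical, not from the log):

```yaml
# /etc/rancher/rke2/config.yaml — dual-stack sketch; CIDRs are illustrative
# cluster-cidr / service-cidr take "IPv4,IPv6" pairs; the IPv6 service
# prefix must be /108 or smaller (kube-apiserver's 2^20-address limit)
cluster-cidr: "10.115.64.0/18,2605:21c0:fc02:101::/64"   # pod CIDRs (IPv6 pod range can stay a /64)
service-cidr: "10.115.96.0/20,2605:21c0:fc02:100::/108"  # service CIDRs: /108, not /64
```

After changing the service CIDR on a cluster that already started, a clean reinstall (as done above) is the safer path, since existing Service IPs are allocated from the old range.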
m
Thanks for the help, I knew there had to be a log somewhere but I couldn't find it.
๐Ÿ‘ 1