# general
c
did you configure the cloud provider? It's still tainted because the cloud provider hasn't initialized the node yet, which usually means the cloud provider isn't configured properly.
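you can confirm which taint is blocking scheduling with something like this (the node name below is just an example; a node the cloud provider hasn't initialized will carry the `node.cloudprovider.kubernetes.io/uninitialized` taint):

```shell
# List the taint keys on every node in one view
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Or inspect a single node in detail (substitute your node name)
kubectl describe node k8s-test-pool1-24fb3892-tsrbx | grep -i -A3 taints
```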
b
yes i did, or at least this is the config i applied based on the rancher docs:
```yaml
cloud_provider:
    name: openstack
    openstackCloudProvider:
      global:
        username: "xxx"
        password: "****"
        auth-url: "https://xxxx:5000/v3"
        tenant-id: "xxxxxxxxxxxxxxxxxx"
        domain-id: "default"
        region: "RegionOne"
        tls-insecure: true
      load_balancer:
        lb-version: "v2"
        subnet-id: "xxxxxxxxx"
        floating-subnet-id: "xxxxx"
        use-octavia: true
        floating-network-id: "xxxxxxx"
        create-monitor: false
        manage-security-groups: true
      block_storage:
        ignore-volume-az: true
        trust-device-path: false
        bs-version: "v2"
      metadata:
        request-timeout: 0
```
c
checked the logs on the openstack cluster?
b
you mean check the logs on openstack itself, or the kubernetes cluster deployed by rancher?
c
on the cluster. there will probably be some errors from the openstack cloud controller
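something like this should surface them (the pod name below is a placeholder — match whatever `grep` actually turns up):

```shell
# Find any cloud-controller pods across all namespaces
kubectl get pods -A | grep -i cloud

# Then tail the logs of whichever pod shows up (name is a placeholder)
kubectl -n kube-system logs openstack-cloud-controller-manager-0 --tail=100
```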
b
looks like the cloud controller never got installed in the first place:
```
NAMESPACE         NAME                                                    READY   STATUS      RESTARTS   AGE
calico-system     calico-kube-controllers-677d488b5f-clcz6                0/1     Pending     0          34m
calico-system     calico-node-57gdh                                       0/1     Running     0          34m
calico-system     calico-typha-6547bcd8d5-grtw9                           0/1     Pending     0          34m
cattle-system     cattle-cluster-agent-55cfb76bdf-c6xkz                   0/1     Pending     0          35m
kube-system       etcd-k8s-test-pool1-24fb3892-tsrbx                      1/1     Running     0          34m
kube-system       helm-install-rke2-calico-6mjqp                          0/1     Completed   2          35m
kube-system       helm-install-rke2-calico-crd-722h6                      0/1     Completed   0          35m
kube-system       helm-install-rke2-coredns-nhs2p                         0/1     Completed   0          35m
kube-system       helm-install-rke2-ingress-nginx-hg7gn                   0/1     Pending     0          35m
kube-system       helm-install-rke2-metrics-server-9plg5                  0/1     Pending     0          35m
kube-system       kube-apiserver-k8s-test-pool1-24fb3892-tsrbx            1/1     Running     0          34m
kube-system       kube-controller-manager-k8s-test-pool1-24fb3892-tsrbx   1/1     Running     0          34m
kube-system       kube-proxy-k8s-test-pool1-24fb3892-tsrbx                1/1     Running     0          35m
kube-system       kube-scheduler-k8s-test-pool1-24fb3892-tsrbx            1/1     Running     0          34m
kube-system       rke2-coredns-rke2-coredns-76cb76d66-pm5pw               0/1     Pending     0          35m
kube-system       rke2-coredns-rke2-coredns-autoscaler-58867f8fc5-v9ppd   0/1     Pending     0          35m
tigera-operator   tigera-operator-6457fc8c7c-wffp6                        1/1     Running     0          34m
```
c
oh, is this RKE2? We don’t have a packaged cloud provider for openstack on RKE2…
b
oh ok, so i have to install openstack-ccm manually?
c
correct
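the upstream helm chart is probably the easiest route — roughly like this (the secret name/key here follow the chart's defaults as i remember them, so verify against the chart's README before relying on it):

```shell
# Add the official cloud-provider-openstack chart repo
helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
helm repo update

# The chart consumes an INI-style cloud.conf via a secret in kube-system;
# build cloud.conf from the same credentials as your rancher config
kubectl -n kube-system create secret generic cloud-config \
  --from-file=cloud.conf=./cloud.conf

helm install openstack-ccm cpo/openstack-cloud-controller-manager \
  --namespace kube-system
```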
b
what if i use rke instead, does that have openstack-ccm included?
c
I am not sure, I'm an rke2/k3s dev so I don't use rke much… but it's more likely.
b
that's ok, i want to test RKE2 anyway because of its use of containerd
let me try to install the openstack-ccm and see if that works
c
yeah, I might be biased but I would go with rke2 or k3s 😉
b
looks like installing openstack-ccm failed because rke2's built-in cloud controller is already using the port ccm needs:
```
error: failed to create listener: failed to listen on 127.0.0.1:10258: listen tcp 127.0.0.1:10258: bind: address already in use
root@testk8s-pool1-ad60a11d-cd48t:~# lsof -i:10258
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
cloud-con 2283 root    7u  IPv4  24904      0t0  TCP localhost:10258 (LISTEN)
root@testk8s-pool1-ad60a11d-cd48t:~# ps aux | grep 2283
root        2283  0.3  0.6 751948 24444 ?        Ssl  21:06   0:02 cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=rke2 --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m0s --profiling=false
root       12485  0.0  0.0   7004  2212 pts/2    S+   21:15   0:00 grep --color=auto 2283
root@testk8s-pool1-ad60a11d-cd48t:~#
```
c
yeah, you should disable the built-in ccm
set it to “external” when provisioning the downstream cluster, IIRC
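on the rke2 side that would look something like this in the server config (flag names from memory — double-check the RKE2 docs):

```yaml
# /etc/rancher/rke2/config.yaml on the server nodes
disable-cloud-controller: true
# tell the kubelet to expect an external cloud provider
kubelet-arg:
  - "cloud-provider=external"
```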
b
ok let me try again