# k3s
m
I have been playing around on my homelab with K3s on openSUSE MicroOS, in preparation for a POC at the day job involving RKE2, Elemental and Rancher. Now, I know I am comparing apples and watermelons here, but I have come across a few issues during setup that are giving me pause for the POC. The biggest issue seems to be that some workloads are not fans of the immutable host OS. This has mostly been an issue with Calico, which means I will log a ticket on their side about it, but I was wondering if anybody else has come across this, and what you did to resolve it? In my case, I had to keep making transactional updates to the base image to create directory structures, config files, etc. But after repeating the cycle of error, web search, work through the code repository to trace dependencies, create the correct folder structure, populate files with data from the repo, update, reboot, error... just too many times. I have switched back to a normal OS, even if only for a week or two, to recover my sanity.
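For context, each iteration of that cycle boiled down to creating the missing host paths inside a transactional snapshot and rebooting into it. A rough sketch of one iteration (ROOT is a hypothetical override I've added so the directory logic can be dry-run against a scratch dir instead of the real root):

```shell
#!/bin/sh
# Sketch of one fix-up iteration on an immutable host.
# ROOT defaults to a scratch directory so this is safe to dry-run;
# on a real MicroOS node you would run the mkdir inside
# `transactional-update shell` (with ROOT empty) and then reboot
# into the new snapshot.
ROOT="${ROOT:-$(mktemp -d)}"

# One of the paths the Calico pods expected but the read-only /usr lacked
mkdir -p "$ROOT/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

echo "created under: $ROOT"
```

On the real node the sequence was: `transactional-update shell`, create the paths, exit, `reboot`, see the next error, repeat.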
m
Not sure about the Calico issue. As for SLMicro, to avoid having to make transactional updates and create directories by logging into the node, you should be doing all of that in the build of the ISO. With Elemental you want to add/set up all of your directories and packages before building the OS, and then deploy the finalized ISO to your nodes. Helpful information you can provide: Elemental version? SLMicro version? What are you adding/editing on your host manually? What does your cloud config look like, if you are building ISOs with Elemental?
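For example, a cloud-config stage can create extra directories at image/boot time instead of you patching the node afterwards. Rough sketch in the yip format Elemental images use (the path, stage and permissions here are just examples, adapt to what your workload needs):

```yaml
# hypothetical cloud-config snippet: create directories at a boot stage
stages:
  initramfs:
    - name: "Create kubelet plugin directories"
      directories:
        - path: /usr/libexec/kubernetes/kubelet-plugins
          permissions: 0755
          owner: 0
          group: 0
```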
m
Ok, to clarify: I was using the stock standard openSUSE MicroOS image, not the Elemental builder - I wanted to run my Rancher instance on it to manage the Elemental OS images. SLMicro: Snapshot20250219. I might have been frustrated enough at the time to delete my notes when I reimaged... though there is no evidence of that. The few notes that survived the cleanse show I had to create /usr/libexec/kubernetes/kubelet-plugins, along with the files to populate it, and at least one other folder was mounted by the Calico pods but could not be written to, as it was read-only.
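A quick way to see which host paths are mounted read-only (and would therefore reject writes from a pod's hostPath mount) is to filter /proc/mounts. Sketch:

```shell
#!/bin/sh
# Print mount points whose mount options include the read-only flag.
# /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
ro_mounts() {
    awk '$4 ~ /^ro$|^ro,|,ro,|,ro$/ {print $2}' "${1:-/proc/mounts}"
}

ro_mounts /proc/mounts   # on MicroOS this typically includes / and /usr
```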
m
ok, thanks! Ok, that is different. What Rancher version are you trying to install? For the SLMicro version I meant: 6.0 or 6.1?
m
Will have to check the exact SLMicro version again, but I believe it is 6.1 (the image is from Feb 2025). I did not get to the Rancher install, as the cluster was never functional without Calico. I have trauma with Flannel, so I avoid it.
m
Would you be opposed to Cilium? Let me test this in a few and will get back to you.
SLMicro 6.1
Rancher 2.10.3
CNI: Calico
m
No problems with Cilium, it is just further down on the list, so I have not gotten to it yet. Will try it out this weekend.
👍 1
m
I set this up over the weekend and I really didn't find any issues. Steps I ran to reproduce:
1. Downloaded MicroOS and only added an ssh pub key to the image with Ignition, for access
2. Installed rke2 from the tar binary and started rke2-server with a basic config.yaml
3. Waited for the node to come up
RKE2 version
sl-c1:~ # rke2 --version
rke2 version v1.31.7+rke2r1 (7b18bda1c5ec1e110cec206f9163f6aba3a2154d)
go version go1.23.6 X:boringcrypto
MicroOS version
sl-c1:~ # cat /etc/os-release
NAME="openSUSE Leap Micro"
VERSION="6.1"
ID="opensuse-leap-micro"
ID_LIKE="suse opensuse opensuse-leap suse-microos"
VERSION_ID="6.1"
PRETTY_NAME="openSUSE Leap Micro 6.1"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap-micro:6.1"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:LeapMicro"
LOGO="distributor-logo-LeapMicro"
RKE2 First node config
sl-c1:~ # cat /etc/rancher/rke2/config.yaml
enable-servicelb: true
cni: calico
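For completeness, an additional server joining this node would only need a couple more lines. Rough sketch, not something I tested here (the port is the rke2 supervisor default, and the token lives at the path noted in the comment):

```yaml
# hypothetical config.yaml for a second server node
server: https://sl-c1:9345    # first server's supervisor endpoint
token: <node-token>           # from /var/lib/rancher/rke2/server/node-token on sl-c1
enable-servicelb: true
cni: calico
```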
K8s pods running
sl-c1:~ # kubectl get po -A
NAMESPACE         NAME                                                    READY   STATUS      RESTARTS        AGE
calico-system     calico-kube-controllers-564697854-l7g86                 1/1     Running     0               8m59s
calico-system     calico-node-tbjdt                                       1/1     Running     0               8m46s
calico-system     calico-typha-5447f546d9-9822b                           1/1     Running     0               8m37s
kube-system       cloud-controller-manager-sl-c1                          1/1     Running     4 (15h ago)     2d20h
kube-system       etcd-sl-c1                                              1/1     Running     1               2d20h
kube-system       helm-install-rke2-calico-crd-fqljj                      0/1     Completed   3               2d20h
kube-system       helm-install-rke2-calico-p7tjl                          0/1     Completed   3               2d20h
kube-system       helm-install-rke2-coredns-4h45q                         0/1     Completed   0               2d20h
kube-system       helm-install-rke2-ingress-nginx-kfk6r                   0/1     Completed   0               2d20h
kube-system       helm-install-rke2-metrics-server-klsqs                  0/1     Completed   0               2d20h
kube-system       helm-install-rke2-runtimeclasses-wfgtt                  0/1     Completed   0               2d20h
kube-system       helm-install-rke2-snapshot-controller-crd-rmqn9         0/1     Completed   0               2d20h
kube-system       helm-install-rke2-snapshot-controller-dzhvt             0/1     Completed   0               2d20h
kube-system       kube-apiserver-sl-c1                                    1/1     Running     4               2d20h
kube-system       kube-controller-manager-sl-c1                           1/1     Running     4 (15h ago)     2d20h
kube-system       kube-proxy-sl-c1                                        1/1     Running     0               2d15h
kube-system       kube-scheduler-sl-c1                                    1/1     Running     3 (15h ago)     2d20h
kube-system       rke2-coredns-rke2-coredns-autoscaler-596dcdf688-k88j4   1/1     Running     2 (2d15h ago)   2d20h
kube-system       rke2-coredns-rke2-coredns-cf7df985b-nrpwx               1/1     Running     2 (2d15h ago)   2d20h
kube-system       rke2-ingress-nginx-controller-52d8f                     1/1     Running     2 (2d15h ago)   2d20h
kube-system       rke2-metrics-server-58ff89f9c7-tcx5k                    1/1     Running     2 (2d15h ago)   2d20h
kube-system       rke2-snapshot-controller-58dbcfd956-l85mr               1/1     Running     2 (2d15h ago)   2d20h
tigera-operator   tigera-operator-56b7b68557-n9sxl                        1/1     Running     0               8m15s
K8s Node
sl-c1:~ # kubectl get no
NAME    STATUS   ROLES                       AGE     VERSION
sl-c1   Ready    control-plane,etcd,master   2d20h   v1.31.7+rke2r1
Please let me know which packages and directories you needed to set up aside from the base OS. Did you by chance start with the MicroOS base image rather than the default one?
m
My combustion script:
#!/bin/bash

set -e

## Note:
## To check if the k3s installation has been finished
## issue the "systemctl status k3sinstall.service" command.
## To finish the installation you must reboot!
## Once booted you can check the node with:
## "kubectl get nodes"
## For more check out:
## "https://documentation.suse.com/trd/kubernetes/pdf/kubernetes_ri_k3s-slemicro_color_en.pdf"

## Enable network
# combustion: network
## Post output on stdout
exec > >(exec tee -a /dev/tty0) 2>&1

## 1Password Token - expire in 30 days. Empty to skip
OP_SERVICE_ACCOUNT_TOKEN=ops_<some_exceptionally_long_token>

## Install 1Password
if [ "$OP_SERVICE_ACCOUNT_TOKEN" ]
then
    rpm --import https://downloads.1password.com/linux/keys/1password.asc
    zypper addrepo https://downloads.1password.com/linux/rpm/stable/x86_64 1password
    zypper --non-interactive install 1password-cli
fi

## Add password for root user
## SUSE documentation recommends openssl passwd -6, mkpasswd --method=sha-512 works as well
## Retrieve from 1Password, otherwise the default password that is set here is: linux
if [ "$OP_SERVICE_ACCOUNT_TOKEN" ]
then
    ROOT_USER_PASSWORD=$(openssl passwd -6 "$(op read "op://IT Infrastruktuur/K3s root/password")")
    NORMAL_USER_PASSWORD=$(openssl passwd -6 "$(op read "op://IT Infrastruktuur/K3s user/password")")
else
    ROOT_USER_PASSWORD='redacted'
    NORMAL_USER_PASSWORD='redacted'
fi

SSH_ROOT_PUBLIC_KEY=ssh_key.pub
SSH_USER_PUBLIC_KEY=ssh_key.pub
USER_REQUIRED_PACKAGES='bash-completion btop bat nano' ## patterns-microos-cockpit cockpit
CREATE_NORMAL_USER=<some_silly_username>  ## Add the username here to create a user, leave empty to skip creating one
NODE_HOSTNAME="k8s-control-0"  ## If you want to add additional nodes to a cluster you must set the hostname or nodes will not be able to join

## K3s configuration
##MASTER_NODE_ADDR='172.168.255.104'
##MASTER_NODE_K3S_TOKEN
##INSTALL_K3S_EXEC='server'
INSTALL_K3S_UPSTREAM=true  ## Set to false if you want to use the openSUSE rpm, also add the package name to USER_REQUIRED_PACKAGES
INSTALL_K3S_EXEC='server --cluster-init' ## Not used, just reference

## Set hostname
echo $NODE_HOSTNAME > /etc/hostname

## Mount /var and /home so user can be created smoothly
if [ "$CREATE_NORMAL_USER" ]
then
    mount /var && mount /home
fi

## Retrieve SSH key
##if [ "$OP_SERVICE_ACCOUNT_TOKEN" ]
##then
    ## Setup SSH credentials
    ## TODO - 1Password does not support SSH keys in another vault
##fi

## Set root password
echo root:$ROOT_USER_PASSWORD | chpasswd -e
## Add ssh public key as authorized key for the root user
mkdir -pm700 /root/.ssh/
cat $SSH_ROOT_PUBLIC_KEY >> /root/.ssh/authorized_keys

## User creation
if [ "$CREATE_NORMAL_USER" ]
then
    echo "User creation is requested, creating user."
    useradd -m $CREATE_NORMAL_USER -s /bin/bash -g users
    echo $CREATE_NORMAL_USER:$NORMAL_USER_PASSWORD | chpasswd -e
    echo $CREATE_NORMAL_USER "ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/adminusers
    mkdir -pm700 /home/$CREATE_NORMAL_USER/.ssh/
    chown -R $CREATE_NORMAL_USER:users /home/$CREATE_NORMAL_USER/.ssh/
    cat $SSH_USER_PUBLIC_KEY >> /home/$CREATE_NORMAL_USER/.ssh/authorized_keys
    echo "Requested user has been created, requested password has been set."
  else
    echo "No user will be created"
fi

## Install required packages
if [ "$USER_REQUIRED_PACKAGES" ]
then
    zypper ref && zypper --non-interactive install $USER_REQUIRED_PACKAGES
fi

## Setup K3s config files
mkdir -p /etc/rancher/k3s/config.yaml.d
cat k3s/registries.yaml > /etc/rancher/k3s/registries.yaml
cat k3s/config.yaml > /etc/rancher/k3s/config.yaml
mkdir -p /mnt/pv
mkdir -p /usr/libexec/kubernetes/kubelet-plugins

## Setup K3s token
if [ "$OP_SERVICE_ACCOUNT_TOKEN" ]
then
    ##op item get ai6gs7gbkmuo3droolbdh3dgu4
    op inject -f -i k3s/token.yaml -o /etc/rancher/k3s/config.yaml.d/token.yaml
    chmod +r /etc/rancher/k3s/config.yaml.d/token.yaml
fi

if $INSTALL_K3S_UPSTREAM; then
    ## Download and install the latest k3s installer
    curl -L --output k3s_installer.sh https://get.k3s.io && install -m755 k3s_installer.sh /usr/bin/
    ## Create a systemd unit that installs k3s if not installed yet
    cat install-rancher-k3s.service > /etc/systemd/system/install-rancher-k3s.service
fi

## Enable services
##systemctl enable cockpit.socket
systemctl enable sshd
systemctl enable install-rancher-k3s.service

## Unmount var and home
if [ "$CREATE_NORMAL_USER" ]
then
    umount /var && umount /home
fi

echo "Configured with Combustion" > /etc/issue.d/combustion

## Close outputs and wait for tee to finish.
##exec 1>&- 2>&-; wait;
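The install-rancher-k3s.service file copied in above is a one-shot unit roughly along these lines (sketch from memory, the condition path and installer location are assumptions rather than my exact unit):

```ini
# hypothetical /etc/systemd/system/install-rancher-k3s.service
[Unit]
Description=Install K3s via the upstream installer on first boot
Wants=network-online.target
After=network-online.target
ConditionPathExists=!/usr/local/bin/k3s

[Service]
Type=oneshot
ExecStart=/usr/bin/k3s_installer.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```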
I used the default image from openSUSE: openSUSE-MicroOS-DVD-x86_64-Snapshot20250219-Media.iso
My config.yaml
tls-san:
  - k3s.<some_private_domain>.org
  - k3s.local
#cluster-init: true # Setup a clean cluster
embedded-registry: true # use embedded peer to peer registry
flannel-backend: "none" # Disable Flannel, in order to install Calico
disable-network-policy: true # Disable policy, in order to install Calico
disable:
  - traefik # Calico Install
  - servicelb
default-local-storage-path: /mnt/pv
rootless: false
selinux: true
disable-kube-proxy: true
write-kubeconfig-mode: "0644"
I find it both frustrating and embarrassing that you had no issues, since that leans heavily towards an ID10T error in my case.
Also, thank you for the effort you have put in.
m
Dude, don't feel that way. This is a great learning experience for both of us 🙂. I'm also not a k3s expert, so I'll give this a shot today with k3s and will post what I find later. There shouldn't be much of a difference, but for clarity I used a VM to set this up, with the
openSUSE-Leap-Micro.x86_64-Default-qcow.qcow2
image.
🙏 1