# k3s
c
if you need HA, then yes 3 servers and one agent would be good - assuming you are OK with your servers also running workloads. Otherwise, a single server and 3 agents is your best choice.
Servers can run workloads by default, so everything is “combined” as you put it.
Also, K3s has servers (control-plane + kubelet) and agents (kubelet) by default. We don’t use master/worker.
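As an illustration, a three-server HA cluster with embedded etcd along those lines could be brought up roughly like this (a sketch; the token value and server IP are placeholders, not from the conversation):

```shell
# First server: initialize the cluster with embedded etcd
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --cluster-init

# Servers 2 and 3: join the first server
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server \
    --server https://<first-server-ip>:6443

# Optional agent: joins as a kubelet-only node
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - agent \
    --server https://<first-server-ip>:6443
```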
h
Thank you for explaining. Can I mitigate the risk of running workloads by running etcd storage on each control-plane node?
c
yeah, that is the biggest risk. just make sure that you have enough IO on the servers to support both the datastore and the workload
h
I'm focusing on HA. 3 servers it is. Thanks for the timely responses.
"The recommended HA deployment will be to back k3s with an external SQL database (postgres to start, but mysql soon after)." - direct quote from a Rancher dev... but he stated this in 2019. Is this still better than etcd on each server? thanks.
(In my use case, with HA as the main objective, maybe it's not that big of a difference, but I would think it'd be better to have etcd on each server instead of a single external database.) thanks.
SQLite is proven for embedded systems; it's hard to decide between the two options
s
Depends on whether your external SQL database is HA 🙂
h
roger
c
also, in 2019 we didn’t have embedded etcd for HA
3 years is a long time in Kubernetes
also, if you want a quote from a rancher dev… check out the rancher employee next to my name 😉
h
thanks. haha true.
In my HA setup, I'm configuring the master node; I call the server argument for it, but do I also have to call the agent argument so that it can run workloads? thanks
Do I have to be the root user to set everything up as well? From what I read, I can't run pods as root? What is the root user needed for, then?
c
no, servers also run agent components
h
I'm having trouble with the install. it's being super weird
it worked on several other machines, but for this one it's being stubborn
c
looks like your host is failing to download things from github?
The install doesn't complete. it just fails at that download step; that's why you can't run k3s
figure out why your host can't curl https://github.com/k3s-io/k3s/releases/download/v1.25.4%2Bk3s1/sha256sum-amd64.txt
h
I figured it out, thanks!!! took a lot of toying with it.
now I just have to figure out how to do it for this other machine! lol
this is my cluster information. why does it say the cluster name is a service? and since I'm using HA with etcd on each master, should I be using a LoadBalancer instead of the type: ClusterIP it has listed?
I'm just double checking, I understand the answer to these questions but further explanation would be nice.
thanks.
c
what do you mean by the cluster name is a service?
the default/kubernetes service is built in to Kubernetes
h
I'd have to change all the config files for the pods to LoadBalancer. it's also strange that it doesn't list etcd, and it hasn't stopped running..... the command (sudo kubectl get services) never completes, and neither does the get pods command
c
that is a non-configurable service built in to Kubernetes
h
roger
c
pods running in a Kubernetes cluster are guaranteed to find the apiserver at a service named ‘kubernetes’ in the ‘default’ namespace
it doesn’t complete because you added the -w flag which stands for “watch”
h
oh god I didn't see that
c
it will stay there reporting changes or additions until you cancel it
h
thank you
c
K3s doesn’t run any of the control-plane components in pods.
this is covered in the docs
https://docs.k3s.io/
• Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
I'm not sure why you would have to change config files for pods to use loadbalancer… what is that about?
h
for HA.... I thought I needed a load balancer to connect between the 3 servers and 1 agent. I'd configure the load balancer, expose the services running on the server and agent nodes by creating a LoadBalancer Service resource, and then update the application address to use the load balancer hostname.
c
not for pods
that’s all handled for you
h
ok
c
you only need loadbalancers for stuff connecting to the cluster from outside
h
right.
but if I needed to route incoming traffic from things into the apps inside the cluster, it would be HA and the setup would be as I had explained?
I may be overthinking this part, my bad. all the apps will be in the cluster through containerd, so I don't need a load balancer because that's for connecting to a cloud environment.
c
If you have things running inside the cluster, and you want to expose them outside the cluster in a HA manner, you would need a LoadBalancer
h
right
c
Normally you would set up an Ingress to route HTTP/HTTPs traffic, and then put the ingress behind a LoadBalancer Service
and then point stuff at the LoadBalancer IP
by default K3s comes with ServiceLB which just uses the node IPs as LoadBalancer IPs, but if you want a real “floating IP” type thing you can disable ServiceLB and deploy MetalLB or Kube-VIP or something
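A minimal sketch of that pattern, assuming a hypothetical app named my-app listening on port 8080 (all names, hosts, and ports here are invented for illustration): the ClusterIP Service fronts the pods, the Ingress routes HTTP to it, and on a default K3s install Traefik's own LoadBalancer Service (backed by ServiceLB) exposes the Ingress on the node IPs.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                   # hypothetical Service for the app's pods
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```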
h
I'm not going to have backup devices, so a floating IP wouldn't matter unless I chose to replicate apps... interesting. So I don't even have to make a LoadBalancer service, I can just use the default ServiceLB?
c
yeah it should work fine
h
good evening, I ran the k3s check-config command and it gave me this: 'swap: should be disabled'... is this telling me to disable swap? to turn off swap I'll run 'swapoff -a'. thanks -will
c
yeah best practice is to run with it off, but you can leave it on if you have to.
h
when I turned off the other server node and tried checking my cluster, it said "error from the server". I thought HA could handle one of the server nodes going down. thanks -will
c
how many servers do you have?
you need at least three
and where were you checking from that you saw an error?
h
Currently the setup is 2 HA servers, but eventually another server and an agent will be added to the cluster. Anyways, 2 servers at the moment. I turned off the server I connected to the cluster with, then ran 'kubectl get nodes', and it's giving me 'server timeout' - taking forever to load and not executing anything
i ran 'kubectl get nodes' on the other server, obviously - the one I originally created the cluster on
the first time I tried, it gave me an api server error.
the second picture I think isn't critical and is just a warning
c
yes if you have two servers, and take one down, the other one will also crash because you have no quorum
2 servers is worse than 1. You should have 1/3/5 and so on.
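The arithmetic behind the 1/3/5 rule: etcd needs a strict majority of members (quorum) to keep serving, so a cluster of n members tolerates floor((n-1)/2) failures. A quick shell illustration:

```shell
# Failures tolerated = (n - 1) / 2 with integer division:
# quorum is a strict majority, so even member counts add no tolerance.
for n in 1 2 3 4 5; do
  echo "$n server(s): tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

With 2 servers the result is 0: losing either one loses quorum, which matches the behaviour described above.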
h
roger.
I understand that containerd is focused on container orchestration and execution. But can I use containerd for all the basic functions and handling I need to get my containers inside the cluster and deploy them? just making sure. thanks -will
I also tried running 'containerd --help' and it gave me 'container command must be installed', but I thought containerd came with the regular install of nodes by default? thanks -will
and will the nodes maintain data from previous sessions of applications by default, or do I have to configure the nodes to be stateful? thanks
c
what?
if you’re using kubernetes you want to be creating pods. you shouldn’t be running containerd manually for any reason
what do you mean by “previous sessions”?
I would probably start by familiarizing myself with pods, deployments, statefulsets, and so on. And then read about persistent volume claims.
h
previous sessions was regarding, for example: if I have an application that's a temperature sensor which records the current temperature, but I also wanted recent temperature history that wouldn't wipe even if the pod crashed. I would just configure the data volume for the pod to have storage, and then make a script to check the volume usage and delete old temperature data, or maybe use log rotation. thanks for explaining, I worded the questions the wrong way and I'm still getting used to the terminology, sorry. -will
c
that really depends on how your application stores its data. if it wants to write things to a file, and have that file be there the next time the pod is started, it should store that on a persistent volume. this is all basic kubernetes stuff though.
h
right. and I'm mentioning containerd because I'm using K3s, and I have all these images that will run on the cluster, and I was just going to use containerd as the runtime
c
you don’t really need to worry about what the runtime is, or interact with it directly at all. that is all abstracted by the kubelet and CRI. You just give the name of your image and the kubelet uses the CRI interface to talk to the runtime and runs it for you.
h
good afternoon brandon, I hope all is well. Would I have to install docker on all the nodes to pull docker images into the cluster? or can I just have docker on one node and pull my images into that one? thanks,
c
you don’t need to use docker at all. what are you hoping to do by pulling images manually?
you shouldn’t have docker installed on any of the nodes. it will conflict with the containerd managed by k3s
h
Good morning Brandon, I hope all is well. I was reading that client certificates, for say kubectl, need to be renewed every year - but does this only apply if you use kubeadm? thanks -will
and pertaining to the docker question, I was watching many tutorials of people using docker desktop to run containers in their clusters, which is why I was confused. thanks for explaining, I'm not going to use it.
Persistent storage has to be configured even though I already have etcd on each control plane? thanks -will
c
persistent storage is for your workloads, not for the cluster datastore. You would need something to provide you with Persistent Volumes that can migrate across nodes. You can use the Local Path Provisioner, but that is just a stub - the PVs it creates are just a path on a node, forever binding the pod that uses that PV to that specific node.
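A minimal PersistentVolumeClaim against the bundled Local Path Provisioner might look like this (a sketch; the claim name and size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce              # local-path volumes live on a single node
  storageClassName: local-path   # the StorageClass K3s ships by default
  resources:
    requests:
      storage: 1Gi
```

Remember the caveat above: the resulting volume is just a directory on one node, so the pod that uses it is pinned to that node.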
Certificates do expire once a year, but are automatically renewed at startup if they are within 90 days of expiring. As long as you periodically patch and restart your nodes you should be fine.
h
when I run 'kubectl cluster-info dump' I'm getting an error message in the cluster information about 100 times, saying: [WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
I'm going to check out the path
c
that is normal
there’s an issue about that if you search
h
Good morning Brandon, I hope you had a good weekend/holiday. Does the metrics-server that comes with K3s have support for monitoring/alerting and collecting resource usage metrics for pods? I shouldn't need a 3rd-party application like OpenTelemetry, or Grafana, to provide me with observability options? thanks -will
I read that K3s doesn't recommend using namespaces. I am making this system alone, and I really only found the whole namespace aspect interesting for organizing my application components and assigning resource limits to apps. I could assign resource limits other ways; I was just wondering why namespaces were frowned upon, and if it makes sense for a single developer to use them. thanks, -will
s
Please share the URL to documents saying that k3s doesn't recommend using namespaces. K3s is "just" one certified K8s distribution. IMHO, best practice for running your workloads in k8s should be applied to k3s - which in my mind means using namespaces appropriately. I put applications with their attendant components (deployments, services, configmaps, secrets, ...) into individual namespaces in k3s. Otherwise it's anarchy and security is severely compromised.
c
I read that K3s doesn’t recommend using namespaces,
Yeah, what are you reading? We don’t say that and I’m not sure why anyone else would.
The Metric server comes with k3s has support for monitoring/alerting?
No, it does not do alerting. You can read about the metrics-server at https://github.com/kubernetes-sigs/metrics-server - but it is pretty much only used for 'kubectl top pods' and 'kubectl top nodes', as well as some of the cluster autoscalers if you deploy them. If you want long-term monitoring and alerting you would be looking at deploying Prometheus and Grafana or something along those lines.
h
Maybe I misunderstood, but I was watching Darren Shepherd running this demo and they were talking about how namespaces are not "recommended" for use in k3s: https://www.youtube.com/watch?v=k58WnbKmjdA - I'd have to go back through the whole 3-hour video to find it. Thanks. -will @sticky-summer-13450
c
I think maybe something was misunderstood or taken out of context? There’s no concern around using namespaces in K3s or any other Kubernetes distro.
h
Good morning Brandon, I hope all is well. If I set the replicas field to "2" in a Deployment YAML file for an application, does that mean there are 3 instances of that application running in total (the actual app + 2 replicas of it)? And these replicas are running alongside the application? I looked on the Kubernetes page about this and it didn't really mention this. Thanks -will
s
'replicas: 0' means you have asked for no instances of the pod (the app, in the case you are describing) running. Nothing running.
'replicas: 1' means you have asked for one copy of your app running.
'replicas: 2' means you have asked for two copies of your app running.
There is no "original" and "replicas" - they are all replicas and they are all treated with the same respect. I think the docs on ReplicaSets should help - and remember that a Deployment makes a ReplicaSet.
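For example, a Deployment asking for two interchangeable copies of a hypothetical app (every name and the image here are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                    # two identical pods; no "original"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest   # placeholder image
```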
h
Good morning Mark. Thank you for explaining this. I'm also setting a PodDisruptionBudget for all my applications, along with anti-affinity rules. If I have a 3-node cluster, then to still satisfy the quorum HA system (I'm also running 3 replicas of each app), I'm going to set the maxUnavailable field of the PDB to "1", allowing only 1 pod to be evicted at a time if 2 of the nodes are down. does this make sense? thanks again @sticky-summer-13450 -will
s
@handsome-autumn-77266 you probably also want to look at 'topologySpreadConstraints' to spread the three replica pods across the three nodes.
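A sketch combining the two suggestions for a hypothetical 3-replica app (all names are placeholders; note the PDB field is spelled maxUnavailable, and the spread constraint lives inside the Deployment's pod template):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1              # allow at most one replica down at a time
  selector:
    matchLabels:
      app: my-app
---
# Fragment for the Deployment's pod template spec:
# topologySpreadConstraints:
#   - maxSkew: 1
#     topologyKey: kubernetes.io/hostname   # aim for one replica per node
#     whenUnsatisfiable: DoNotSchedule
#     labelSelector:
#       matchLabels:
#         app: my-app
```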
h
Does k3s support multicast? I have middleware embedded into these container applications and it uses multicast to communicate. Thanks -will
c
most CNIs do not support multicast, no. You’d probably need to run the pod with host network if you want to do multicast to the host node’s physical network.
h
Good evening @creamy-pencil-82913, I hope all is well. I added a hostNetwork: true field along with hostPort: ## for the container. So how it works is: I'm telling kubernetes to use the host network namespace for each deployment. Now I need to write the necessary code inside the container to perform multicast. And the hostPort value specifies the port mapping, exposing a port on the host network mapped to a port inside the container. Are these two attributes the only additions I'd need in each app's deployment YAML to perform multicast on the host network? thanks -will
c
you don’t need hostPort if you’re using host network. HostPort is only necessary if you need a single port forwarded from the host to a pod that is using cluster network.
if you use host network, all the ports are on the host.
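So the only addition needed in each pod template is the one field - a sketch, with placeholder names:

```yaml
# In the Deployment's pod template:
spec:
  hostNetwork: true      # pod shares the node's network namespace;
                         # every port the app listens on is a host port
  containers:
    - name: my-app       # placeholder
      image: my-app:latest
```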
h
roger. thanks again
@creamy-pencil-82913 my fault, I put that question in the wrong chat. Referring back to docker: the problem I have is a private docker repository hosted on AWS, with my K3s cluster set up locally. So far I've set up a Secret that has the access token (credentials) for my private docker repository (allowing docker to actually pull the image). So would I need to wipe my cluster and add the --docker flag to make docker the runtime instead of containerd? Or I could just do ctr pull... but I still need to use docker login to get into the repository... thanks -will
c
the preferred way to do that would be to use ImagePullSecrets for the pod, or if you must, configure auth at the containerd level as per https://docs.k3s.io/installation/private-registry#configs
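A sketch of the imagePullSecrets route (the secret name, registry host, and image are all placeholders; the secret would be created beforehand with 'kubectl create secret docker-registry' using the registry token):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  imagePullSecrets:
    - name: ecr-creds            # hypothetical docker-registry Secret
  containers:
    - name: my-app
      image: <account>.dkr.ecr.<region>.amazonaws.com/my-app:latest
```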
h
I've installed and uninstalled k3s a couple of times just using: curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init . I recently uninstalled it using the usual /usr/local/bin/k3s-uninstall.sh, then reinstalled it... but I only got 3 lines of the download output in the terminal, so I knew it didn't install correctly, and when I ran get nodes I got a memcache error. I tried finding and deleting every file on the system related to k3s. After that I started getting 'kubectl: command not found', even after re-running the install script, where it just output 3 lines saying the version and a sha.txt. I can send you a screenshot of what it said after install
@creamy-pencil-82913
c
Sounds like the download is failing. Try curling the last url it shows
h
curling the last url it shows after running the install command?
I don't understand why it's doing this. it has nothing to do with ca-certificates, does it?
this is what spits out after running the install script, and then when I curl the url I get 'not found'
I have the aws cli already installed - it would have nothing to do with that, right?
@creamy-pencil-82913
I fixed the problem. I just downloaded Ubuntu Desktop with Etcher onto a USB, wiped the existing OS, reinstalled the OS, and re-ran the install from fresh. it's all set now. sucks I have to do this for everything though. maybe originally I didn't delete the manifest file properly, or it was helm, is what I'm thinking
all good on everything. the only thing left: I'm telling kubernetes to use the host network namespace for each deployment by using the hostNetwork: true attribute in each app's deployment YAML... but would I also have to run the server nodes behind the hostname if the containers are using multicast to communicate? k3s server --tls-san value connects with the agents, but I don't think I'd have to set all this up though... thanks -will
@creamy-pencil-82913
I installed the K3s binary for fun, but I can't find my killall script? I guess I would just remove all the directories and then run k3s server to get everything back.
c
the killall script comes from the install script. there’s no good way to get it by itself.
h
roger. I was also trying systemctl restart k3s and all of the commands for k3s.service, but with the binary install that doesn't seem to work either.
so I just created a k3s.service manually - but I thought with the binary it would all work similarly to the https k3s install script
c
nope. the binary doesn't include any of the scripts or the systemd unit. it's literally just 'k3s'
h
So I would just have to make killall.sh, k3s.service, and k3s-uninstall.sh myself if I wanted to use them
But with the binary I wouldn't need them, because if I wanted to kill the cluster I could just remove all the directories generated by the binary and then run k3s server again to recreate the cluster.
the command "k3s server --data-dir" specifies the directory to store data. Is it necessary to call this when creating a cluster? If I don't, where will k3s store data by default?
c
/var/lib/rancher/k3s
yes, if you want to use that, you must use it always. Things will not work well if you change it later.
h
sorry, what do you mean "use what always?" @creamy-pencil-82913
c
you can’t flip back and forth between different data-dir settings. If you are going to use that setting, always use it.
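One way to avoid forgetting the flag: K3s also reads its settings from a config file at startup, so the data-dir can be pinned there instead of on the command line (the path value below is a hypothetical example):

```yaml
# /etc/rancher/k3s/config.yaml - equivalent to passing --data-dir every time
data-dir: /opt/k3s-data    # default is /var/lib/rancher/k3s
```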
h
with the k3s binary on a k3s server install, when I run "k3s kubectl get all -A" I don't see anything in regards to flannel, and I also tried other methods to find it with the k3s server binary and couldn't find anything. When I ran k3s server (after making the binary executable on the path /usr/local/bin) I did notice flannel referenced a few times in the startup output. I guess flannel doesn't come with k3s server by default? If flannel does come with the binary server install by default, how would I disable it, because I want to use my host network and not flannel. I also noticed that maybe flannel comes with the default agent binary install... so I'd have to run "k3s agent --no-flannel"? Finally, if I have the binary on each machine and I wanted to connect another k3s server to the cluster, I'd run "k3s server --node token --host ip"? I'm just wondering because I saw documentation on Rancher saying you can connect an agent binary, but I didn't see them mention actually connecting a k3s server binary to a cluster. thanks -will. I hope you're doing well by the way Brandon, I really appreciate your input.
c
flannel is built in to K3s. If you want to disable it and use your own CNI, start with '--flannel-backend=none'. This is covered in the docs: https://docs.k3s.io/installation/network-options#custom-cni
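In config-file form, the same setting looks like this (a sketch; disable-network-policy is commonly paired with it when bringing your own CNI, per the linked docs):

```yaml
# /etc/rancher/k3s/config.yaml
flannel-backend: "none"
# disable-network-policy: true   # often set too when replacing the CNI
```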
h
roger. so there won't be any conflict between the flannel virtual network and the physical host network if I want my apps to communicate on the host network? I'll run all the apps with the hostNetwork: true attribute in each of their deployment.yaml files, since - as you said - if you want to do multicast to the host node's physical network and you use host network, all the ports are on the host. thanks @creamy-pencil-82913 -will
for the automated upgrades (https://docs.k3s.io/upgrades/automated) - will this work even for the binary install, or are the automated upgrades just for the online install script? @creamy-pencil-82913 thanks -will