# k3s
b
Honestly I'm not sure, but I'm confused as to what you're asking for.
granted I haven't been using k3s for very long.
You say controller, but it seems `controller` and `server` are synonymous?
And `agent` nodes? Are all of these nodes?
Do you mean Control Plane nodes and worker nodes?
b
correct
I was trying to use k3s-centric wording
when you create a server (control plane), it writes a usable kube config at /etc/rancher/k3s/k3s.yaml. It even configures the root kubectl to use that file
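for illustration, on a default install of a server node something like this should just work right after install (treat the exact flags/paths as my assumptions):
```
# root's kubectl on a k3s server is already wired to the generated admin kubeconfig
sudo kubectl get nodes

# the same file can also be referenced explicitly
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes
```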
b
I think (I could be very wrong here) that this might not be possible.
b
but this is not the same with agent (worker) nodes
b
The agents/other workers are still bound to the same cluster.
So the kube config is for the cluster and not the node.
b
well, that's where things get interesting. when k3s agents do the cluster join, they do end up writing a kubeconfig that is usable locally
like, crazy enough,
```
ln -s /var/lib/rancher/k3s/agent/k3scontroller.kubeconfig /root/.kube/config
```
works
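or, skipping the symlink entirely, you can point kubectl at that file directly (same caveat: it's the controller's own identity, not a per-user one):
```
# use the agent's controller kubeconfig in place, without touching /root/.kube/config
sudo kubectl --kubeconfig /var/lib/rancher/k3s/agent/k3scontroller.kubeconfig get nodes
```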
b
AFAICT you can't have a node that's part of multiple clusters.
b
no, it's not intended to be a part of multiple clusters
it's a much simpler request
i'm just saying write a usable kube config
to reach the cluster it just joined
this would probably involve modifying the joining logic to include making a per-worker user or something .. maybe .. i haven't dug into the internals of k3s
b
Isn't that what `/var/lib/rancher/k3s/agent/k3scontroller.kubeconfig` is?
b
sorta, but you can see that the user cert is intended for the controller itself, so i assume later down the line, that would skew any management or telemetry
like, it gets you there functionally, but it's not "correct"
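for example, you can check whose identity that file actually carries; this assumes the kubeconfig references the client cert as a file rather than embedding it, and the exact paths may differ by k3s version:
```
# pull the client cert path out of the controller kubeconfig, then print its subject
CRT=$(sudo awk '/client-certificate:/ {print $2}' /var/lib/rancher/k3s/agent/k3scontroller.kubeconfig)
sudo openssl x509 -noout -subject -in "$CRT"
```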
b
I don't think configs on the nodes are intended to be unique.
b
to that end, on control plane (server) nodes, the files `/var/lib/rancher/k3s/agent/k3scontroller.kubeconfig` and `/etc/rancher/k3s/k3s.yaml` are using different users and certs
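a quick way to see the difference on a server node, where both files exist (just a sketch, assuming defaults):
```
# compare the user entry each kubeconfig defines; the names and credentials differ
sudo kubectl config view --kubeconfig /etc/rancher/k3s/k3s.yaml -o jsonpath='{.users[*].name}{"\n"}'
sudo kubectl config view --kubeconfig /var/lib/rancher/k3s/agent/k3scontroller.kubeconfig -o jsonpath='{.users[*].name}{"\n"}'
```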
b
But that sounds like what you're asking?
b
I would leave that to be an implementation detail. I'm really just saying write a usable config in the exact same way that control plane nodes do
whatever mechanism that is
b
I'm out of my depth here, but that sounds clearer than what's listed in the issue on GitHub.
b
hmm, well thanks for helping me flesh it out! I'll try to revise the description to be more clear
b
I would point out that on the k3s nodes that are part of the control plane, the files `/var/lib/rancher/k3s/agent/k3scontroller.kubeconfig` and `/etc/rancher/k3s/k3s.yaml` use different users and certs, and you're requesting a similar configuration for the other nodes.
But maybe the devs will understand with what you already got.
🙂 Sorry I couldn't be of more help
b
hey no worries! this is a dev cluster and i appreciate chances to improve context setting. it's a struggle at work too.
thanks again
b
np
c
Agent nodes are unprivileged and are not intended to have a copy of the admin kubeconfig
If you want just any old kubeconfig that will work, you can misuse one of the other ones that the agent does have - like the one for the kubelet, as demonstrated above.
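For reference, the ones an agent keeps for its own components typically live here (exact filenames vary by version):
```
# list the kubeconfigs a k3s agent already has on disk
sudo ls -l /var/lib/rancher/k3s/agent/*.kubeconfig
```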
b
The kubeconfig is generated by the kube-apiserver. Since there is no kube-apiserver on agents, the admin cluster config would need to be sent over the wire.... terrible security default.
c
But, it is intentional that only server nodes provide you with an admin kubeconfig.
As Trent says, would be a security boundary violation
b
okay, i'm happy enough with the ugly controller hack for dev clusters
thanks for the additional context. I'll transpose to the issue and close it
you know, honestly, this is pretty nice as a default. the controller permissions don't allow delete. it lets a developer poke around before they shoot themselves in the foot
later they can update the config to a real cluster user
b
lol why not just create a set of permissions for your users and craft a specific kubeconfig with the principle of least privilege. You can manually craft kubeconfigs (https://vimalpaliwal.com/blog/2019/12/8064ef5f27/how-to-create-a-custom-kubeconfig-wi[…]ess-to-k8s-cluster-using-service-account-token-and-rbac.html) or just have Rancher do it for you. Don't let your devs log on to nodes. Heck I wouldn't even give devs access to the cluster at all.... have them commit into a repo that is watched by a CD tool and a mutating/validating webhook and a policy engine like #kubewarden
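rough sketch of that ServiceAccount + RBAC + token approach (all the names here are made up, the CA path is the usual k3s location, and `<server-ip>` is a placeholder - adjust as needed):
```
# namespace, service account, and a narrow read-only role bound to it
kubectl create namespace dev-sandbox
kubectl -n dev-sandbox create serviceaccount dev-readonly
kubectl -n dev-sandbox create role pod-reader --verb=get,list,watch --resource=pods --resource=pods/log
kubectl -n dev-sandbox create rolebinding dev-readonly-binding \
  --role=pod-reader --serviceaccount=dev-sandbox:dev-readonly

# issue a short-lived token for the service account (requires Kubernetes v1.24+)
TOKEN=$(kubectl -n dev-sandbox create token dev-readonly)

# build a standalone kubeconfig around that token
kubectl config set-cluster dev --server=https://<server-ip>:6443 \
  --certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt \
  --embed-certs=true --kubeconfig=dev-readonly.kubeconfig
kubectl config set-credentials dev-readonly --token="$TOKEN" --kubeconfig=dev-readonly.kubeconfig
kubectl config set-context dev --cluster=dev --user=dev-readonly \
  --namespace=dev-sandbox --kubeconfig=dev-readonly.kubeconfig
kubectl config use-context dev --kubeconfig=dev-readonly.kubeconfig
```
hand out dev-readonly.kubeconfig instead of the admin config and the dev can only read pods/logs in that namespace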
b
heh. context always matters most. this is happening in vagrant on their laptop. and i don't know how long this flow will live. best to learn as the context changes
is there a lot of pressure to think of k3s as production ready? the solution you've proposed does not seem like anything i would apply to k3s
b
> is there a lot of pressure to think of k3s as production ready?
...i'm not sure what you mean by this? k3s is used in production all the time
what I provided is a kubernetes-native concept - and k3s === kubernetes
c
K3s is production-ready. has been for years. How you manage user access to your clusters does not sound production-ready.
b
again, on a laptop, not meant for prod or even mentioned that
noted
c
if they’re running it on a laptop then just give them the admin kubeconfig from /etc/rancher/k3s/k3s.yaml
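e.g. on the laptop itself (where the server runs), copying it into place is usually all it takes, since by default it points at https://127.0.0.1:6443 (sketch, assuming a default install):
```
mkdir -p ~/.kube
# the redirect runs as the user, so only reading the file needs root
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
chmod 600 ~/.kube/config
kubectl get nodes
```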
b
i appreciate the suggestion. thank you
b
yea I wouldn't overthink it for a local non-prod dev setup
c
or are you doing something weird where you have a central server but individual devs provide their own agents? I’m not really understanding what you’re trying to do.
b
rancher-desktop is pretty cool for the dev use case!
c
why are you even talking about agents if you have devs running the full server on their laptops
b
i'm doing something weird, but i really don't want to explain any further 🙂 and yes, rancher desktop is awesome
thanks again 🙂
c
nodes are not meant to be a privilege barrier within the cluster. one-node-per-namespace or something weird like that is not a pattern I usually see people observing.
b
that one-node-per-namespace is a horrible anti-pattern and just asking for trouble. Kubernetes is like a hammer. If you use it like a hammer it's great... but if you try to use it to fix a crack in your glass......
b
i have never heard of that convention
b
then ignore it haha 😂 Brad is just reminding me of Spooky things even after Halloween is over 🎃 👻
b
i can at least promise both of you that i'm not using any nodes for access control or anything else like that, and i appreciate the concern. I do intend to use k3s in some ways later that could be considered prod-adjacent, but I never thought of k3s as a deployable product beyond edge use cases.
c
why would that be
it's used in production, at scale, all over the place. Just because you can easily spin it up on a small device doesn’t make it any less ready for large-scale use.
b
i really don't know how to reply other than to say that's the perception i've gotten at different jobs and meet ups
when i'm not doing weird stuff, i'm usually working on cloud managed clusters like eks and gke, so maybe it's just about the circles i run in 🤷
i think the consensus was that by the time you get done configuring it for a production deployment, you've stripped away so much of the out-of-the-box magic that it's not a big leap to go straight to k8s
i'll admit, as you've seen, i just listen and go my own way, so always happy to hear more 🙂
c
that is certainly one class of production environment. There are more folks running Kubernetes in “production” in retail / hospitality / industrial environments than you might suspect if you mostly work in SaaS.
b
people run clusters at the edge for hotels and stuff?
i know even for pizza places i've installed on-site redundant linux servers, but clusters seems like a whole new level
that said, i bet it makes hardware servicing suuuuper easy.