
kind-vase-40458

12/20/2022, 7:30 PM
Hi all, I'm hoping to understand the motivation for k3s-server to dynamically sign new certificates like this. From the code, it handles short-lived certs and adds new SANs (I know it also consults dynamic-cert.json, but I couldn't find that in the code). Why this departure from standard K8s? I can't find any K8s docs about it.

creamy-pencil-82913

12/20/2022, 7:59 PM
K3s dynamically generates all of its certificates. This prevents the administrator from having to know the node names and such that will participate in the cluster ahead of time. If K3s needs a cert for a node or component, it generates it.
Would you prefer to have to manually generate all the certs ahead of time, or run a prepare step as kubeadm does?
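In rough terms, the idea looks something like this: a minimal, illustrative Go sketch of on-demand signing with crypto/x509. This is not K3s's actual code; K3s delegates this to the rancher/dynamiclistener library (which, as far as I can tell, is what maintains dynamic-cert.json; that's why it doesn't show up in the K3s repo itself).

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed cluster CA (the autogenerated ones get a 10-year expiry, as noted later).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: fmt.Sprintf("k3s-server-ca@%d", time.Now().Unix())},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// A serving cert signed on demand, with whatever SANs the node turns out to need.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		DNSNames:     []string{"kubernetes", "kubernetes.default", "some-new-node"},
		IPAddresses:  []net.IP{net.ParseIP("172.17.0.4")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued serving cert: %d bytes DER\n", len(srvDER))
}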

kind-vase-40458

12/20/2022, 8:07 PM
In our system, every host is provisioned with a machine cert issued by a central CA, and we have an internal service mesh that negotiates TLS. So I'm hoping that every client and agent, as well as the server, will use their respective machine TLS certs.
The solution that works for us follows your recommendation here for a self-signed CA.
Currently, we store the generated server CA in a central secret repo, so every client and k3s-agent can also fetch it.
However, our internal security team is asking if we can avoid manually handling/storing this new CA.
In K8s, we have these options on the server:
--tls-cert-file
--tls-private-key-file
and then the kubeconfig has certificate-authority so clients can verify the cert the server presents,
following this guide ("The API server's TLS certificate (and certificate authority)"). There are no dynamic certs there, so the server doesn't need to take a CA; the client has the CA for verification.
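(For reference, the upstream setup described here looks roughly like this; the paths are placeholders:)

kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

# and the client kubeconfig carries the CA that signed apiserver.crt:
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://my-apiserver:6443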

creamy-pencil-82913

12/20/2022, 8:17 PM
We don't really support that at the moment. We are working on allowing the CA certs to be issued by an existing root, but there is certainly no support for (nor are we planning to allow) the user providing all the various server and client certs required by K3s.
You can track the work to support signing the cluster CAs with an existing org CA at https://github.com/k3s-io/k3s/pull/6615

kind-vase-40458

12/20/2022, 8:19 PM
thanks so much for the clarifications here! so glad to be able to get a first-hand response from you so quickly!
oh hmm, I think even with that, it will require that the server has access to the root CA private key, right?

creamy-pencil-82913

12/20/2022, 8:21 PM
I will also say that Kubernetes isn’t really designed for use with a single root CA. There are some things that just don’t work if you do that. If you look at the code and the requirements, it’s pretty clearly designed for multiple independent CAs that are not trusted by the OS CA bundle, or even by other Kubernetes components or applications running within the cluster. It is expected that there will be many small self-signed CAs, and that components will handle establishing a root of trust via Kubernetes itself.
keep in mind that Kubernetes doesn’t handle certificate revocation or anything like that - so you really want a VERY small blast radius for your cluster CAs.
The Kubernetes CAs aren't intended to be trusted by the OS, or used to authenticate anything more than Kubernetes components. Applications should handle TLS on their own, either by generating self-signed certs themselves, or by using tools like cert-manager to provision certificates from an external trusted CA.

kind-vase-40458

12/20/2022, 8:25 PM
using tools like cert-manager to provision certificates from an external trusted CA.
with this solution, the cluster's current dynamic-cert functionality will require access to the private key of this CA? Is that generally reasonable? Maybe it's our internal constraint here that security will not give us the private key for the root CA.

creamy-pencil-82913

12/20/2022, 8:25 PM
We have customers that really want to use an existing organizational CA intermediate to sign their cluster CAs and rotate the cluster CAs yearly, and there is a 10-year expiration on the autogenerated CAs, so we're working on better tools to rotate or renew the cluster CAs. But Kubernetes itself isn't really designed to have everything managed by an external CA.
cert-manager is usually used to call out to an ACME certificate provider
you’ll notice that Kubernetes and Kubernetes-native applications almost ALWAYS allow you to specify not just component addresses, but also client certificates to use to authenticate with, and a CA bundle to use to authenticate the remote. This is because it is just kind of expected that everything will have its own CA.

kind-vase-40458

12/20/2022, 8:28 PM
btw Brandon, I'm from Stripe, and we are now managing a small K3s cluster. We are considering a managed cluster like EKS, and I'm wondering if we can get more information about the Rancher managed solution.
creamy-pencil-82913

Even though the custom CA certificate may be included in the filesystem (in the ConfigMap kube-root-ca.crt), you should not use that certificate authority for any purpose other than to verify internal Kubernetes endpoints. An example of an internal Kubernetes endpoint is the Service named kubernetes in the default namespace.
If you want to use a custom certificate authority for your workloads, you should generate that CA separately, and distribute its CA certificate using a ConfigMap that your pods have access to read.
Upstream Kubernetes straight up tells you not to use the Kubernetes CA for anything except the cluster itself
I’m not the best person to represent Hosted Rancher, that’s managed by another team. I can say that it is literally just an instance of Rancher running on K3s, managed by a team here at SUSE. You would not have access to the local K3s cluster, it is only used for hosting Rancher which can in turn be used to provision downstream clusters to run your workload.
Those downstream clusters could be K3s, RKE2, RKE, EKS…

hallowed-ocean-20951

12/20/2022, 9:38 PM
If you go to our product page, you can find information on our hosted solution, called "Rancher Prime Hosted".

kind-vase-40458

12/20/2022, 10:37 PM
thanks david!

hallowed-ocean-20951

12/20/2022, 10:40 PM
Sure, no problem. If you are interested in speaking with one of our account executives on our sales team, let us know.

kind-vase-40458

12/21/2022, 5:06 AM
actually Brandon, can I follow up on the certs please? If we don't provide an external CA cert and key, and just rely on the K3s autogenerated ones, then for a client/component to talk to the API server, I have to do the following:
• For a k3s agent, bootstrap with the node token generated on the server node. This is well-documented (doc).
• For a client calling the API server, it needs the same copy of the self-signed CA generated on the server node, so it can verify the TLS cert that the server presents to it?
Whether I provide my own CA like you suggested here, or rely on the self-signed CA, the clients need to have the same copy of the CA public cert,
like this: "Distributing Self-Signed CA Certificate" (doc)

creamy-pencil-82913

12/21/2022, 7:07 AM
if you just rely on the k3s autogenerated certs and keys, you have to do nothing at all. It just works.

kind-vase-40458

12/21/2022, 7:08 AM
hmm, can you help explain why it would work for a new client that knows nothing about the CA please?

creamy-pencil-82913

12/21/2022, 7:09 AM
It is explained in passing in various issues, but the best description of it is in the PR I linked earlier: https://github.com/brandond/k3s/blob/custom-cert-gen/docs/adrs/ca-cert-rotation.md#server-ca-pinning
tl;dr if you use the full token from the server (the one that starts with K10) to join agents, the token contains a hash of the server CA. This is used to verify that the agent is connecting to the expected cluster. From there the rest of the certificates and agent config are bootstrapped.
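(A rough sketch of that pinning check in Go, assuming the K10<hash>:: token layout and the /cacerts bootstrap endpoint described in the ADR linked above; illustrative only, not K3s's actual code:)

package main

import (
	"crypto/sha256"
	"crypto/tls"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// verifyClusterCA checks that the CA bundle served by the cluster matches the
// hash embedded in the join token before trusting anything else.
func verifyClusterCA(serverURL, token string) error {
	// Token shape (per the ADR): K10<sha256-of-CA-bundle>::<credentials>
	want := strings.TrimPrefix(strings.SplitN(token, "::", 2)[0], "K10")

	// TLS verification is skipped here only because we pin by hash below.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(serverURL + "/cacerts")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	caPEM, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	got := sha256.Sum256(caPEM)
	if hex.EncodeToString(got[:]) != want {
		return fmt.Errorf("server CA hash mismatch: wrong cluster or MITM")
	}
	return nil
}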

kind-vase-40458

12/21/2022, 7:10 AM
ah, that makes sense. I did see that when it's comparing the hash in the log.
I'm thinking of a client making API calls to the API server, not the k3s-agent.

creamy-pencil-82913

12/21/2022, 7:11 AM
this is the same thing that kubeadm does, except kubeadm does it by connecting to the apiserver and retrieving a configmap, which requires that anonymous access to the apiserver be enabled. which isn’t ideal.
a client making a call to the apiserver has a kubeconfig?
maybe I’m not understanding the question.
brandond@dev01:~$ cat ~/.kube/k3s-server-1.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpFMU1ESXpNREF3SGhjTk1qSXhNakl3TURJeE1UUXdXaGNOTXpJeE1qRTNNREl4TVRRdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpFMU1ESXpNREF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUL3poaVVDWFZtc1BHd0JSZ0R2dmI4cllqTXNZQlVOTktlUXlVMEFFWXgKK2s4UENyWVQ5aTlRK1pHL1g0SHF5ekRpWjYxUTFQYmdaOGFuZC9VM2F2MzRvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXhocXhwakwvakhUTnJQUWZpSmpVCmZYZWpScG93Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQU93WXNSVzNTTGhBdjZmVlRZN2V1bEp6ZWVsaklSQnYKd1cyTjJQQVBHTXRyQWlFQWg4SU5KNlMzZElTUVRlb1BXZzJpVXdObnl3MVBzT2c1Z1dxVHkvaDJpR0k9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://172.17.0.4:6443
  name: default

kind-vase-40458

12/21/2022, 7:13 AM
import (
	v1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/rest"
)

// NewCoreV1ClientForConfig builds a CoreV1 client, copying the bearer token
// and CA data out of our internal secret-handling wrappers into the rest.Config.
// (SecretHandleConfig and unsaferawsecret are our internal packages.)
func NewCoreV1ClientForConfig(c *rest.Config, shc SecretHandleConfig) (*v1.CoreV1Client, error) {
	if shc.BearerToken != nil {
		shc.BearerToken.UnsafeAccessRawSecret(func(raw *unsaferawsecret.RawSecret) {
			c.BearerToken = string(raw.Value)
		})
	}

	if shc.CAData != nil {
		shc.CAData.UnsafeAccessRawSecret(func(raw *unsaferawsecret.RawSecret) {
			c.CAData = raw.Value
		})
	}

	return v1.NewForConfig(c)
}

creamy-pencil-82913

12/21/2022, 7:13 AM
The cluster CA data is right there in the kubeconfig

kind-vase-40458

12/21/2022, 7:13 AM
when I'm making a programmatic call to the API server via a client...
I have to pass in the CA

creamy-pencil-82913

12/21/2022, 7:14 AM
normally you don’t build all that by hand?
why are you doing that

kind-vase-40458

12/21/2022, 7:15 AM
this is outside the cluster; it's a separate service calling into the API server.
we have functionality that generates service accounts asynchronously

creamy-pencil-82913

12/21/2022, 7:15 AM
then just use a kubeconfig that has the cluster CA data in it

kind-vase-40458

12/21/2022, 7:16 AM
so basically I have to "distribute the self-signed CA" to the client, right?
so this is the part our security team is less happy about: manually distributing CA data to clients. Whether we use a self-signed or an external CA, we have to distribute a copy of it somehow outside the cluster

creamy-pencil-82913

12/21/2022, 7:17 AM
it’s in the kubeconfig. just use a kubeconfig with the ca data in it. this is a well solved problem

kind-vase-40458

12/21/2022, 7:18 AM
"just use a kubeconfig", does this mean i'm copying the data in config and put it where i want to support client outside the cluster?
c

creamy-pencil-82913

12/21/2022, 7:18 AM
You will see pretty much the exact same behavior from managed kubernetes services
the kubeconfig pretty much always contains the CA bundle for the cluster. because clusters always have their own CAs.

kind-vase-40458

12/21/2022, 7:20 AM
got it! I just want to confirm my understanding that we cannot avoid handling CA data manually, in this case by passing the kubeconfig data around outside the cluster

creamy-pencil-82913

12/21/2022, 7:20 AM
you know what a kubeconfig file is right? It contains all of the configuration necessary for a Kubernetes client to connect to the apiserver.
it is almost always true that the kubeconfig includes not just the server address and user credentials (or a command that can be invoked to retrieve credentials) but also the CA data for the cluster that will be connected to.
If you have a client that is not using in-cluster config (reading CA data and serviceaccount token injected into the pod), you will use a kubeconfig file to tell that client how to connect to the cluster.

kind-vase-40458

12/21/2022, 7:22 AM
It contains all of the configuration necessary for a Kubernetes client to connect to the apiserver.
thanks for explaining. It seems like where we manually build the client as I shared above, I could use a kubeconfig instead.

creamy-pencil-82913

12/21/2022, 7:22 AM
that is how Kubernetes clients work. You should not be hand-crafting connections. Just distribute a kubeconfig and call https://pkg.go.dev/k8s.io/client-go/tools/clientcmd#BuildConfigFromFlags

kind-vase-40458

12/21/2022, 7:23 AM
makes sense

creamy-pencil-82913

12/21/2022, 7:24 AM
normally you just do
config, err := clientcmd.BuildConfigFromFlags("", cfg.Kubeconfig)
where cfg.Kubeconfig is the path to a file.
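(Putting that together, a minimal standalone client looks something like this; the kubeconfig path is a placeholder:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Server address, CA data, and credentials all come from the kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig.yaml")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pods in default\n", len(pods.Items))
}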

kind-vase-40458

12/21/2022, 7:25 AM
I think I saw in the docs that the kubeconfig is found on the server node at a well-defined path, right?
After all the self-signed CAs are created during first start-up

creamy-pencil-82913

12/21/2022, 7:26 AM
the admin kubeconfig is, yes, but you shouldn't pass that around

kind-vase-40458

12/21/2022, 7:26 AM
oh so where is the kubeconfig that i can give to my client outside the cluster?
i guess i have to populate it myself

creamy-pencil-82913

12/21/2022, 7:27 AM
I mean you can give that to people if you want them to be god-mode on the cluster
If you want to create separate users with their own RBAC you need to do that yourself
you could use something like Rancher to manage RBAC, allowing users to log in via SSO/LDAP/etc, or you can just do it the hard way.
That IMO is one of the biggest value-adds of Rancher. Kubernetes itself does not come with any good tools for managing RBAC or users. You want users, you create certificates with usernames and groups. You want RBAC, you create roles and bind groups to those roles.
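(The hard way looks roughly like this: a role, and a binding for a group name that appears in a user's client certificate. The names here are placeholders:)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ops-team # matches the O= organization field in the user's client cert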

kind-vase-40458

12/21/2022, 7:29 AM
Currently, we don't use a kubeconfig for clients outside the cluster. For a limited set of those clients, we create an associated service account. We store the CA and service account token in our central secret infra, so when we build the kube client, we manually build it with the CA and token, as I showed above.
This isn't quite scalable, though, but our cluster serves a singular purpose and we don't have unbounded clients here.

creamy-pencil-82913

12/21/2022, 7:31 AM
there are places in the kubeconfig for SA and token. Just distributing a yaml kubeconfig would probably be easier than pulling it all together by hand.
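(That is, roughly this shape: the same cluster stanza as the kubeconfig shown above, plus a user entry carrying the service account token. The values are placeholders:)

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded cluster CA>
    server: https://172.17.0.4:6443
  name: default
users:
- name: my-service-account
  user:
    token: <service account token>
contexts:
- context:
    cluster: default
    user: my-service-account
  name: default
current-context: default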

kind-vase-40458

12/21/2022, 7:31 AM
ahh

creamy-pencil-82913

12/21/2022, 7:32 AM
Kubernetes doesn't make it super easy to authenticate from outside the cluster. They really want you to run stuff in the cluster and use in-cluster config. If you don't do that then you are left to your own devices to grab the CA data, tokens, or client certs+keys, and bundle that into a kubeconfig that standalone clients can use.
Or come up with some tool that the external clients can ping to retrieve that data.
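(A common pattern for that, sketched in Go in the style of the client-building snippet above: prefer in-cluster config and fall back to a kubeconfig for standalone use:)

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset prefers in-cluster config (CA and serviceaccount token injected
// into the pod) and falls back to a kubeconfig path for standalone clients.
func newClientset(kubeconfigPath string) (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			return nil, err
		}
	}
	return kubernetes.NewForConfig(config)
}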

kind-vase-40458

12/21/2022, 7:35 AM
So in summary, if we use the self-signed CA:
• For in-cluster usage, e.g. the k3s-agent, things will just work. But under the hood, we rely on the node token, which we have to distribute to agents after the server starts.
• For outside-cluster usage, we will need to know the self-signed CA. The best way to propagate this CA info is via the kubeconfig YAML.

creamy-pencil-82913

12/21/2022, 7:36 AM
correct. You will find that kubeconfigs always contain the cluster CA data, regardless of what cluster offering you're using. That is standard behavior for Kubernetes clients. If you're hand-rolling client credentials, you should be building kubeconfigs that include the server address, CA data, and client credentials (token or cert+key).

kind-vase-40458

12/21/2022, 7:37 AM
One question on the self-signed CA: for a given cluster, if I cycle the machines hosting the k3s-server, and the certs are no longer there in the tls folder, what prevents a new CA from being generated?

creamy-pencil-82913

12/21/2022, 7:38 AM
that is also covered in the file I linked you to earlier, in my PR fork of K3s.
tl;dr it is stored in the datastore. as long as you retain the datastore, the same CA data is used.
If you build a new cluster from scratch, with a new datastore, you’ll have new CA certs.

kind-vase-40458

12/21/2022, 7:40 AM
ok, that's one argument for us to use a generated external CA: if we build a new cluster, we don't have to change all the kubeconfigs with the CA that's already distributed to clients.
alright, thank you so much for your time and patience with me here! this clarifies things a lot

creamy-pencil-82913

12/21/2022, 8:02 AM
yeah… in general clusters are not rebuilt excessively frequently.
and if you're generating serviceaccount tokens, those wouldn't be valid on the new cluster either, since the serviceaccount token signing key is also cluster-specific.

kind-vase-40458

12/21/2022, 8:05 AM
ahh great point!
there's no clear advantage to using the self-signed CA over providing an external CA generated via openssl as suggested in this issue, right?

creamy-pencil-82913

12/21/2022, 8:09 AM
if you follow those steps you will have the exact same CA configuration as if you’d just let K3s generate them itself

kind-vase-40458

12/21/2022, 8:09 AM
haha ok that's great to know.. we did follow your steps there

creamy-pencil-82913

12/21/2022, 8:10 AM
Yeah then you haven’t really changed anything

kind-vase-40458

12/21/2022, 8:10 AM
and we store the openssl cert in our central secret infra. If we use self-signed, we will just look up the generated CA and similarly store it in our secret datastore.

creamy-pencil-82913

12/21/2022, 8:11 AM
sure but the end result there is the exact same as if you just started k3s and then copied the certs and keys off the disk on the server

kind-vase-40458

12/21/2022, 8:11 AM
agreed!

creamy-pencil-82913

12/21/2022, 8:12 AM
Hopefully you’re not using the same certs and keys on multiple clusters, or any kubeconfig valid for one of them would be valid for the others as well.

kind-vase-40458

12/21/2022, 8:12 AM
i guess when people want to use their own CA, they want to have control over CN and key usage

creamy-pencil-82913

12/21/2022, 8:12 AM
I am not really sure why that matters

kind-vase-40458

12/21/2022, 8:12 AM
ah, we only have one small cluster at this point, but noted for when we scale to multiple

creamy-pencil-82913

12/21/2022, 8:13 AM
why would you care what the cluster’s CA looks like? It’s just a self-signed cert that you need to copy around to clients so that they trust it.

kind-vase-40458

12/21/2022, 8:14 AM
ah true, I don't have a special use case actually; the steps we followed in that issue suffice for us. I just wonder what the other use cases might be.

creamy-pencil-82913

12/21/2022, 8:14 AM
If you want to use a common root and/or intermediate to sign all the cluster CAs, you’d want to use the script from my PR. But you should also know that the cluster will not work properly at the moment if you do that, as there are some additional changes needed to support a CA bundle that has more than one cert in it. Right now everything assumes the cluster CA is just a single self-signed cert, not a root+intermediates.

kind-vase-40458

12/21/2022, 8:15 AM
ack
we have no use case for that

creamy-pencil-82913

12/21/2022, 8:16 AM
Most people that want to do it seem to fall into one of two camps:
• people that don't want to worry about things expiring, and want to make the certs valid basically forever so they don't have to worry about expiry if they forget to patch or reboot the cluster occasionally
• people that have paranoid security teams that don't want to see self-signed certs anywhere on their network, and have decided that rather than granting their Kubernetes infra a waiver, they'd prefer to weaken their security posture by distributing CA certs+keys to all their servers so that they can issue certs trusted by their internal PKI
people in the second camp also tend to rotate CA certificates really frequently, like once a year, which sounds like a huge waste of everyone's time. but that's banking/government for you.