# k3s
c
no. the only supported migration is sqlite -> embedded etcd
you could copy the data by hand and then set the datastore url, but we won’t do that for you
also, you’re referring to an env variable; if you were to put that in the config file it would use a different key.
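For reference, the env variable is `K3S_DATASTORE_ENDPOINT` (equivalent to the `--datastore-endpoint` flag), while the config-file key is kebab-case without the prefix. A sketch of the config-file form, with placeholder credentials and host:

```yaml
# /etc/rancher/k3s/config.yaml
# Same setting as K3S_DATASTORE_ENDPOINT / --datastore-endpoint;
# the username, password, and db-host below are placeholders.
datastore-endpoint: "mysql://username:password@tcp(db-host:3306)/k3s"
```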
m
Yeah, I didn't otherwise see a way to set this in the cluster.yaml
Looks like you've already answered this question before 😅 - https://github.com/k3s-io/k3s/discussions/7936
Would it be possible to do an etcd backup from Rancher, change the datastore to mysql, and then do a restore on it?
Also, is it possible to run a downstream k3s on a mariadb/mysql backend from scratch (assuming the external database already exists)?
c
you can’t do an etcd backup if you’re not using etcd
do you mean a datastore backup using something like velero?
and no, Rancher provisioned clusters won’t use anything except etcd. If you want sqlite or another external DB you have to provision it by hand and then import it into rancher
m
I mean etcd-snapshots from the Rancher UI
c
etcd snapshots only work on rancher-provisioned clusters, which always use etcd
m
Yeah, that's what I meant by "downstream"
c
if you provisioned your cluster using rancher, it’s already using embedded etcd, even if it’s just a single node.
m
Ah, good to know!
I guess I can stop dusting off this old old script I wrote then 😅 - https://github.com/vwbusguy/sqlite-to-mysql
So just to confirm, there's no current way to run a downstream provisioned k3s cluster against a mariadb/postgres backend? Does this also count for imported k3s clusters?
c
imported you can do whatever you want since you’re the one building it.
provisioned is hardcoded to use embedded etcd
you might be able to hack it to do something else if you know what it’s doing under the covers, but it’s not supported.
👍 1
I have seen people do it successfully but I’m not going to say how, rancher gets confused about it lol
m
I can definitely respect that. Mainly, I'm just trying to find where the boundaries of sanity are here, and I think you've answered that for me. Many thanks! I have a fairly largish single-node cluster that got into a reboot loop, so I hacked the etcd config on the node to increase the heartbeat and timeout, which got it unstuck. I figured migrating it to an external db would take the load off, but it seems that scaling out is the better next step for this cluster.
Probably shouldn't say largish, since I have k3s nodes with more resources, but it's 24 cores and 72G of memory and it had loads over 24. It seems to have happened while pulling several larger images, so it could be IO bound. The underlying OS storage is ceph (not CSI - but OS level)
Which has been fine before on non-Rancher k3s nodes with a lot less resources, but never tried pulling these large images.
c
yeah, etcd gets fussy about io latency. i have seen issues when it shares backing storage with the image filesystem and there are large, fast image pulls.
every etcd write calls fsync; if there is a bunch of image data coming in, the etcd writes will get backed up behind flushing that data to disk
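A quick way to check whether the datastore disk can keep up is the fio-based fdatasync benchmark from the etcd hardware docs, which want the 99th-percentile sync latency well under 10ms. A command sketch, with the directory assuming the default k3s layout:

```shell
# Write 22MB in 2300-byte chunks, fdatasync-ing after every write, and report
# sync latency percentiles (size/bs parameters follow the etcd docs).
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/rancher/k3s/server/db \
    --size=22m --bs=2300 --name=etcd-sync-test
```

Look at the `fsync/fdatasync` percentile table in the output; a high p99 there is a good sign the image pulls are starving etcd.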
m
Is it possible to mount a separate disk/volume between where etcd lives and where the pod data is stored?
c
yeah, you can put either on different partitions. they’re in different subdirs under /var/lib/rancher/k3s
m
Looks like etcd data is in /var/lib/rancher/k3s/server/db/etcd/? At least the snapshot and wal files are there
c
yeah I would probably just put server or server/db on a separate partition
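If you go that route, an /etc/fstab sketch; the device name and filesystem are assumptions, adjust for your setup:

```
# dedicated device for the k3s datastore (/dev/sdb1 is a placeholder)
/dev/sdb1  /var/lib/rancher/k3s/server/db  ext4  defaults,noatime  0  2
```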
👍 1
m
I assume shutting down the k3s service is sufficient to rsync the data to the new filesystem and remount it before restarting k3s, or do I need to shut down the rancher-system-agent, too?
c
I would stop rancher-system-agent first, then k3s. otherwise rancher-system-agent may try to do other things, like restart it.
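Putting that ordering together, a sketch of the move; the device name and temporary mount point are placeholders, and this assumes the new partition is already formatted:

```shell
# Stop the agent first so it can't restart k3s mid-copy, then k3s itself.
systemctl stop rancher-system-agent
systemctl stop k3s

# Copy the datastore onto the new filesystem via a temporary mount.
mount /dev/sdb1 /mnt/newdb
rsync -aHAX /var/lib/rancher/k3s/server/db/ /mnt/newdb/
umount /mnt/newdb

# Remount the new device in place (add a matching /etc/fstab entry too).
# The old copy stays shadowed under the mount point; reclaim it later if
# you need the space back.
mount /dev/sdb1 /var/lib/rancher/k3s/server/db

systemctl start k3s
systemctl start rancher-system-agent
```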
m
Many thanks!