# harvester

bumpy-portugal-40754

05/01/2022, 12:41 AM
I noticed that it's possible to roll out various SSH keys to the Harvester nodes at installation time. To follow the normal lifecycle, it must be possible to add or remove keys later at runtime. How can this be achieved?

sticky-summer-13450

05/01/2022, 9:45 AM
ssh-copy-id?
Or ssh rancher@<node> and then vi ~/.ssh/authorized_keys?
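For example, something like this from your workstation (the node name and key path are placeholders, nothing Harvester-specific):

```bash
# Push your public key to the rancher user on a node:
ssh-copy-id -i ~/.ssh/id_ed25519.pub rancher@harvester-node-1

# ...or log in and edit authorized_keys by hand:
ssh rancher@harvester-node-1
vi ~/.ssh/authorized_keys
```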

bumpy-portugal-40754

05/02/2022, 12:13 AM
Really? Thanks! Didn't think it was that easy and "normal". I was told that this was not going to work because the key is lost after every reboot... I think I need to take a deeper look at the node setup.

witty-jelly-95845

05/03/2022, 2:59 PM
SSH keys in VMs (whether provisioned via cloud-init with a key attached during VM creation, or added afterwards) should persist across VM restarts. Which OS/image are you using for the VM?

sticky-summer-13450

05/03/2022, 3:00 PM
I think the OP was asking about the Harvester nodes, not the VMs running on the nodes - although it's the same answer for both 🙂
👍 1

witty-jelly-95845

05/03/2022, 3:17 PM
Note that IP addresses of VMs can change after a reboot, which might give the appearance of SSH keys disappearing - they're not; you're just authenticating to what is now the wrong IP.

bumpy-portugal-40754

05/04/2022, 8:36 PM
Yes, I meant the Harvester nodes only. I have multiple admins here and need to decide whether to use one shared SSH key for the rancher user or deploy separate keys for each admin. When somebody leaves, I'd have to deploy a new shared key vs. just adjusting the authorized_keys file. I prefer multiple keys, but I was also told that this won't work because the authorized_keys file is reset after a reboot or update. As per your description this doesn't seem to be true.
I was again told that one cannot change the SSH keys after installation. Can someone from SUSE confirm this?
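If per-admin keys do persist (see the replies below), offboarding an admin becomes a one-line edit per node. A sketch, with made-up key comments:

```bash
# /home/rancher/.ssh/authorized_keys holds one line per admin, e.g.:
#   ssh-ed25519 AAAA... alice@example.com
#   ssh-ed25519 AAAA... bob@example.com

# When alice leaves, delete only her entry on each node:
sed -i '/alice@example\.com/d' ~/.ssh/authorized_keys
```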

witty-jelly-95845

05/05/2022, 11:58 AM
I'm not SUSE, but that's poop - after installation /home/rancher/.ssh/authorized_keys doesn't exist on a Harvester node (unless you choose to import keys during install - I don't*). I create and add my desktop key post-install on both my nodes. I can then edit that file later and the changes stick post-reboot, as I've just tested. Even if whoever told you that confused nodes with VMs, it's still not true, as you can change those too (though cloud-init can confuse things a bit). *It's possible that importing keys during install starts a process that rewrites the file, but that would be nuts for this exact reason!
👍 1
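A minimal sketch of such a test, run as the rancher user on a node (stock paths; the key filename is just an example):

```bash
# Create the ssh dir and authorized_keys with the usual permissions:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat desktop-key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Reboot the node, then check that the key survived:
cat ~/.ssh/authorized_keys
```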

bumpy-portugal-40754

05/05/2022, 5:03 PM
Ok, thanks. I'm currently trying to install Harvester in a VM (no bare metal here) to play around a bit myself. I was wondering why Rancher would introduce such a "feature"... it would be a really bad design decision, even if the infrastructure should generally be immutable.
So far I can confirm that any files in the rancher home directory survive a reboot. I couldn't test an upgrade because the setup hasn't completed successfully yet. Maybe it just takes a long time, maybe there is some other issue with this virtual setup.
The recommended resources of 4 CPUs and 8 GB of memory are not enough to run Harvester in a VM; 5 CPUs and 12 GB are my minimum. But the update from 1.0.0 to 1.0.1 doesn't complete. Not sure if this is also a resource issue. The described bugfix is not working... Sigh.
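For what it's worth, a libvirt sketch of a nested test VM with that sizing (name, disk size, and ISO filename are illustrative, and the sizing reflects this experience rather than official requirements):

```bash
# 5 vCPUs, 12 GiB RAM (virt-install takes MiB):
virt-install \
  --name harvester-test \
  --vcpus 5 \
  --memory 12288 \
  --disk size=250 \
  --cdrom ./harvester-v1.0.0-amd64.iso \
  --os-variant generic \
  --cpu host-passthrough
```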

witty-jelly-95845

05/06/2022, 12:37 PM
4 CPU cores should just about be enough, but 8 GB RAM is not. I've yet to try upgrading my nodes as I don't want to break them right now!

bumpy-portugal-40754

05/06/2022, 1:58 PM
Nope. 4 cores are not enough, because at some point the overall CPU reservations cannot be satisfied and some pods no longer start. This is easily visible in the cluster events.
Update: I could update my Harvester VM from 1.0.1 to 1.0.2-rc1. The update from 1.0.0 to 1.0.1 hangs reproducibly; the fixes described in the update notes don't work. All self-created user data also survived a Harvester update. SSH keys and other files in /home, /root etc. are persistent. There is a 60 GB partition for persistent data which is mounted into the mentioned directories. This also means that ssh-copy-id is a valid way to copy SSH keys to the rancher user. Creating other users will not work, because /etc is not mounted on the persistent partition.
1.0.2 needs 4.58 reserved CPUs, and 4.89 reserved while the update is running. Somebody should fix the minimum values for the virtual setup... or lower the CPU requests of the application pods.
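Two quick checks behind those observations (standard commands; the node name is a placeholder):

```bash
# Confirm /home is backed by the persistent data partition:
findmnt /home

# Inspect the CPU reservations that pods request on a node:
kubectl describe node harvester-node-1 | grep -A 8 'Allocated resources'
```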