# elemental
Additional disks are certainly a valid scenario. I have not done such a configuration with LVM, but it should definitely be possible to format and set up the LVM device at firstboot and then mount it. I'll try to follow up on this and provide some more specific details; this could probably become a documented example.
I thought about trying to add my own "stage" as well, like 90_custom, so I could choose which stage it runs on, but I don't see an easy way to do that either. Thank you David, and good luck 🙂
I thought of using LVM to make it easier to add storage capacity and resize disks without needing to add more disks to the Longhorn pool. I tried this on Harvester too, but no luck, since you can't update /etc/fstab to mount on boot (or I just don't know how). This is the reason I had to go back to virtualizing Harvester; otherwise I would make it my primary OS on the hosts. When you only have 400GB SSDs but need a 500+ GB volume, it's impossible to do by just adding the disks individually to Longhorn, and hardware RAID seems archaic now. Too bad something like ZFS isn't an option here.
@sparse-monitor-30665 I just managed to create an LVM volume group and some logical volumes on top of it in my lab as part of the elemental installation, and they are mounted at boot time once booted into the installed system.
That's awesome 🙂 I spent all day yesterday trying to figure it out. I knew coming on here would be the right course.
I followed the docs from https://elemental.docs.rancher.com/customizing, the trick is essentially similar to what is done there to exemplify how extra drivers can be added:
1. Create the LVM volumes and format them in an install hook (after-install would be good IMHO). The hook needs to be included in the ISO.
2. Add a config-url in the registration to pull an extra cloud-init stage from the ISO. This initramfs stage simply appends lines to fstab, so we let systemd mount the devices.
3. Add a config.yaml file within the ISO to ensure install hooks are considered during the installation. You need to set the cloud-init-paths.
This is all documented in the Custom image section I linked. I see these docs can be improved; I plan to include this new example I did for LVM devices and also try to improve the existing section so it is easier to see how the process could be used for other purposes.
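For instance, the config.yaml for step 3 can be as small as this (a sketch only; the key name comes from the docs linked above, and the path assumes the extra files are shipped under /elemental on the ISO, which the live system mounts below /run/initramfs/live):

# config.yaml shipped in the ISO root so the installer picks up
# the extra cloud-init files and install hooks
cloud-init-paths:
  - /run/initramfs/live/elemental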
Thank you for finding that and pointing me in the right direction. I did look through those docs, but there is so much that it's a little overwhelming at first.
@sparse-monitor-30665 FYI I just opened a PR to update the docs on this topic. I added the LVM example and tried to outline the pattern I followed to customize the installation. I am aware it can be improved, and we'll take action to figure out how to make this sort of configuration easier to do or to communicate. It aims to be a continuous effort.
Oh, forgot to link the PR in case you'd like to have a look before it is actually reviewed and merged: https://github.com/rancher/elemental-docs/pull/38
Two things...
1. You named the oem file here as "_hook" ("The overlay/oem/lvm_volumes_hook.yaml could be something as simple as:") instead of "_in_fstab" as you did earlier.
2. I can't get the actual hook to run. I never see "Running post-install" during the installation process. It finishes "after-install" then reboots.
3. The oem config does get copied to the /oem/... location and runs, but mounting fails as the label doesn't exist.
4. If I use the default /var/lib/longhorn, will that conflict with the bind mount of the same path listed earlier in the fstab file? Should I be doing something like a 'sed -i ...' type replace instead for this path?
*oops, 4 things ... lol
I modified the elemental-iso-add-registration script so it accepts and maps the overlay folder rather than just a registration file (roughly the idea sketched below), and I have the livecd-cloud-config.yaml in the root of the overlay folder. Is there anything else you can think of that I need to do here to get the hook to run?
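What the modified script effectively does is map the overlay into the ISO, something like this (an illustrative sketch of the idea, not the actual script; paths and the /elemental target are assumptions):

# Repack an existing ISO with the overlay directory mapped into it
IN_ISO=elemental-teal.x86_64.iso
OUT_ISO=elemental-teal-custom.x86_64.iso
OVERLAY=./overlay   # contains oem/ hooks and livecd-cloud-config.yaml

xorriso -indev "$IN_ISO" -outdev "$OUT_ISO" \
  -boot_image any replay \
  -map "$OVERLAY" /elemental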
Just noticed above in a previous message where you said "(after-install would be good IMHO)", but then your instructions are for post-install...? I'm going to try the after-install stage instead.
Well, that didn't change anything, I still don't see the hook run in either stage. Also, regarding fstab and /var/lib/longhorn, I checked dmesg for "mount" and see this: [ 20.171538] systemd-fstab-generator[1270]: Failed to create unit file /run/systemd/generator/var-lib-longhorn.mount, as it already exists. Duplicate entry in /etc/fstab? So it looks like I will need to do a 'replace' on that line.
After adding "config-dir: /run/initramfs/live/elemental" to my registration cloud config, it finally picked up the hooks path and ran on after-install. I ended up going back to the docs on the Stable page and mixed and matched to get here.
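For reference, the relevant bit of my registration cloud config now looks roughly like this (the URL is a placeholder, the device is an example, and the exact nesting of config-dir may differ on your setup; the config-dir line is the part that mattered):

elemental:
  registration:
    url: https://rancher.example.com/elemental/registration/<token>   # placeholder
  install:
    device: /dev/sda                               # example device
    config-dir: /run/initramfs/live/elemental      # where the ISO overlay is mounted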
As for mounting, after a long round of trial and error and re-installs (because I forgot that I could edit the oem files on the host), this is what I have:
name: "Longhorn Data LVM"
stages:
initramfs:
- name: "Update fstab with new LVM path for Longhorn"
commands:
- |
sed -i "s|^/usr/local/.state/var-lib-longhorn.bind.*|/dev/rancher/longhorn /var/lib/longhorn xfs defaults 0 0|g" /etc/fstab
boot:
- name: "Unmount Longhorn in case it's mounted via bind"
commands:
- |
umount /var/lib/longhorn
- name: "Mount Longhorn via LVM"
commands:
- |
mount /var/lib/longhorn
I couldn't get it to mount to the new path by just updating fstab in initramfs. Perhaps it would work if I just added a new path and used that in the Longhorn GUI, but that seems like more work. It appeared to be mounted already, and I'm not sure at what stage that happened, but I also couldn't unmount/remount there either. It worked in boot, so I made two stages.
Having had some more time to think about this, I've changed it around so it's now mounting to a subfolder (defaultDisk), similar to how Harvester does it, so I can mount additional storage under the base directory later (nvme backed, etc). I have to do a mkdir first to ensure it exists, then echo to fstab as you did before. But it still does a manual mount in the boot stage.
Sorry for all the messages, I get a little ahead of myself sometimes. I see I can't edit messages after a certain time has elapsed, so... Here are my currently working hook and oem stages. I was able to remove the boot "mount" stage now that I'm not overwriting the base longhorn directory. I am also playing around with Multus so I can use my storage network for Longhorn (like Harvester), hence the second nic. Hook:
name: "Create Longhorn Storage Volume"
stages:
after-install:
- name: "Create LVM"
if: '[ -e "/dev/sdb" ]'
commands:
- |
# Create the physical volume, volume group and logical volume
pvcreate /dev/sdb
vgcreate vg0 /dev/sdb
lvcreate -l 100%FREE -n lv0 vg0
udevadm trigger
# Trigger udev detection
if [ ! -e "/dev/vg0/lv0" ]; then
udevadm settle
fi
- name: "Format LVM"
if: '[ -e "/dev/vg0/lv0" ]'
commands:
- |
# Format logical volume for later use in fstab
mkfs.xfs -L defaultDisk /dev/vg0/lv0
OEM: longhorn.yaml
name: "Longhorn Network and Data Configuration"
stages:
initramfs:
- name: "Setup eth1 network for Longhorn"
if: '[ "$(ip link | grep eth1)" ]'
files:
- path: /etc/sysconfig/network/ifcfg-eth1
content: |
BOOTPROTO='dhcp'
STARTMODE='auto'
permissions: 0600
owner: 0
group: 0
- name: "Create directory for Longhorn defaultDisk"
if: '[ "$(lvs | grep lv0)" ] && [ ! -e /var/lib/longhorn/defaultDisk ]'
commands:
- |
mkdir /var/lib/longhorn/defaultDisk
- name: "Update fstab for Longhorn defaultDisk"
if: '[ "$(lvs | grep lv0)" ] && [ -e /var/lib/longhorn/defaultDisk ]'
commands:
- |
echo "/dev/vg0/lv0 /var/lib/longhorn/defaultDisk xfs defaults 0 0" >> /etc/fstab
@sparse-monitor-30665 Sorry for the delay, yesterday was a pretty busy day and I could not answer you. The post-install hook is indeed not available yet in the stable version; note that in the docs it has been added to the Next version. Sorry, my bad, I should have warned you about that. So the first thing would be adapting the post-install hook to the after-install hook.
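Concretely, the only change needed in the hook from the docs is the stage key, something like this (a sketch; the hook body is abbreviated to a single command):

# Same hook as in the docs, with the stage renamed for Stable
name: "Create LVM volumes"
stages:
  after-install:      # the Next docs use post-install here
    - name: "Create LVM"
      commands:
        - pvcreate /dev/sdb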
> I couldn't get it to mount to the new path by just updating fstab in initramfs. Perhaps it would work if I just added a new path and used that in the Longhorn GUI, but that seems like more work. It appeared to be mounted already, and I'm not sure at what stage that happened, but I also couldn't unmount/remount there either. It worked in boot, so I made two stages.
I am not really following you here, sorry. But this could be related to the fact that you already included that mount point in the immutable rootfs layout configuration in /run/cos/cos-layout.env. Note that in that file there is a list of persistent paths, which are mostly bind mounts from the persistent partition. If you have extra devices to be used as storage, I would simply append the path in fstab as I documented and remove any reference to it in the rootfs stage cloud-init configuration. The immutable-rootfs dracut module creates an fstab entry for every mountpoint defined in /run/cos/cos-layout.env to make sure systemd knows about them, unmounts them as expected on shutdown, and makes them obvious to any user logged into the system. So if you append new fstab lines putting devices over an already existing mount point, I understand systemd will complain on switch root when parsing the fstab file.
This is the default rootfs configuration of Elemental Teal; I suspect you might have something similar or even this exact configuration:
rootfs:
    - if: '[ ! -f "/run/cos/recovery_mode" ]'
      name: "Layout configuration"
      environment_file: /run/cos/cos-layout.env
      environment:
        VOLUMES: "LABEL=COS_OEM:/oem LABEL=COS_PERSISTENT:/usr/local"
        OVERLAY: "tmpfs:25%"
        RW_PATHS: "/var /etc /srv"
        PERSISTENT_STATE_PATHS: >-
          /etc/systemd
          /etc/rancher
          /etc/ssh
          /etc/iscsi 
          /etc/cni
          /home
          /opt
          /root
          /usr/libexec
          /var/log
          /var/lib/elemental
          /var/lib/rancher
          /var/lib/kubelet
          /var/lib/NetworkManager
          /var/lib/longhorn
          /var/lib/cni
          /var/lib/calico
        PERSISTENT_STATE_BIND: "true"
In that case you should remove the /var/lib/longhorn path from the list, as you want to handle that as an extra device. I would not include extra devices within the layout setup unless you have good reasons to do so. The immutable-rootfs module only sets the basic layout to ensure the system can switch root from the initrd to the actual device while keeping most of the OS in read-only mode. Additional devices can easily be mounted and handled after the actual switch root, letting the underlying distribution handle them as it would on a traditional OS.
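Schematically, the edited stage keeps everything above except that one path; abridged here to a handful of the persistent paths for brevity:

rootfs:
  - if: '[ ! -f "/run/cos/recovery_mode" ]'
    name: "Layout configuration"
    environment_file: /run/cos/cos-layout.env
    environment:
      VOLUMES: "LABEL=COS_OEM:/oem LABEL=COS_PERSISTENT:/usr/local"
      OVERLAY: "tmpfs:25%"
      RW_PATHS: "/var /etc /srv"
      # /var/lib/longhorn removed: it is now an extra LVM device mounted via fstab
      PERSISTENT_STATE_PATHS: >-
        /etc/systemd
        /etc/rancher
        /var/lib/rancher
        /var/lib/kubelet
      PERSISTENT_STATE_BIND: "true"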
If I am not mistaken, you managed to work around most if not all of the imprecise advice I gave you from the documentation for the upcoming release. Let me know if you still have some doubts or issues left.
Hi David, no worries on any delays. And sorry for any confusion I may have caused with all my posts while getting into it. I was able to sort things out and make it work. Your original concept/example works well (aside from adapting it to work on the current Stable release). I'm going to stick with just appending new mounts to fstab rather than messing with the core system, so it's easier to work with during upgrades, etc. One final side note: in your example for the install script you have "udeadm settle", but it should be "udevadm settle"; it's missing the v in the middle. That one took me a bit to troubleshoot. 😉 Thanks a tonne for your assistance, and I'm glad we've been able to add more examples to help others accomplish the same in the future.
Glad to help. Gonna fix the script typo, thanks for the alert 👍