# harvester
b
Hi, that depends on the file location, as in the full path.
b
/oem/90_custom.yaml
b
I believe that's only processed on first boot before the system goes read-only
b
No, it's processed every boot. Screwing it up often leaves the node with no IP. I think you might be thinking of another file in that directory:
harvester.config
m
AFAIK it's just a cloud-init config file
b
We are also experiencing this issue. @bland-article-62755, if you discover how to identify the root cause of this, could you please share here what you’ve found? Thank you!
b
The root cause is that I screwed up the YAML.
I mean, something applies the yaml, so maybe the cloud-init service?
What I'm really hoping for is a service whose logs I can look at to see where I went wrong.
b
journalctl -b
to see the boot log
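If you want to narrow it down, something like this might help (the unit/pattern names here are guesses and vary by Harvester version):
```
# Full log for the current boot
journalctl -b

# Narrow to the Elemental/cOS cloud-init stages (pattern names are assumptions)
journalctl -b | grep -iE 'elemental|cos-setup|yip'
```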
b
Yeah, that shows the overall systemctl status but not the nitty-gritty details.
b
Right, but it doesn't mention what actually applies the file, other than that it's a cloud-init file.
(as an aside to any devs or project managers that happen to read this: A script to validate/sanity check that file before rebooting would be 🪄 )
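Even something as simple as this would catch most mistakes, assuming yamllint or python3 with PyYAML happens to be on the node (it may not be on a stock install):
```
# Hypothetical pre-reboot sanity check for the cloud-init file
yamllint /oem/90_custom.yaml

# Fallback: just confirm the file parses as YAML at all
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("YAML OK")' /oem/90_custom.yaml
```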
b
I see it referenced here: <https://github.com/harvester/harvester/blob/master/enhancements/20230316-multi-csi-support.md>. There is yamllint to check the syntax? The file will be applied by SLE Micro during boot (use systemd-analyze to see this), so it could be tied into GRUB (cat /proc/cmdline), maybe the initrd... (I need to do an install to check...).
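For example, to poke at what runs during boot:
```
systemd-analyze critical-chain   # ordering of units in the boot sequence
cat /proc/cmdline                # kernel command line handed over by GRUB
```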
s
Hi @bland-article-62755. Sorry for the late reply.
/oem/90_custom.yaml
would be applied by the Elemental stack. (Yes, Harvester uses the Elemental stack for its immutable capability.) You can refer to https://rancher.github.io/elemental-toolkit/docs/reference/cloud_init/. Did you encounter any issues here?
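For reference, a minimal sketch of what such a file can look like (stage names and directives follow the elemental-toolkit cloud_init reference linked above; the hostname and command are just placeholders):
```yaml
name: "Custom user settings"
stages:
  network:
    - name: "Example step"
      hostname: example-node          # placeholder value
      commands:
        - echo "network stage ran"    # placeholder command
```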
b
On a related note: If you have upgraded the cluster from 1.1, this file is named 99_custom.yaml.