# harvester
salmon-city-57654:
Hmm, you should not need to enable iscsid manually; the corresponding Longhorn pod will handle it. What situation did you face that made you enable iscsid yourself?
powerful-easter-15334:
While troubleshooting the above, I found iscsid disabled, though the Longhorn docs say it should be running. It was just an attempt to fix my disk issue.
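For anyone else checking the same thing, this is roughly what I looked at on the host (assuming systemd manages iscsid there, which it does on Harvester's SLE-based OS):
```
# check whether iscsid is running and enabled on the host
systemctl status iscsid

# enable and start it in one go; per the reply above, the Longhorn
# pod should normally take care of this itself
systemctl enable --now iscsid
```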
@salmon-city-57654 I saw your reply to my (xelab04) GitHub issue, thank you. I'm trying to sort out my networking issues, then I will take a look at Longhorn again. Though, of Longhorn's requirements, lsblk, blkid, awk, and findmnt are missing. Should I try installing these and see what happens afterwards?
salmon-city-57654:
Could you generate the support bundle first? You should not need to install those packages manually; I can use lsblk, blkid, and awk on a Harvester host.
powerful-easter-15334:
You're right, I made a mistake there; they are indeed installed.
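For anyone verifying the same prerequisites, a quick way to confirm all four binaries are present on the host:
```
# report any of Longhorn's required host utilities that are missing
for cmd in lsblk blkid awk findmnt; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
```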
Something of note, maybe: I have a 5 TB disk on sdb. It's mounted at /var/lib/harvester/defaultdisk, though that should be fine because Longhorn has that as its default path.
I also put a newer support bundle on the GitHub issue.
```
harvester-01:/home/rancher # blkid /dev/sdb
/dev/sdb: LABEL="HARV_LH_DEFAULT" UUID="543d5c25-6d15-4d9f-9be5-27a3dedd8208" BLOCK_SIZE="4096" TYPE="ext4"
harvester-01:/home/rancher # cat /var/lib/harvester/defaultdisk/longhorn-disk.cfg
{"diskUUID":"9a3c4db0-3354-4f16-b102-04b18e187630"}
```
The disk UUID in the config file here seems incorrect. Could this be causing the issue? The UUID in the config file doesn't match any of my disks when I run blkid.
Hi @salmon-city-57654, sorry for the ping. I rm -rf'ed /var/lib/harvester/defaultdisk, then reinstalled Harvester 1.3.2, which should have formatted /dev/sda. The problem still persists. I think I'll try installing 1.3.1 tomorrow and see if there's any change. I don't think the problem is with Longhorn right now.
The disk UUID in the config still differs from the real UUID, but I take it that this is normal behaviour now.
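For example, the two identifiers can be pulled side by side like this (the awk field split is just an illustration that happens to work for this one-line config):
```
# filesystem UUID as blkid reports it
blkid -s UUID -o value /dev/sdb

# Longhorn's own disk identifier from its on-disk config
awk -F'"' '{print $4}' /var/lib/harvester/defaultdisk/longhorn-disk.cfg
```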
salmon-city-57654:
Hi @powerful-easter-15334, sorry for the late reply. It is normal for these two UUIDs to be different. Could you tell me your current situation? I am trying to catch up on your case.
powerful-easter-15334:
@salmon-city-57654 No no, don't apologise; I can't expect support whenever I need it. GitHub issue #7001 is the absolute latest of my suffering. I reinstalled Harvester, clearing everything from the previous disks. I don't think it's a Longhorn issue, as Longhorn claims to mount the disk properly and everything is green in the UI. I'm really not knowledgeable about iSCSI, but someone mentioned it could be the culprit.
salmon-city-57654:
So, I could check the latest SB named supportbundle_alex_stillstorage.zip, right?
Is the corresponding VM still hanging on boot, as you mentioned in the GH issue?
Could you elaborate on your cloud-init config? Or could you try booting the VM without your cloud-init?
powerful-easter-15334:
Yes, that one, thank you. The VM does hang on boot, on the screen I included in the initial screenshot. I've figured out everything else; it's only this that is causing issues. The cloud-init would be the default one Harvester applies, but I can try removing that one too.
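If it helps anyone later, a minimal user-data like this is enough for a test boot; this is a plain generic cloud-init example, not Harvester's actual default config:
```
#cloud-config
# minimal generic user-data for a test boot, just enough to rule
# the cloud-init config in or out (not Harvester's default)
password: changeme
chpasswd:
  expire: false
ssh_pwauth: true
```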
salmon-city-57654:
Hmm, sometimes it's just an issue with the console not updating. Did you try to log in to this VM?
powerful-easter-15334:
I can't log in, though, since the boot is stuck on "waiting for disk at <uuid path>".
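For anyone hitting the same hang: if you can get to a rescue shell, these generic checks should show whether the UUID the boot is waiting for actually exists on any disk (just what I'd try, nothing Harvester-specific):
```
# list the filesystem UUIDs udev knows about, then compare them
# with the UUID the boot is stuck waiting for
ls -l /dev/disk/by-uuid/
blkid
```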
I'll try installing 1.3.1 tomorrow and see if that works any better. Keeping my fingers crossed because otherwise I'm a tad bit lost haha
salmon-city-57654:
Sure, I will also use the same ISO to try to reproduce it myself. Thanks!
powerful-easter-15334:
I wasn't able to go to the office today, with the country being affected by a cyclone. However, I tried all the iSCSI combinations without any change.
For anyone in the future checking this: I should maybe have started by checking the image, or using another one to test. Using another one worked fine. The other guy I work with was sure he'd used this one previously, but... whatever. I learnt several lessons, and a lot about how Harvester works along the way.