#general

adamant-kite-43734

10/17/2022, 11:27 PM
This message was deleted.

square-orange-60123

10/18/2022, 2:44 AM
what versions are you using, and what size VM for rancher?

bright-fireman-42144

10/18/2022, 3:25 AM
harvester is at 1.0.3, rancher is 2.6.8 but have also tried 2.7-head containers, on Leap 15.4 (1 vCPU, 3GB RAM, 30Gi volume). I'm open to other solutions.

square-orange-60123

10/18/2022, 4:13 PM
I think the minimum for rancher is 2 vCPU, 4GB RAM. Check the container logs for any relevant errors.
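A rough preflight check for those minimums could look like this (the 2 vCPU / 4GB thresholds just mirror the numbers mentioned above, they're not from official docs):

```shell
#!/bin/sh
# Rough preflight check for a Rancher host. Thresholds mirror the
# 2 vCPU / 4GB RAM minimums mentioned in this thread (assumptions,
# not official documentation).
MIN_CPUS=2
MIN_MEM_KB=$((4 * 1024 * 1024))  # 4 GB expressed in kB

cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)

if [ "$cpus" -lt "$MIN_CPUS" ]; then
  echo "WARN: only $cpus vCPU (want >= $MIN_CPUS)"
fi
if [ "$mem_kb" -lt "$MIN_MEM_KB" ]; then
  echo "WARN: only $((mem_kb / 1024)) MB RAM (want >= 4096 MB)"
fi
```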

bright-fireman-42144

10/18/2022, 5:25 PM
giving it another shot with 2/4 and if it still restarts the container I will generate support bundles for rancher and harvester. Thanks mate... support in here has been amazing, for everything from rancher/longhorn on GKE to my own piddly attempt at simulating GKE with harvester.

square-orange-60123

10/18/2022, 6:10 PM
no worries. I typically use ubuntu 20.04 and run the following and don’t run into issues:
curl https://releases.rancher.com/install-docker/20.10.sh | sh
docker run -d --restart=unless-stopped --privileged -p 80:80 -p 443:443 rancher/rancher:v2.6.8 --trace

bright-fireman-42144

10/19/2022, 4:58 PM
went to 2vCPU and 4GB memory and will take advantage of memory dedupe and the fact that I am on nvme storage for swap... seems stable now.
still LEAP 15.4
thanks for 'just being there'; it's nice to know that someone has your back. I'm not, and have never been, a 'linux admin', and now I'm wading into the world of kubernetes. SUSE/Rancher has really helped me out in this regard. Replacing my ESXi lab box with harvester combined with rancher just seems like the logical thing to do in this journey so I can simulate GKE.
no go... going with your suggestion of ubuntu 20.04
was trying to keep this within the extended SUSE family, but whatever it takes to get this going!
😄 1
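For reference, a minimal sketch of the swap-on-nvme setup mentioned above (the 2G size and /swapfile path are my assumptions, adjust for your disk):

```shell
# Minimal swap-file setup sketch; 2G size and the /swapfile path are
# assumptions, not taken from the thread.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```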

square-orange-60123

10/20/2022, 6:54 PM
glad to hear that rancher/suse is working well for you so far. Not sure about why leap would give trouble, but hopefully ubuntu is working well for you now!

bright-fireman-42144

10/20/2022, 7:00 PM
it is not... I'm back to having no images to choose from. I am going to install ubuntu 20.04 and rancher 2.6.8 on my vmware workstation and try it from there.
yep, still no joy. I'll ssh in and grab the container logs.

square-orange-60123

10/20/2022, 7:09 PM
oh, you’re at that stage. Are you using .img or equivalent? .ISOs are not supported through rancher.
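If it's unclear what a downloaded image actually is, `file` will report the format (the filename below is a placeholder; cloud images are often qcow2 even with an .img extension):

```shell
# Check what a downloaded disk image actually is; the filename is a
# placeholder. Cloud images are often qcow2 despite an .img extension.
file focal-server-cloudimg-amd64.img
# If it turns out to be qcow2 and raw is needed, qemu-img (from the
# qemu-utils package) can convert it:
# qemu-img convert -f qcow2 -O raw focal-server-cloudimg-amd64.img focal.raw
```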

bright-fireman-42144

10/20/2022, 7:11 PM
I have .img ready, I've been down that road 😉

square-orange-60123

10/20/2022, 7:13 PM
can you show your image list through harvester’s UI?
b

bright-fireman-42144

10/20/2022, 7:15 PM
I've been playing around with harvester natively. 😛
I'm loving harvester so far... integration with rancher will blow me away.. and then I can truly mimic GKE with some MetalLB added into the mix.

square-orange-60123

10/20/2022, 8:11 PM
can you upload a focal .img from say, here: https://cloud-images.ubuntu.com/focal/current/ to harvester and see if that shows up in rancher?

bright-fireman-42144

10/20/2022, 8:33 PM
already exists, but I can upload a new one. I should also note that no matching options are found for the namespace drop down as well.
same with network.
I suspect this may be from the harvester side. Attaching the support bundle for that as well.

square-orange-60123

10/20/2022, 8:55 PM
oh, interesting. You have harvester connected in rancher via virtualization management?

bright-fireman-42144

10/20/2022, 9:00 PM
I was originally going to try an 'inception' like weird install with rancher running in harvester to then control harvester but thought that might be causing issues so currently right now I have a vmware workstation ubuntu 20.04 running rancher 2.6.8 with same results... can connect/register harvester but cannot fetch images or namespaces or network with that "clusters.management.cattle.io "v1" not found" error.
🤔 1
and yes... via 'virtualization management'. It is when I go to cluster mgmt that I run into issues.
as a test... trying to deploy an RKE1 cluster results in the same failed fetching of images and other provider data.
for shiz and giggles I tried 2.7.s1-head and 2.6-head and no go. I think this is on the harvester side of things but I know NOTHING about harvester, so wouldn't even begin to know where to look.
@square-orange-60123 I see you are a contributor on harvester 1.0.3. Make this work please... but don't spend too much time if you are also trying to get 1.1 out the door that /may/ include fixes that could help me out! hehe... this is just a lab and I don't have a SUSE support contract so I appreciate all the out of band and not paid for time you guys are providing in the community.
👍 1

square-orange-60123

10/21/2022, 4:14 PM
could you report what you are seeing in github.com/harvester/harvester/issues with your support bundle and rancher logs so we can see what the issue may be?

bright-fireman-42144

10/21/2022, 5:56 PM
I love prepping things for an actual issue. In my years of experience (from both sides) I have found that setting up and recording the issue step by step with logs has made me discover the issue myself. Let's hope that is the case here.
well, didn't discover what it might be but hopefully other eyes on will help. Thanks for your support here in Slack @square-orange-60123
👍 1
update! copypasta from closing the issue on github: "It seems this was a combination of issues at different times. Rancher containers restarting were a known bug, but my particular issue running :latest was based on memory and therefore disk (swap) pressure. I intentionally sized my virtual machines smaller than the recommended 2 vCPU/4GB RAM (at 1 vCPU/3GB), and I also ran into disk space issues having deployed so many permutations of the rancher container, which I solved using docker system prune -a -f. Current combination that works, so far, in my case: harvester 1.0.3 on an Intel NUC; rancher 2.6.9 as a container on an Ubuntu 20.04 harvester VM with 2 vCPU/6Gi RAM/30Gi HD."
memory/disk pressure was causing problems with fleet pods attempting to restart, and ultimately warnings that a git operation had already been performed.
👍 1
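The cleanup from that close-out, sketched as commands (note that prune -a removes all unused images, so subsequent pulls will be slower):

```shell
# Reclaim disk space from accumulated rancher image permutations, as
# described in the github close-out above.
docker system df            # see what docker is currently using
docker system prune -a -f   # remove all unused images/containers, no prompt
df -h /                     # confirm disk pressure is relieved
```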