incalculable-ghost-55080
03/28/2023, 12:53 PM
limactl start ...
seems to take the most time. I'm just interested in possible optimisations, since Docker and Podman Desktop start up faster.
Giving the default VM resources on a fresh install, I repeated launches several times and got the following results:
• Docker - 6 seconds, Podman - 19 seconds, Rancher - 37 seconds.
P.S. I was just waiting for docker/podman ps to return results.
Thank you in advance!
average-wall-91860
03/28/2023, 9:42 PM
v1.26.3
Container Engine: dockerd
wonderful-ability-35578
03/29/2023, 11:10 AM
wonderful-ability-35578
03/29/2023, 11:10 AM
proud-telephone-66502
03/29/2023, 12:01 PM
acceptable-soccer-28720
03/29/2023, 1:43 PM
kube-system svclb-traefik-4hbl4 0/2 CrashLoopBackOff 212 5d4h
kubectl describe pod -n kube-system svclb-traefik-4hbl4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4d21h (x649 over 5d) kubelet Back-off restarting failed container
Warning FailedMount 110s kubelet MountVolume.SetUp failed for volume "kube-api-access-lbpcr" : [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:mynode" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'mynode' and this object, failed to sync configmap cache: timed out waiting for the condition]
Warning FailedMount 108s kubelet MountVolume.SetUp failed for volume "kube-api-access-lbpcr" : failed to sync configmap cache: timed out waiting for the condition
Normal SandboxChanged 107s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 87s (x2 over 102s) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 87s (x2 over 101s) kubelet Created container lb-port-80
Normal Started 87s (x2 over 101s) kubelet Started container lb-port-80
Normal Pulled 87s (x2 over 101s) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 86s (x2 over 101s) kubelet Created container lb-port-443
Normal Started 86s (x2 over 101s) kubelet Started container lb-port-443
Warning BackOff 70s (x5 over 101s) kubelet Back-off restarting failed container
Warning BackOff 70s (x5 over 101s) kubelet Back-off restarting failed container
kubectl logs -n kube-system svclb-traefik-4hbl4
Defaulted container "lb-port-80" out of: lb-port-80, lb-port-443
+ trap exit TERM INT
+ echo this-ip
+ grep -Eq :
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 0 '!=' 1 ]
+ exit 1
working reference from another RD 1.8.1 on Windows 10:
kubectl logs -n kube-system svclb-traefik...
+ trap exit TERM INT
+ echo 0.0.0.0/0
+ grep -Eq :
+ iptables -t filter -I FORWARD -s 0.0.0.0/0 -p TCP --dport 80 -j ACCEPT
+ echo some-ip
+ grep -Eq :
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '==' 1 ]
+ iptables -t filter -A FORWARD -d some-ip/32 -p TCP --dport 80 -j DROP
+ iptables -t nat -I PREROUTING '!' -s some-ip/32 -p TCP --dport 80 -j DNAT --to some-ip:80
+ iptables -t nat -I POSTROUTING -d some-ip/32 -p TCP -j MASQUERADE
+ '[' '!' -e /pause ]
+ mkfifo /pause
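Comparing the two traces, the failing pod exits because /proc/sys/net/ipv4/ip_forward reads 0 in the VM, while the working Windows install reads 1. A minimal sketch of that entrypoint check (the helper function is hypothetical, and the rdctl command in the comment is an assumption about how to reach the VM):

```shell
# Hedged sketch mirroring the check visible in the failing log above:
# klipper-lb exits 1 when IPv4 forwarding is disabled in the VM.
check_ip_forward() {
  # the real pod reads this value from /proc/sys/net/ipv4/ip_forward
  [ "$1" = 1 ]
}

check_ip_forward 0 || echo "svclb would CrashLoopBackOff: ip_forward is 0"
# Re-enabling forwarding inside the Rancher Desktop VM (assumption: rdctl on PATH):
#   rdctl shell sudo sysctl -w net.ipv4.ip_forward=1
```

Note the sysctl change is not persistent across VM restarts unless applied by a provisioning mechanism.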
most-agency-99776
03/29/2023, 5:37 PM
moby
container engine. Does anyone know how to get this to persist? If you found a way to script this or configure it, it would greatly help us.
gentle-dream-1571
03/29/2023, 7:59 PM
rc_env_allow="*"
export http_proxy=http://myproxy.mycompany.com
export https_proxy=http://myproxy.mycompany.com
export no_proxy=.mycompanycorp.com,.internal
Everything works fine except that docker fails when run as a non-Administrative user.
We did the Factory Reset, uninstalled, and reinstalled as a non-Administrative user, and now docker version
works properly without errors for the non-Administrative user. However, the exact same /etc/rc.conf changes above no longer appear to take effect. Thoughts on how to correct this?
gentle-farmer-13389
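For reference, the same proxy variables as a standalone shell fragment (values copied from the message above). One hedged observation: a Factory Reset rebuilds the VM, so any ad-hoc /etc/rc.conf edits are discarded and must be re-applied afterwards.

```shell
# Proxy settings from the thread above; a VM rebuild (e.g. Factory Reset)
# discards manual edits, so these need to be re-applied after each reset.
export http_proxy=http://myproxy.mycompany.com
export https_proxy=http://myproxy.mycompany.com
export no_proxy=.mycompanycorp.com,.internal
echo "$no_proxy"
```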
03/30/2023, 9:02 PM
broad-train-31975
03/31/2023, 12:33 AM
stale-judge-78332
03/31/2023, 9:28 AM
witty-honey-18052
04/02/2023, 12:54 PM
1.25.x
when selecting a version of 1.24.12?
witty-honey-18052
04/02/2023, 12:54 PM
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.12+k3s1", GitCommit:"57e8adb524611d79c4e17c27f15c5066e54b0421", GitTreeState:"clean", BuildDate:"2023-03-27T21:41:45Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
witty-honey-18052
04/02/2023, 12:56 PM
Error: chart requires kubeVersion: < 1.25.0-0 which is incompatible with Kubernetes v1.26.0
even when I switch back to 1.24.12 in preferences
witty-honey-18052
04/02/2023, 12:56 PM
witty-honey-18052
04/02/2023, 1:08 PM
witty-honey-18052
04/02/2023, 3:22 PM
witty-honey-18052
04/02/2023, 3:23 PM
v3.11.2
, which is passing the kubeVersion to the helm chart as whatever Kubernetes version is bundled with that helm build.
witty-honey-18052
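If the v1.26 value is indeed coming from the helm binary's built-in default when rendering offline, `helm template` accepts an explicit `--kube-version` override. A hedged sketch (release and chart names are placeholders; the helper only assembles the command):

```shell
# Sketch: pin --kube-version so chart kubeVersion constraints are checked
# against the cluster's actual version (v1.24.12 here) instead of the
# default compiled into the helm binary.
render_cmd() {
  # $1 = release name, $2 = chart path, $3 = cluster version (placeholders)
  echo "helm template $1 $2 --kube-version $3"
}
render_cmd myrelease ./mychart 1.24.12
# → helm template myrelease ./mychart --kube-version 1.24.12
```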
04/02/2023, 3:24 PM
bulky-gold-73710
04/03/2023, 1:03 PM
bulky-gold-73710
04/03/2023, 1:09 PM
nerdctl compose -f docker-compose.yml -f docker-compose.override.yml up -d
I get the following error message:
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/run/docker.sock" to rootfs at "/var/run/docker.sock": stat /var/run/docker.sock: no such file or directory: unknown
Is this an issue to be filed - or did I do something wrong?
Thanks
abundant-camera-87627
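One likely cause, offered as an assumption: under the containerd engine there is no dockerd in the VM, so /var/run/docker.sock does not exist and any compose service that bind-mounts it fails at exactly the stat shown above. A quick way to find such mounts (the helper name is hypothetical):

```shell
# Hypothetical helper: list compose-file lines that bind-mount docker.sock,
# which cannot exist when the engine is containerd/nerdctl rather than moby.
find_docker_sock_mounts() {
  grep -n 'docker\.sock' "$@" || true
}

# Usage against a small sample compose fragment:
printf 'volumes:\n  - /var/run/docker.sock:/var/run/docker.sock\n' > /tmp/compose-snippet.yml
find_docker_sock_mounts /tmp/compose-snippet.yml
# → 2:  - /var/run/docker.sock:/var/run/docker.sock
```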
04/03/2023, 1:25 PM
proud-parrot-81193
04/03/2023, 2:57 PM
rancher-desktop
, nerdctl
proud-parrot-81193
04/03/2023, 2:58 PM
docker desktop
proud-parrot-81193
04/03/2023, 3:01 PM
docker desktop with k8s
, I have solidjs + helm + devspace; everything worked on the old install (docker desktop)
proud-parrot-81193
04/03/2023, 3:03 PM
rancher-desktop
and I've built the image again: nerdctl build -t arkanmgerges/fe-softwaredev-expert:0.1.0 -f docker/dockerfile-local .
then I first tried to install it with helm, but ran into a problem: the image couldn't be pulled (locally)
proud-parrot-81193
04/03/2023, 3:05 PM
docker desktop
and I had it locally built there as well
proud-parrot-81193
04/03/2023, 3:06 PM
rapid-eye-50641
04/03/2023, 3:11 PM
With containerd + nerdctl you need to pass the flag --namespace k8s.io to the nerdctl build command to make the locally built image available in the k8s.io namespace. Please refer to the example in the docs here: https://docs.rancherdesktop.io/how-to-guides/hello-world-example#build-image-from-code-locally
witty-honey-18052
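The fix from the reply above as a sketch, using the image and dockerfile names from this thread (the helper only assembles the command; nerdctl here is the one bundled with Rancher Desktop):

```shell
# Sketch of the suggested fix: build into the k8s.io containerd namespace
# so the k3s kubelet can find the locally built image without pulling it.
build_for_k8s() {
  # $1 = image:tag, $2 = dockerfile path
  echo "nerdctl --namespace k8s.io build -t $1 -f $2 ."
}
build_for_k8s arkanmgerges/fe-softwaredev-expert:0.1.0 docker/dockerfile-local
# → nerdctl --namespace k8s.io build -t arkanmgerges/fe-softwaredev-expert:0.1.0 -f docker/dockerfile-local .
```

Afterwards `nerdctl --namespace k8s.io images` should list the image in the namespace Kubernetes reads from.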
04/03/2023, 5:11 PM
skaffold dev
Then ctrl-c for cleanup
# example ref: https://github.com/GoogleContainerTools/skaffold/blob/main/examples/helm-remote-repo/skaffold.yaml
apiVersion: skaffold/v4beta2
kind: Config
metadata:
  name: rancher
requires:
  - configs: ["cert-manager"]
deploy:
  helm:
    releases:
      - name: rancher
        repo: https://releases.rancher.com/server-charts/latest
        remoteChart: rancher
        namespace: cattle-system
        createNamespace: true
        setValues:
          bootstrapPassword: "admin"
          hostname: "rancher.localhost"
  kubeContext: rancher-desktop
---
# example ref: https://github.com/GoogleContainerTools/skaffold/blob/main/examples/helm-remote-repo/skaffold.yaml
apiVersion: skaffold/v4beta2
kind: Config
metadata:
  name: cert-manager
deploy:
  helm:
    releases:
      - name: cert-manager
        repo: https://charts.jetstack.io
        remoteChart: cert-manager
        namespace: cert-manager
        createNamespace: true
        setValues:
          installCRDs: "true"
  kubeContext: rancher-desktop