adamant-ram-8166
05/02/2022, 7:22 PM
bulky-appointment-8113
05/04/2022, 10:56 AM
orange-ocean-99788
05/05/2022, 12:42 AM
millions-alarm-86298
05/06/2022, 9:03 AM
glamorous-engine-17369
05/07/2022, 1:54 PM
millions-alarm-86298
05/09/2022, 7:48 PM
millions-alarm-86298
05/10/2022, 5:39 PM
k3d cluster create doesn't wait until traefik is up, so I manually have to check whether the helm chart and CRDs are installed. Is there any way to improve this experience?
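A possible workaround, sketched here under the assumption of a default k3s setup (traefik installed by the helm-install-traefik job in kube-system) and a placeholder cluster name, is to explicitly wait for the traefik chart after creation:

k3d cluster create mycluster --wait
# wait for the bundled traefik helm job, then for the deployment it installs
kubectl -n kube-system wait --for=condition=complete job/helm-install-traefik --timeout=180s
kubectl -n kube-system rollout status deployment/traefik --timeout=180s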
brief-ability-87157
05/11/2022, 12:55 AM
--registry-config flag.
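For context, a minimal sketch of how that flag is typically passed a k3s-style registries.yaml (the mirror below is purely illustrative, and the cluster name is a placeholder):

cat > registries.yaml <<'EOF'
mirrors:
  "docker.io":
    endpoint:
      - "https://registry-1.docker.io"
EOF
k3d cluster create mycluster --registry-config registries.yaml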
brief-ability-87157
05/11/2022, 1:26 AM
{name}-server-0 directly but wondering if there is another way.
flat-florist-77376
05/11/2022, 3:58 PM
--host-alias flag, that works… unless I restart the node, in which case the alias I configured earlier goes missing?
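One way to check whether the alias survives, assuming it is applied via the node container's /etc/hosts (cluster and node names below are placeholders):

docker exec k3d-mycluster-server-0 cat /etc/hosts
docker restart k3d-mycluster-server-0
# compare after the restart
docker exec k3d-mycluster-server-0 cat /etc/hosts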
glamorous-elephant-6949
05/14/2022, 5:47 AM
kubectl cluster-info returned a valid config, but kubectl get pods showed all bootstrap containers in ContainerCreating state.
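When pods sit in ContainerCreating, the pod events usually say why (image pull, CNI, volume mounts); a generic diagnostic sketch:

kubectl get pods -A
kubectl -n kube-system describe pod <pod-name>   # check the Events section
kubectl get events -A --sort-by=.lastTimestamp | tail -20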
alert-oxygen-95787
05/16/2022, 2:01 PM
wooden-coat-97755
05/17/2022, 2:00 PM
green-musician-2686
05/17/2022, 9:10 PM
red-afternoon-63849
05/19/2022, 4:49 PM
k3d and tilt.dev on a helm chart that requires sysbox and CRI-O. Is there any way to get k3d to utilize a CRI-O runtime?
handsome-answer-69741
05/20/2022, 7:36 AM
melodic-market-42092
06/07/2022, 11:22 AM
k3d cluster create $CLUSTER_NAME and without any configuration files?
bumpy-laptop-27086
06/15/2022, 10:24 AM
wooden-potato-2675
06/16/2022, 2:22 PM
FATA[0018] Failed to start server k3d-poccluster-server-2: Node k3d-poccluster-server-2 failed to get ready: error waiting for log line `k3s is up and running` from node 'k3d-poccluster-server-2': stopped returning log lines
proud-eve-37927
06/20/2022, 8:48 PM
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: vault
spec:
  interval: 5m
  url: https://helm.releases.hashicorp.com
I got the following error from flux:
❯ kg helmrepositories.source.toolkit.fluxcd.io vault
NAME    URL                                    AGE   READY   STATUS
vault   https://helm.releases.hashicorp.com    10m   False   failed to fetch Helm repository index: failed to cache index to temporary file: Get "https://helm.releases.hashicorp.com/index.yaml": dial tcp: lookup helm.releases.hashicorp.com on 10.43.0.10:53: server misbehaving
On Kind / rke / minikube it works perfectly. I have no clue what's going wrong there in k3d.
Can anybody please help me with this issue?
Thanks a lot, Klaus
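Since the lookup fails against the in-cluster resolver (10.43.0.10:53 is the default k3s cluster DNS), one way to narrow it down is to test resolution from inside the cluster and check CoreDNS; a sketch:

kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- nslookup helm.releases.hashicorp.com
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50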
melodic-market-42092
06/23/2022, 1:49 PM
I use k3d cluster create $CLUSTER_NAME -p "8081:3000@loadbalancer" to create my local cluster and get something exposed. This works well, but now I'd like to expose two things. Is that possible? Can I do something like k3d cluster create $CLUSTER_NAME -p "8081:3000,8082:8000@loadbalancer"?
melodic-market-42092
06/24/2022, 10:00 AM
I use
k3d cluster create $CLUSTER_NAME \
  -p "8081:3000@loadbalancer" \
  -p "8082:8000@loadbalancer" \
  -p "8086:8086@loadbalancer" \
  -p "8087:2746@loadbalancer"
to expose a few things from my k3d cluster. The first three of those work, but the last one just doesn't, and I'm struggling to figure out why. If I do a port-forward to the service, then things work as expected. The ingress and service for the fourth entry (8087:2746) look exactly like the ingresses for the other three 🤔 The application behind 2746 is https://argoproj.github.io/argo-workflows/argo-server/. Any hints anybody could give on this? Hopefully someone has seen this before.
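A sketch for comparing the two paths (namespace and service name below are the argo-workflows defaults and may differ here; argo-server also commonly serves TLS, so both schemes are worth trying):

# the port-forward path that reportedly works
kubectl -n argo port-forward svc/argo-server 2746:2746
# the loadbalancer mapping
curl -v  http://localhost:8087/
curl -vk https://localhost:8087/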
cool-sunset-23736
07/17/2022, 4:46 PM
/etc/confd/values.yaml file. Is this a shared volume between the nodes or is there some other mechanism in play?
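One way to see what is actually mounted into each node container (container names below are placeholders derived from the cluster name):

docker inspect k3d-mycluster-server-0 --format '{{ json .Mounts }}'
docker inspect k3d-mycluster-agent-0 --format '{{ json .Mounts }}'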
millions-alarm-86298
07/20/2022, 7:10 AM
fierce-room-79104
07/24/2022, 12:48 PM
fierce-room-79104
07/24/2022, 12:49 PM
ERRO[0005] Failed Cluster Start: Failed to start server k3d-foo-server-0: Node k3d-foo-server-0 failed to get ready: Failed waiting for log message 'k3s is up and running' from node 'k3d-foo-server-0': node 'k3d-foo-server-0' (container 'd0fbe81acb8e3fb4c0e7e051c65b93346cb6180d88ec8cd84301a9282c71e7c6') not running
ERRO[0005] Failed to create cluster >>> Rolling Back
INFO[0005] Deleting cluster 'foo'
DEBU[0005] Cluster Details: &{Name:foo Network:{Name:k3d-foo ID:b7842ddccf7c6454bd9297f3b4c977cbf1cae3de909c6fef95af3aa5c10e9334 External:false IPAM:{IPPrefix:10.89.0.0/24 IPsUsed:[] Managed:false} Members:[]} Token:SDuOUscMoGAPLaLcgqkR Nodes:[0xc00008b6c0 0xc00008b860] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000041080 ServerLoadBalancer:0xc0001ffd30 ImageVolume:k3d-foo-images Volumes:[k3d-foo-images k3d-foo-images]}
DEBU[0005] Deleting node k3d-foo-serverlb ...
DEBU[0005] Deleting node k3d-foo-server-0 ...
INFO[0005] Deleting cluster network 'k3d-foo'
INFO[0005] Deleting 2 attached volumes...
DEBU[0005] Deleting volume k3d-foo-images...
DEBU[0005] Deleting volume k3d-foo-images...
WARN[0005] Failed to delete volume 'k3d-foo-images' of cluster 'foo': failed to find volume 'k3d-foo-images': Error: No such volume: k3d-foo-images -> Try to delete it manually
FATA[0005] Cluster creation FAILED, all changes have been rolled back!
And on the podman side it logs:
WARN[0016] Could not find mount at destination "/var/run" when parsing user volumes for container 498e3a3624702a4de4b372373774cffa26d8b77390aafbaceb4b7a18e596dfdd
WARN[0017] Could not find mount at destination "/var/run" when parsing user volumes for container 498e3a3624702a4de4b372373774cffa26d8b77390aafbaceb4b7a18e596dfdd
WARN[0017] Could not find mount at destination "/var/run" when parsing user volumes for container 498e3a3624702a4de4b372373774cffa26d8b77390aafbaceb4b7a18e596dfdd
WARN[0017] Could not find mount at destination "/var/run" when parsing user volumes for container 498e3a3624702a4de4b372373774cffa26d8b77390aafbaceb4b7a18e596dfdd
WARN[0017] Could not find mount at destination "/var/run" when parsing user volumes for container 498e3a3624702a4de4b372373774cffa26d8b77390aafbaceb4b7a18e596dfdd
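k3d talks to a Docker-compatible API, so running it against podman requires the podman socket to be exposed; a sketch of the commonly documented rootful setup (paths differ for rootless):

sudo systemctl enable --now podman.socket
export DOCKER_HOST=unix:///run/podman/podman.sock
k3d cluster create foo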
big-librarian-12753
08/01/2022, 10:14 AM
millions-alarm-86298
08/03/2022, 10:55 AM
melodic-market-42092
08/03/2022, 12:16 PM
If I edit /etc/hosts like this:
localhost myapp.local
And have an ingress like this:
spec:
  rules:
    - host: myapp.local
And run k3d like this:
k3d cluster create $CLUSTER_NAME \
  -p "80:80@loadbalancer" \
Should I then be able to reach the service that my ingress points to by going to http://myapp.local in the browser?
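That is the usual pattern; a sketch of how it is commonly set up and tested, assuming the bundled traefik ingress controller is listening on port 80 (note that /etc/hosts lines start with an IP address):

echo "127.0.0.1 myapp.local" | sudo tee -a /etc/hosts
curl -H "Host: myapp.local" http://localhost/
curl http://myapp.local/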
many-telephone-82541
08/09/2022, 6:11 PM
Error: failed to generate container "69815de67a72a047842260d67a1d83fa61f6cf073a3cbf4b09fc1166926cec26" spec: failed to generate spec: path "/var/lib/kubelet/pods" is mounted on "/var/lib/kubelet" but it is not a shared mount
Any suggestions?
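This usually points at mount propagation on the host; a commonly suggested workaround, sketched here for the host where /var/lib/kubelet lives, is to mark the mount shared:

findmnt -o TARGET,PROPAGATION /var/lib/kubelet
sudo mount --make-rshared /var/lib/kubelet   # or, more broadly: sudo mount --make-rshared /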