crooked-cat-21365
02/17/2023, 3:24 PM
apiVersion: management.cattle.io/v3
builtin: true
context: cluster
description: ''
displayName: Cluster Member
external: false
hidden: false
kind: RoleTemplate
metadata:
  annotations:
    cleanup.cattle.io/rtUpgradeCluster: 'true'
    lifecycle.cattle.io/create.mgmt-auth-roletemplate-lifecycle: 'true'
  creationTimestamp: '2023-01-31T11:41:34Z'
  finalizers:
  - controller.cattle.io/mgmt-auth-roletemplate-lifecycle
  generation: 1
  labels:
    authz.management.cattle.io/bootstrapping: default-roletemplate
    cattle.io/creator: norman
  managedFields:
  - apiVersion: management.cattle.io/v3
    fieldsType: FieldsV1
    fieldsV1:
      f:builtin: {}
      f:context: {}
      f:description: {}
      f:displayName: {}
      f:external: {}
      f:hidden: {}
      f:metadata:
        f:annotations:
          .: {}
          f:cleanup.cattle.io/rtUpgradeCluster: {}
          f:lifecycle.cattle.io/create.mgmt-auth-roletemplate-lifecycle: {}
        f:finalizers:
          .: {}
          v:"controller.cattle.io/mgmt-auth-roletemplate-lifecycle": {}
        f:labels:
          .: {}
          f:authz.management.cattle.io/bootstrapping: {}
          f:cattle.io/creator: {}
      f:rules: {}
    manager: rancher
    operation: Update
    time: '2023-01-31T11:41:50Z'
  name: cluster-member
  resourceVersion: '8254'
  uid: 8d5303bf-a83d-4d27-b379-0e97d7d6417f
rules:
- apiGroups:
  - ui.cattle.io
  resources:
  - navlinks
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - management.cattle.io
  resources:
  - clusterroletemplatebindings
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - management.cattle.io
  resources:
  - projects
  verbs:
  - create
- apiGroups:
  - management.cattle.io
  resources:
  :
stale-waiter-23637
02/17/2023, 3:46 PM
ancient-area-28415
02/17/2023, 5:03 PM
best-address-42882
02/17/2023, 6:21 PM
best-address-42882
02/17/2023, 6:21 PM
best-address-42882
02/17/2023, 6:22 PM
hundreds-evening-84071
02/17/2023, 8:45 PM
I have rke-network-plugin-deploy-job pods in the kube-system namespace that are spawning every few minutes in an error state. I need some guidance in resolving this.
magnificent-dress-27494
02/17/2023, 11:30 PM
On my k3s cluster, docker.io images that should not require pull secrets are failing with a 401 authorization error.
Example:
helm upgrade --install openfaas openfaas/openfaas -n openfaas -f values-arm64.yaml
---
29s Warning Failed pod/nats-7489f6b794-7qfjs Failed to pull image "nats-streaming:0.25.3": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nats-streaming:0.25.3": failed to resolve reference "docker.io/library/nats-streaming:0.25.3": failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
I noticed the same thing with a JupyterHub deployment. I eventually got around that by adding my Docker Pro credentials and specifying imagePullSecrets as that chart allows. Unfortunately, the OpenFaaS chart does not allow specifying regcred for the base install.
Does anybody have any tips on this? I haven't had to specify credentials for either of these deployments in the past. This seems to have only come up in the past few weeks.
hundreds-actor-42498
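A common workaround on k3s when docker.io pulls start failing with 401s, independent of whether a chart exposes imagePullSecrets, is to hand the credentials to k3s's embedded containerd via /etc/rancher/k3s/registries.yaml. A minimal sketch, assuming Docker Hub credentials (the username and token values are placeholders):

```yaml
# /etc/rancher/k3s/registries.yaml
# Read by k3s's embedded containerd at startup; restart the k3s
# service after editing (e.g. systemctl restart k3s).
configs:
  "docker.io":
    auth:
      username: your-dockerhub-user   # placeholder
      password: your-dockerhub-token  # placeholder
```

Because this applies to every pull on the node, the OpenFaaS base images would be covered without any regcred support in the chart.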
02/18/2023, 4:41 PM
narrow-lunch-43082
02/19/2023, 3:41 AM
adorable-midnight-46384
02/19/2023, 8:19 AM
some-monkey-58167
02/20/2023, 1:45 AM
little-ram-17683
02/20/2023, 4:49 AM
I want to add this configuration for kube-proxy:
kubeproxy:
  extra_args:
    ipvs-strict-arp: 'true'
    proxy-mode: ipvs
But it looks like there is no config map for kube-proxy. When I try to edit cluster.yaml to add custom config to kubeproxy, I'm not sure where I should add this cfg. Could you help me, please? 🙂
white-house-39401
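For reference, RKE does not manage kube-proxy through a ConfigMap; its flags come from the services section of cluster.yml and are rendered into the kube-proxy container arguments on the next rke up. A sketch of where a snippet like the one above would go, assuming an RKE1 cluster.yml:

```yaml
# cluster.yml (RKE1) — kube-proxy flags live under services.kubeproxy;
# run `rke up` again after editing to apply
services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-strict-arp: 'true'
```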
02/20/2023, 9:37 AM
clean-piano-88449
02/20/2023, 10:11 AM
narrow-lunch-43082
02/21/2023, 1:12 AM
microscopic-knife-52274
02/21/2023, 6:34 AM
flat-finland-50817
02/21/2023, 10:31 AM
billions-action-81923
02/21/2023, 11:49 AM
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
I can see the container running (image attached), but I cannot access the Rancher UI on the public IP: "This site can't be reached". Has something changed recently? I was able to access it a few weeks back. Appreciate any help on this.
hundreds-evening-84071
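A few first diagnostics for the unreachable-UI case, assuming the container started by the docker run above ("site can't be reached" on a cloud host is often a firewall/security-group issue rather than Rancher itself; the container id below is a placeholder):

```shell
docker ps                              # confirm 0.0.0.0:80->80 and 0.0.0.0:443->443 are mapped
docker logs --tail 50 <container-id>   # look for bootstrap errors in the Rancher logs
curl -kv https://localhost             # if this responds locally, suspect the firewall/security group
```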
02/21/2023, 2:05 PM
bumpy-printer-21267
02/21/2023, 6:54 PM
victorious-mouse-54341
02/21/2023, 7:35 PM
--node-ip=192.169.7.10"
My full k3d syntax (if it matters) is:
k3d cluster create doctorconsul --network doctorconsul_wan --api-port 127.0.0.1:6443 --k3s-arg="--disable=traefik@server:0" -p "8502:443@loadbalancer --node-ip=192.169.7.10"
If someone could confirm that the arg is correct in k3s, that will help me dig into k3d. Thanks!
stocky-dream-23501
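One thing worth checking before digging into k3d: in the command as pasted, --node-ip=192.169.7.10 sits inside the quotes of the -p port-mapping argument, so it is parsed as part of the port filter and never reaches k3s at all. A sketch of the same command with the flag moved into its own --k3s-arg (the @server:0 node filter follows k3d v5 syntax; the IP is taken from the original message):

```shell
k3d cluster create doctorconsul \
  --network doctorconsul_wan \
  --api-port 127.0.0.1:6443 \
  --k3s-arg "--disable=traefik@server:0" \
  --k3s-arg "--node-ip=192.169.7.10@server:0" \
  -p "8502:443@loadbalancer"
```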
02/21/2023, 8:03 PM
bright-fireman-42144
02/21/2023, 10:14 PM
INSTALL_K3S_VERSION="v1.24.7+k3s1"
red-vegetable-45199
02/22/2023, 12:49 AM
$ make quick-release
+++ [0221 16:32:42] Verifying Prerequisites....
+++ [0221 16:32:42] Using Docker for MacOS
+++ [0221 16:32:43] Building Docker image kube-build:build-1b391dbecf-5-v1.15.15-legacy-1
+++ Docker build command failed for kube-build:build-1b391dbecf-5-v1.15.15-legacy-1
#1 [internal] load build definition from Dockerfile
#1 sha256:6e1255cd7055eb1cfae67de69d45beeac7b5ba54045fd75efabf9a5c9b66e94c
#1 transferring dockerfile: 1.97kB done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:023df0965c52a498f67650eb5839999db1772ef4662fc6bf683096ebea80ecfe
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for k8s.gcr.io/build-image/kube-cross:v1.15.15-legacy-1
#3 sha256:de1430dc4d9c424151ed461aaeef09868cd97b36305b30b581cd87032a3545d5
#3 ERROR: no match for platform in manifest sha256:f393f7b488a9488fc1dff42ba24e2674f9a2d962c7dedd93a457aaf414ab956e: not found
------
> [internal] load metadata for k8s.gcr.io/build-image/kube-cross:v1.15.15-legacy-1:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: no match for platform in manifest sha256:f393f7b488a9488fc1dff42ba24e2674f9a2d962c7dedd93a457aaf414ab956e: not found
To retry manually, run:
docker build -t kube-build:build-1b391dbecf-5-v1.15.15-legacy-1 --pull=false --build-arg=KUBE_BUILD_IMAGE_CROSS_TAG=v1.15.15-legacy-1 --build-arg=KUBE_BASE_IMAGE_REGISTRY=k8s.gcr.io/build-image /Users/yongxiang.gao/source/kubernetes/_output/images/kube-build:build-1b391dbecf-5-v1.15.15-legacy-1
!!! [0221 16:32:44] Call tree:
!!! [0221 16:32:44] 1: build/release.sh:35 kube::build::build_image(...)
make: *** [quick-release] Error 1
How do I fix this?
narrow-lunch-43082
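The "no match for platform in manifest" failure on the kube-cross image usually means an arm64 (Apple Silicon) Docker pulling a tag that only publishes amd64 images. Assuming that is the case here, one workaround is Docker's documented DOCKER_DEFAULT_PLATFORM variable, which forces amd64 pulls (the build then runs under emulation, so it is slower):

```shell
# force amd64 image pulls under Docker for Mac on Apple Silicon,
# then retry the release build
export DOCKER_DEFAULT_PLATFORM=linux/amd64
make quick-release
```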
02/22/2023, 2:33 AM
dazzling-holiday-3716
02/22/2023, 4:25 AM
dazzling-holiday-3716
02/22/2023, 4:25 AM
WSL2 is not supported with your current machine configuration.
Please enable the "Virtual Machine Platform" optional component and ensure virtualization is enabled in the BIOS.
For information please visit https://aka.ms/enablevirtualization
Error code: Wsl/Service/CreateVm/0x80370102
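Error 0x80370102 means WSL2's virtual machine could not start. Per the linked Microsoft page, the usual fixes are enabling the "Virtual Machine Platform" optional feature and turning on virtualization (VT-x/AMD-V) in the BIOS/UEFI. The feature-enable step, from an elevated PowerShell or cmd prompt (a documented Windows command; a reboot is required afterwards):

```powershell
# enable the optional feature WSL2 depends on, then reboot
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```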
dazzling-holiday-3716
02/22/2023, 4:25 AM
2023-02-22T04:22:36.568Z: Registered distributions:
2023-02-22T04:22:37.179Z: Registered distributions:
2023-02-22T04:22:38.578Z: WSL failed to execute wsl.exe --import rancher-desktop C:\Users\Madhu\AppData\Local\rancher-desktop\distro C:\Program Files\Rancher Desktop\resources\resources\win32\distro-0.31.tar --version 2: Error: wsl.exe exited with code 4294967295
dazzling-holiday-3716
02/22/2023, 4:26 AM