colossal-spring-98913
09/21/2023, 2:32 AM
token in config.yaml and the EncryptionConfig? I'm trying to figure out how just the token is sufficient to restore a node from an etcd backup. Is the encryptionConfig also stored in etcd?
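A minimal sketch of the restore flow being asked about, assuming RKE2 defaults; the snapshot name and the token value below are placeholders:
# stop the server, then reset the cluster from a local etcd snapshot,
# supplying only the original cluster token
systemctl stop rke2-server
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/<snapshot-name> \
  --token=<original-cluster-token>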
sparse-flag-14809
09/21/2023, 2:32 PM
sparse-flag-14809
09/21/2023, 2:33 PM
sparse-flag-14809
09/21/2023, 2:34 PM
wonderful-rain-13345
09/21/2023, 6:42 PM
E0921 18:35:26.833337 270467 memcache.go:265] couldn't get current server API group list: Get "https://rancher.internal.nullreference.io/k8s/clusters/local/api?timeout=32s": tls: failed to verify certificate: x509: certificate signed by unknown authority
ambitious-plastic-3551
09/22/2023, 1:09 PM
mammoth-memory-36508
09/22/2023, 10:11 PM
fierce-tomato-30072
09/25/2023, 1:22 AM
root 2159939 9.8 5.3 1198428 438608 ? Ssl Sep24 67:40 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --advertise-address=IP PUBLIC
my config:
server: https://192.168.0.61:9345
data-dir: /var/lib/rancher/rke2
tls-san:
  - cluster.local
  - 192.168.0.61
  - 14.225.53.251
node-external-ip: IP PUBLIC
But "kubectl get nodes -o wide" is nothing
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-01 Ready control-plane,etcd,master 2d17h v1.26.4+rke2r1 192.168.0.61 <none> Ubuntu 22.04.2 LTS 5.15.0-75-generic <containerd://1.6.19-k3s1>
worker-01 Ready <none> 2d17h v1.26.4+rke2r1 192.168.0.5 <none> Ubuntu 22.04.2 LTS 5.15.0-75-generic <containerd://1.6.19-k3>
Thanks all.brave-lamp-48253
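A hedged sketch of how node-external-ip is usually set per node in /etc/rancher/rke2/config.yaml; each node advertises only its own external address, and whether this is the problem here depends on where the option was set. Addresses below reuse the ones pasted above as placeholders:
# /etc/rancher/rke2/config.yaml on master-01 (sketch; values are placeholders)
node-ip: 192.168.0.61
node-external-ip: 14.225.53.251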
brave-lamp-48253
09/25/2023, 7:22 AM
bland-machine-10503
09/25/2023, 4:54 PM
mammoth-memory-36508
09/25/2023, 9:51 PM
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  namespaceSelector: projectcalico.org/name not in {'kube-system', 'calico-system', 'calico-apiserver', 'default', 'cattle-system', 'cattle-fleet-system', 'cattle-impersonation-system', 'tigera-operator', 'kube-node-lease'}
  types:
    - Ingress
    - Egress
  egress:
    - action: Allow
      protocol: UDP
      destination:
        selector: 'k8s-app == "kube-dns"'
      ports:
        - 53
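A hedged usage note: GlobalNetworkPolicy is a cluster-scoped projectcalico.org/v3 resource, so it is normally applied with calicoctl (kubectl only works when the Calico API server is installed); the filename is a placeholder:
# apply the policy above with calicoctl
calicoctl apply -f default-deny.yaml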
rhythmic-boots-77358
09/26/2023, 1:54 AM
rke.cattle.io/init-node via the rancher v3 api? From what I can see, this information is only known on the machine plan, which isn't accessible via the api.
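A hypothetical sketch of one way to look this up outside the v3 API, assuming access to the Rancher management (local) cluster; the namespace and the assumption that the annotation is exposed on the CAPI Machine objects are both unverified:
# list machines in the provisioning namespace and print any
# rke.cattle.io/init-node annotation they carry (assumption: the
# annotation is visible there in this Rancher version)
kubectl get machines.cluster.x-k8s.io -n fleet-default -o json \
  | jq -r '.items[] | [.metadata.name, (.metadata.annotations["rke.cattle.io/init-node"] // "-")] | @tsv'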
swift-sunset-4572
09/26/2023, 8:44 AM
icy-secretary-33916
09/26/2023, 10:22 AM
ambitious-plastic-3551
09/26/2023, 11:30 AM
ambitious-plastic-3551
09/26/2023, 12:02 PM
ambitious-plastic-3551
09/26/2023, 12:04 PM
echoing-father-81877
09/26/2023, 4:44 PM
salmon-hair-72590
09/27/2023, 5:24 AM
ambitious-plastic-3551
09/27/2023, 8:43 AM
swift-sunset-4572
09/27/2023, 12:11 PM
best-jordan-89798
09/27/2023, 12:29 PM
colossal-spring-98913
09/27/2023, 6:39 PM
clever-processor-78736
09/27/2023, 7:32 PM
cilium image from a specific feature branch from upstream (cilium/cilium) and deploy that as part of an rke2 deployment using the rke2-cilium chart via Rancher Manager. I know how to build the container image and I've pushed it to a private image registry; deploying that image is no problem since I can just add the image + tag to the rke2-cilium chart values. The thing is that the feature branch in the cilium repo includes changes to the official Helm chart (it adds additional values), which means I need to push a custom Helm chart to a private chart repo. AFAICT the official cilium Helm chart is patched as part of the process to make it into the rke2-cilium chart. I can probably apply the same patch and add the changes to the upstream Helm chart, but how can I tell Rancher to fetch rke2-cilium from my private chart repo?
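A hedged sketch of one possible approach, not a Rancher-documented procedure: RKE2 deploys its packaged charts through helm-controller HelmChart objects, so a HelmChart manifest with the same name could point at a private repo. The repo URL, chart version, and image values below are placeholders:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  repo: https://charts.example.internal   # hypothetical private chart repo
  chart: rke2-cilium
  version: 1.14.100-custom                # hypothetical patched chart version
  valuesContent: |-
    image:
      repository: registry.example.internal/cilium/cilium
      tag: my-feature-branch
      useDigest: false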
broad-eye-24995
09/28/2023, 2:27 AM
abundant-account-84261
09/28/2023, 10:09 AM
miniature-sandwich-48416
09/28/2023, 4:33 PM
rke2-ingress-nginx-controller I have in my 2-node RKE2 cluster. I think this behaviour started after I installed the second node; I did not experience it with only a single RKE2 node. The issue is that the Ingress resources are continuously refreshing... I looked into the log of the rke2-ingress-nginx-controller and I see many records like:
I0927 19:15:06.246738 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45134625", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:16:05.890433 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45135047", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:17:06.437723 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45135508", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:18:05.886983 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45135925", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:19:06.435988 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45136387", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:20:07.327070 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45136847", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:21:06.764093 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45137287", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0927 19:22:07.334879 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"vault", Name:"vault", UID:"5c2ba786-f9ef-4bfe-8449-2c9cdda1ed29", APIVersion:"networking.k8s.io/v1", ResourceVersion:"45137728", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
Working with ArgoCD, applications are literally always Progressing because of this "Scheduled for sync".
Do you have experience with similar behaviour? I am curious how to debug and resolve this issue. Requests coming to the Ingress resources are slow because of this scheduled-sync issue.
There are, for example, 309 such events on a single Ingress resource over a short period of time (see the event-listing sketch after the manifest below).
This is an example of the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    argocd.argoproj.io/instance: vault
    helm.sh/chart: vault-0.25.0
  name: vault
  namespace: vault
spec:
  ingressClassName: nginx
  rules:
    - host: vault.my.online
      http:
        paths:
          - backend:
              service:
                name: vault
                port:
                  number: 8200
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - vault.my.online
      secretName: vault-ingress-tls
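A hedged debugging sketch for the question above: list the Sync events recorded against this Ingress to confirm how often the controller reschedules it (namespace and name taken from the manifest above):
# show all 'Sync' events on the vault Ingress, oldest first
kubectl get events -n vault \
  --field-selector involvedObject.kind=Ingress,involvedObject.name=vault,reason=Sync \
  --sort-by=.lastTimestamp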
wide-midnight-35930
09/28/2023, 7:27 PM
cattle-logging-system is listing over 300 completed pods like rancher-logging-root-fluentd-configcheck-45909352 in Completed state. For some reason the pods are not disappearing. I do not see any option such as revisionHistoryLimit to set as a value in the Helm chart in order to remove completed pods automatically. Is there any reason why those pods are still there? Any proper solution for this? Thanks
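A hedged workaround sketch for the question above (it removes the symptom, not whatever keeps creating the pods): delete pods in the Succeeded phase in that namespace:
# remove completed (Succeeded) pods in cattle-logging-system
kubectl delete pods -n cattle-logging-system --field-selector=status.phase=Succeeded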
abundant-account-84261
09/29/2023, 12:39 AM
sparse-flag-14809
09/29/2023, 3:53 PM