red-waitress-37932
05/23/2023, 3:57 PM
able-scientist-83553
05/23/2023, 8:24 PM
rdctl set --experimental.virtual-machine.type vz
I tried following the instructions here to enable it from the UI, but it isn't working, as I don't have a global settings button in the top left. Can anyone help?shy-mouse-46102
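For what it's worth, the CLI route usually works even when the GUI toggle isn't visible. A minimal sketch, assuming rdctl is on the PATH and your Rancher Desktop version exposes the experimental VZ setting:

# switch the experimental VM type to VZ
rdctl set --experimental.virtual-machine.type vz
# confirm the change took effect; prints the current settings as JSON
rdctl list-settings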
05/24/2023, 9:06 AM
rapid-house-54346
05/24/2023, 9:11 AM
bland-translator-58922
05/24/2023, 9:54 AM
big-judge-33880
05/24/2023, 10:44 AM
hallowed-window-565
05/24/2023, 11:20 AM
hallowed-window-565
05/24/2023, 11:22 AM
ancient-car-38783
05/24/2023, 12:44 PM
kubectl logs --timestamps -n cert-manager cert-manager-webhook-5d4fd5cb7f-mq94z | tail -5
2023-05-24T14:20:58.529881904+02:00 I0524 12:20:58.513458 1 logs.go:59] http: TLS handshake error from 89.233.X.X:34022: read tcp 10.42.1.196:10250->89.233.X.X:34022: read: connection reset by peer
2023-05-24T14:20:58.529886873+02:00 I0524 12:20:58.514768 1 logs.go:59] http: TLS handshake error from 89.233.X.X:34028: EOF
2023-05-24T14:20:58.538134400+02:00 I0524 12:20:58.522918 1 logs.go:59] http: TLS handshake error from 89.233.X.X:34058: read tcp 10.42.1.196:10250->89.233.X.X:34058: read: connection reset by peer
2023-05-24T14:20:58.557263869+02:00 I0524 12:20:58.524215 1 logs.go:59] http: TLS handshake error from 89.233.X.X:34034: EOF
2023-05-24T14:20:58.563356287+02:00 I0524 12:20:58.558927 1 logs.go:59] http: TLS handshake error from 89.233.X.X:34042: read tcp 10.42.1.196:10250->89.233.X.X:34042: read: connection reset by peer
Is there any way to get the --insecure-skip-tls-verify functionality into the command(s) that need executing?
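If the commands in question are plain kubectl invocations, kubectl itself accepts a global --insecure-skip-tls-verify flag. A sketch only (it disables validation of the API server's certificate, so keep it to debugging):

kubectl --insecure-skip-tls-verify logs --timestamps -n cert-manager cert-manager-webhook-5d4fd5cb7f-mq94z | tail -5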
---
2. It's trying to use 'rancher' as its Issuer, whereas it should/could be using the ClusterIssuer named "letsencrypt-production".
kubectl cert-manager -n cattle-system status certificate tls-rancher-ingress
Name: tls-rancher-ingress
Namespace: cattle-system
Created at: 2023-05-24T14:20:57+02:00
Conditions:
Issuing: True, Reason: DoesNotExist, Message: Issuing certificate as Secret does not exist
Ready: False, Reason: DoesNotExist, Message: Issuing certificate as Secret does not exist
DNS Names:
- cluster.domain.ext
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 19m cert-manager-certificates-trigger Issuing certificate as Secret does not exist
Normal Generated 19m cert-manager-certificates-key-manager Stored new private key in temporary Secret resource "tls-rancher-ingress-xxwck"
Normal Requested 19m cert-manager-certificates-request-manager Created new CertificateRequest resource "tls-rancher-ingress-8wb4j"
error when getting Issuer: issuers.cert-manager.io "rancher" not found
error when finding Secret "tls-rancher-ingress": secrets "tls-rancher-ingress" not found
Not Before: <none>
Not After: <none>
Renewal Time: <none>
CertificateRequest:
Name: tls-rancher-ingress-8wb4j
Namespace: cattle-system
Conditions:
Approved: True, Reason: cert-manager.io, Message: Certificate request has been approved by cert-manager.io
Ready: False, Reason: Pending, Message: Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitingForApproval 19m cert-manager-certificaterequests-issuer-ca Not signing CertificateRequest until it is Approved
Normal WaitingForApproval 19m cert-manager-certificaterequests-issuer-selfsigned Not signing CertificateRequest until it is Approved
Normal WaitingForApproval 19m cert-manager-certificaterequests-issuer-venafi Not signing CertificateRequest until it is Approved
Normal WaitingForApproval 19m cert-manager-certificaterequests-issuer-vault Not signing CertificateRequest until it is Approved
Normal WaitingForApproval 19m cert-manager-certificaterequests-issuer-acme Not signing CertificateRequest until it is Approved
Normal cert-manager.io 19m cert-manager-certificaterequests-approver Certificate request has been approved by cert-manager.io
Normal IssuerNotFound 19m cert-manager-certificaterequests-issuer-selfsigned Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
Normal IssuerNotFound 19m cert-manager-certificaterequests-issuer-ca Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
Normal IssuerNotFound 19m cert-manager-certificaterequests-issuer-vault Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
Normal IssuerNotFound 19m cert-manager-certificaterequests-issuer-acme Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
Normal IssuerNotFound 19m cert-manager-certificaterequests-issuer-venafi Referenced "Issuer" not found: issuer.cert-manager.io "rancher" not found
---
Anyone who could guide me in the right direction? I'm somewhat blindly trying search terms and their results, but have been unsuccessful so far.shy-boots-25209
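One thing that might be worth trying, as a sketch rather than a verified fix (the Certificate is normally owned by the Rancher Helm chart, which may revert manual edits): point the Certificate's issuerRef at the existing ClusterIssuer instead of the missing Issuer.

# repoint the certificate at the ClusterIssuer that actually exists
kubectl -n cattle-system patch certificate tls-rancher-ingress --type merge \
  -p '{"spec":{"issuerRef":{"name":"letsencrypt-production","kind":"ClusterIssuer","group":"cert-manager.io"}}}'

It may also be worth checking which ingress.tls.source the Rancher chart was installed with, since that value determines which issuer the chart expects to exist.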
05/24/2023, 12:59 PM
rhythmic-guitar-48301
05/24/2023, 1:54 PM
Chart requires kubeVersion: >= 1.21.0-0 which is incompatible with Kubernetes v1.20.0
However, my cluster is using v1.25.5+k3s1.
I saw an issue on the fleet repo that said the fix is to upgrade to a more recent version of Fleet. I tried updating with Helm to 0.7.0-rc3, but I am still seeing rancher/fleet:v0.6.0 and rancher/fleet-agent:v0.6.0 when I look at my pods.
➜ ~ helm -n cattle-fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/v0.7.0-rc.3/fleet-0.7.0-rc.3.tgz
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/patrick/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/patrick/.kube/config
NAME: fleet
LAST DEPLOYED: Tue May 23 23:05:55 2023
NAMESPACE: cattle-fleet-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Not sure why this isn't updated.future-fountain-82544
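A couple of read-only checks might narrow it down. This is a sketch; note that Rancher manages its own bundled Fleet chart, so a manually installed release can sit alongside, or be reconciled over by, the one Rancher deploys.

# which Fleet releases exist in the namespace, and from which chart versions
helm -n cattle-fleet-system list --all
# which image the running controller is actually using
kubectl -n cattle-fleet-system get deploy fleet-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'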
05/24/2023, 1:57 PM
INFO: https://REDACTED/ping is accessible
INFO: REDACTED resolves to REDACTED
time="2023-05-24T13:45:38Z" level=info msg="Listening on /tmp/log.sock"
time="2023-05-24T13:45:38Z" level=info msg="Rancher agent version v2.7.1 is starting"
time="2023-05-24T13:45:38Z" level=fatal msg="looking up cattle-system/cattle ca/token: failed to find service account cattle-system/ca
ttle: serviceaccounts \"cattle\" is forbidden: User \"system:serviceaccount:cattle-system:cattle\" cannot get resource \"serviceaccoun
ts\" in API group \"\" in the namespace \"cattle-system\""
After a few pod crash/restarts, it calms down and I see this:
time="2023-05-24T13:46:50Z" level=info msg="Listening on /tmp/log.sock"
time="2023-05-24T13:46:50Z" level=info msg="Rancher agent version v2.7.1 is starting"
time="2023-05-24T13:46:50Z" level=info msg="Connecting to <wss://REDACTED/v3/connect/register> with token starting with
[REDACTED]"
time="2023-05-24T13:46:50Z" level=info msg="Connecting to proxy" url="<wss://REDACTED/v3/connect/register>"
When I enable debug logging in the pod, I see an occasional "Wrote ping" message, but not much else.
Any ideas on where to start looking?eager-nightfall-87875
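One place to start might be confirming that the RBAC the agent expects is actually in place; a sketch, with the names taken from the fatal error above:

# does the service account the agent runs as exist?
kubectl -n cattle-system get serviceaccount cattle
# can it read service accounts in its own namespace, as the error requires?
kubectl auth can-i get serviceaccounts -n cattle-system \
  --as=system:serviceaccount:cattle-system:cattle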
05/24/2023, 2:55 PM
miniature-ambulance-98143
05/24/2023, 4:36 PM
busy-napkin-41956
05/24/2023, 5:09 PM
bulky-eve-17563
05/25/2023, 9:04 AM
freezing-hairdresser-79403
05/25/2023, 9:15 AM
• *.local.rke.example.com
• *.downstream1.rke.example.com
• *.downstream2.rke.example.com
• *.downstream3.rke.example.com
However, the Let's Encrypt production API has a limit of 50 certificates per registered domain per week.
Since the registered domain is example.com, I may quickly reach this limit.
I'm not sure how the renewal process for the certificates works if I decide to use the ACME Terraform provider. Specifically, I'm uncertain whether renewing the certificates requires executing an additional terraform apply.
Is there another solution to achieve this, please?astonishing-mouse-9587
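One way to stay well clear of the rate limit is to request a single certificate that carries all four wildcard names as SANs: that counts as one certificate per issuance, and if cert-manager issues it, renewal is handled automatically with no extra terraform apply. A minimal sketch, assuming a ClusterIssuer named letsencrypt-production with a DNS-01 solver already exists (the certificate name, namespace, and secret name below are made up for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rke-wildcards
  namespace: cattle-system
spec:
  secretName: rke-wildcards-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
    - "*.local.rke.example.com"
    - "*.downstream1.rke.example.com"
    - "*.downstream2.rke.example.com"
    - "*.downstream3.rke.example.com"
EOF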
05/25/2023, 9:33 AM
great-monkey-83864
05/25/2023, 10:33 AM
/run/desktop/mnt/host/c/some_path
There seems to be some sort of mapping in RD too, but is there a way I can have a generic config that maps the above for all pods, so we don't need to update the reference in every location when switching between?
tajolly-area-75887
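One generic pattern, sketched here rather than anything Rancher Desktop specifically documents: wrap the host path in a hostPath PersistentVolume and have every pod mount a PVC, so the raw path is referenced in exactly one place. The names (host-c-data, storageClassName manual) are made up for illustration.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-c-data
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  hostPath:
    path: /run/desktop/mnt/host/c/some_path  # the only place the host path appears
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-c-data
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 10Gi
EOF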
05/25/2023, 11:31 AM
Unknown error: samltokens.management.cattle.io "xxxxx" not found
millions-receptionist-27635
05/25/2023, 12:45 PM
Error creating machine: Error running "sudo apt-get update": ssh command error: command: sudo apt-get update err: exit status 100 output: Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
[ERROR] handler node-controller: Error creating machine: Error installing Docker: , requeuing
[ERROR] handler node-controller: Error creating machine: Error running "sudo apt-get update": ssh command error: command: sudo apt-get update err: exit status 100 output: Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB], requeuing
Does anyone have an idea whether it could be the node driver or something on DigitalOcean's side? Or are there other ways to debug the problem besides looking through the Rancher logs?busy-napkin-41956
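One low-tech way to get past the truncated output, assuming you can still SSH to the droplet Rancher created (a sketch; the address is a placeholder): run the failing step by hand and read the full apt error, since exit status 100 usually points at a repository, lock, or dpkg problem that the truncated log hides.

ssh root@<droplet-ip> 'sudo apt-get update; echo "exit status: $?"'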
05/25/2023, 12:50 PM
miniature-ambulance-98143
05/25/2023, 4:38 PM
delightful-gigabyte-66989
05/26/2023, 1:56 AM
lemon-noon-36352
05/26/2023, 10:15 AM
wooden-motorcycle-17772
05/26/2023, 1:09 PM
big-judge-33880
05/26/2023, 3:45 PM
boundless-dog-9864
05/26/2023, 7:33 PM
hundreds-jackal-3140
05/27/2023, 3:56 PM
time="2023-05-27T15:25:50Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
E0527 15:25:55.611263 101 controller.go:163] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
time="2023-05-27T15:25:55Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
time="2023-05-27T15:25:57Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
error creating chain "KUBE-IPTABLES-HINT": exit status 3: Ignoring deprecated --wait-interval option.
E0527 15:25:59.599063 101 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"a1861d6e3f719d7af4534bce849a6b710b8aeec31ebc26efa1b499eb57ed0a64\": not found" podSandboxID="a1861d6e3f719d7af4534bce849a6b710b8aeec31ebc26efa1b499eb57ed0a64"
E0527 15:26:00.166615 101 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"ccaa4af90b9d268420d713770e59e0cddd6a5df90cf6351ac73e18e552d56fd5\": not found" podSandboxID="ccaa4af90b9d268420d713770e59e0cddd6a5df90cf6351ac73e18e552d56fd5"
Trace[108069360]: ---"Write to database call finished" len:2979,err:Internal error occurred: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=10s": context deadline exceeded 10003ms (15:26:09.987)
time="2023-05-27T15:26:09Z" level=warning msg="Failed to create Kubernetes secret: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=10s\": context deadline exceeded"
Trace[1354403927]: ---"Write to database call finished" len:326,err:Internal error occurred: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=10s": context deadline exceeded 10000ms (15:26:10.169)
time="2023-05-27T15:26:10Z" level=warning msg="Error ensuring node password secret for pre-validated node 'local-node': Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=10s\": context deadline exceeded"
error checking rule: exit status 2: Ignoring deprecated --wait-interval option.
E0527 15:26:10.367605 101 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"ce831e413c98914abb6b3b94bfec7beae966a72ca73be22501da503d6a8eb095\": not found" podSandboxID="ce831e413c98914abb6b3b94bfec7beae966a72ca73be22501da503d6a8eb095"
W0527 15:26:13.073302 101 dispatcher.go:174] Failed calling webhook, failing open rancher.cattle.io.features.management.cattle.io: failed calling webhook "rancher.cattle.io.features.management.cattle.io": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation/features.management.cattle.io?timeout=10s": proxy error from 127.0.0.1:6443 while dialing 10.42.0.52:9443, code 503: 503 Service Unavailable
E0527 15:26:13.073346 101 dispatcher.go:181] failed calling webhook "rancher.cattle.io.features.management.cattle.io": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation/features.management.cattle.io?timeout=10s": proxy error from 127.0.0.1:6443 while dialing 10.42.0.52:9443, code 503: 503 Service Unavailable
E0527 15:26:31.769067 101 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"fleet-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=fleet-controller pod=fleet-controller-55c547c6d5-47jjd_cattle-fleet-system(949dc5b1-1aff-4c5f-912d-8f1b37793c68)\"" pod="cattle-fleet-system/fleet-controller-55c547c6d5-47jjd" podUID=949dc5b1-1aff-4c5f-912d-8f1b37793c68
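Since most of these errors are timeouts or 503s calling rancher-webhook, it may be worth checking whether that pod and its service endpoints are actually up before digging further. A sketch; the app=rancher-webhook label is the chart's usual one and may differ:

kubectl -n cattle-system get pods -l app=rancher-webhook -o wide
kubectl -n cattle-system get endpoints rancher-webhook
kubectl -n cattle-system logs deploy/rancher-webhook --tail=50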
crooked-orange-82673
05/28/2023, 12:54 PM