espanol

hundreds-terabyte-36933
09/14/2022, 3:26 PM
Hi, does anyone have a site with good documentation for Rancher 2.6 Continuous Delivery? English or Spanish, I don't mind.
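
Continuous Delivery in Rancher 2.6 is powered by Fleet, which is documented at https://fleet.rancher.io/ and in the Rancher 2.6 manual under "Continuous Delivery". As a minimal sketch of how a deployment is declared there, assuming the fleet-local namespace (which targets the local cluster) and the public rancher/fleet-examples repository used in the Fleet quickstart:

# A GitRepo resource asks Fleet to watch a Git repository and keep the
# manifests under the listed paths deployed to the targeted clusters.
kubectl apply -f - <<'EOF'
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  namespace: fleet-local
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - simple
EOF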

quick-sandwich-76600
10/28/2022, 7:19 PM
Cloud Native Tech sessions in Madrid on November 16. We will have technical sessions on Rancher, NeuVector, and Longhorn. Information and registration: https://more.suse.com/SUSE_Cloud_Native_Tech_Sessions_LP_ES.html

cool-intern-86930
12/31/2022, 2:27 PM
Hi, is anyone alive?

worried-plastic-58654
12/31/2022, 8:40 PM
👍

hundreds-jewelry-18968
01/09/2023, 1:16 AM
Hi everyone. I have a cluster running k3s on a few Raspberry Pis. It has worked perfectly, but this is the second time I've hit this error:
    ● k3s.service - Lightweight Kubernetes
         Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2023-01-09 00:57:40 GMT; 26s ago
           Docs: https://k3s.io
        Process: 29663 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
        Process: 29665 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
        Process: 29666 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
       Main PID: 29667 (k3s-server)
          Tasks: 105
         Memory: 713.5M
            CPU: 55.998s
         CGroup: /system.slice/k3s.service
                 ├─27643 /var/lib/rancher/k3s/data/73e12683e3eb1c52a6370763bda6bb977c304a3b6cda6e5c7656f45f3725e7dd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 866ca3db8ed3>
                 ├─27988 /var/lib/rancher/k3s/data/73e12683e3eb1c52a6370763bda6bb977c304a3b6cda6e5c7656f45f3725e7dd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 699af3f5d3c1>
                 ├─28196 /var/lib/rancher/k3s/data/73e12683e3eb1c52a6370763bda6bb977c304a3b6cda6e5c7656f45f3725e7dd/bin/containerd-shim-runc-v2 -namespace k8s.io -id ac9ec12775e6>
                 ├─28317 /var/lib/rancher/k3s/data/73e12683e3eb1c52a6370763bda6bb977c304a3b6cda6e5c7656f45f3725e7dd/bin/containerd-shim-runc-v2 -namespace k8s.io -id b345387839df>
                 ├─28535 /var/lib/rancher/k3s/data/73e12683e3eb1c52a6370763bda6bb977c304a3b6cda6e5c7656f45f3725e7dd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 880445942eb9>
                 ├─29667 /usr/local/bin/k3s server
                 └─29683 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/ra>
    
    Jan 09 00:58:01 facio k3s[29667]: I0109 00:58:01.010177   29667 ttl_controller.go:121] Starting TTL controller
    Jan 09 00:58:01 facio k3s[29667]: I0109 00:58:01.010203   29667 shared_informer.go:240] Waiting for caches to sync for TTL
    Jan 09 00:58:01 facio k3s[29667]: I0109 00:58:01.022756   29667 node_ipam_controller.go:91] Sending events to api server.
    Jan 09 00:58:01 facio k3s[29667]: E0109 00:58:01.441328   29667 authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet v>
    Jan 09 00:58:04 facio k3s[29667]: E0109 00:58:04.547721   29667 authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet v>
    Jan 09 00:58:04 facio k3s[29667]: E0109 00:58:04.548614   29667 authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet v>
    Jan 09 00:58:04 facio k3s[29667]: E0109 00:58:04.548867   29667 authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet v>
If I remember correctly, this cluster was about to reach a full year running without interruption. I haven't touched anything. Does anyone know how to fix the error
"Unable to authenticate the request" err="[x509: certificate has expired or is not yet v>"
? The first time I saw this error I reinstalled the cluster, and I'd like to avoid doing that again.
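
The log points at expired TLS certificates. k3s issues its client and server certificates with a 365-day lifetime, which lines up with the cluster approaching a year of uptime. As a quick check of which certificates have expired, a sketch assuming a default k3s install (adjust the path if --data-dir was changed):

# Print the expiry date of every server-side k3s certificate
# (run as root: the k3s data directory is not world-readable)
sudo sh -c '
for crt in /var/lib/rancher/k3s/server/tls/*.crt; do
    printf "%s: " "$crt"
    openssl x509 -noout -enddate -in "$crt"
done'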

hundreds-jewelry-18968
01/09/2023, 1:20 AM
Looking through the journal, I see this message:
    Jan 09 01:18:01 facio k3s[29667]: E0109 01:18:01.542615   29667 authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-01-09T01:18:01Z is after 2023-01-08T22:37:57Z, verifying certificate SN=2555825581207100645, SKID=,AKID=8F:E9:C8:2E:7F:91:B7:01:BC:7A:48:DD:77:8C:6A:EF:92:DA:E5:58 failed: x509: certificate has expired or is not yet valid: current time 2023-01-09T01:18:01Z is after 2023-01-08T22:37:57Z]"
I don't know how this problem came about:
current time 2023-01-09T01:18:01Z is after 2023-01-08T22:37:57Z
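
The journal entry includes the serial number of the offending certificate (SN=2555825581207100645), which can be matched against the certificates on disk to see exactly which one expired. A sketch, again assuming the default k3s data directory; note that the journal prints the serial in decimal while openssl prints it in hex:

# Convert the decimal serial from the journal entry to hex
printf '%X\n' 2555825581207100645

# List the serial number of each k3s certificate on disk (run as root)
sudo sh -c '
for crt in /var/lib/rancher/k3s/server/tls/*.crt /var/lib/rancher/k3s/agent/*.crt; do
    printf "%s " "$crt"
    openssl x509 -noout -serial -in "$crt"
done'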

hundreds-jewelry-18968
01/09/2023, 1:31 AM
One more detail: I can use kubectl on the master node and see the cluster, but the service keeps showing the errors I posted earlier.
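
That kubectl still works is consistent with the certificates having been issued at slightly different times, so the admin client certificate can remain valid for a while after the serving certificate has expired. k3s automatically renews any certificate that is expired or within 90 days of expiring each time the service starts, so a restart is usually enough to recover without reinstalling. A sketch, assuming k3s runs under systemd as in the status output above; the kubeconfig copy at the end is only needed if kubectl authentication breaks after the rotation, and ~/.kube/config is just the conventional location:

# Restarting k3s renews certificates that are expired or within
# 90 days of expiring
sudo systemctl restart k3s

# Confirm the serving certificate now has a fresh expiry date
sudo openssl x509 -noout -enddate \
    -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt

# If kubectl stops authenticating afterwards, refresh the admin kubeconfig
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config

Recent k3s releases also ship a k3s certificate rotate subcommand (run while the service is stopped) for rotating the certificates explicitly.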