bitter-bear-79635
01/23/2023, 5:35 PMbitter-bear-79635
01/23/2023, 5:42 PMwooden-addition-88668
01/24/2023, 2:33 PMwooden-addition-88668
01/24/2023, 2:34 PMloud-daybreak-83328
01/26/2023, 6:36 PMoidc-client-id: <http://myclient.example.org|myclient.example.org>
oidc-groups-claim: groups
oidc-issuer-url: <https://keycloak.example.org/realms/test>
oidc-username-claim: preferred_username
When I get a token and try to use it to authenticate (just did a kubectl --token=XXXXXXXXXX get nodes, I get a message: error: You must be logged in to the server (the server has asked for the client to provide credentials) and the kube-api server just logs this:
time="2023-01-26T18:31:29Z" level=info msg="Processing v1Authenticate request..."
time="2023-01-26T18:31:29Z" level=error msg="found 1 parts of token"
has anyone done this?famous-lizard-52395
02/02/2023, 5:02 PMbest-fountain-73060
02/06/2023, 8:57 AMeager-london-83975
02/06/2023, 2:27 PMeager-london-83975
02/06/2023, 2:27 PMeager-london-83975
02/06/2023, 2:28 PMeager-london-83975
02/06/2023, 2:28 PMeager-london-83975
02/06/2023, 2:29 PMeager-london-83975
02/06/2023, 2:29 PMquaint-candle-18606
02/06/2023, 7:46 PMable-analyst-76573
02/10/2023, 10:18 PMlittle-gpu-19383
02/13/2023, 10:11 AMlittle-gpu-19383
02/13/2023, 10:12 AMlimited-eye-27484
02/13/2023, 11:16 PMkube-apiserver
container is showing a lot of errors like this:
E0213 22:18:56.420705 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: <http://leases.coordination.k8s.io|leases.coordination.k8s.io> "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "<http://coordination.k8s.io|coordination.k8s.io>" in the namespace "kube-system"
Where should I even be looking in Rancher to start troubleshooting this problem?little-ambulance-5584
02/16/2023, 2:43 AMblue-farmer-46993
02/20/2023, 9:27 AMlittle-horse-77834
02/23/2023, 4:07 PMkubectl -n myns get all > myns.yaml
and then generator myns.yaml
lemon-application-97336
02/23/2023, 4:28 PMhelm upgrade --install rancher rancher-stable/rancher --namespace cattle-system --set hostname="myhost.mydomain" --set tls=external
lemon-application-97336
02/23/2023, 4:36 PMhelm upgrade --install rancher rancher-stable/rancher --namespace cattle-system --set hostname="myhost.mydomain" --set tls=external
Installation was ok, but the Ingress reports:
'nginx-ingress-controller Scheduled for sync'
In the rancher log I see the following errors:
[ERROR] Failed to connect to peer <wss://10.45.3.4/v3/connect> [local ID=10.45.4.5]: dial tcp 10.45.3.4:443: i/o timeout
I'm confused, I would have expected the internal connections were going to port 80, which is open.
Anybody can give me a hint, what could be wrong?
Thankslemon-noon-36352
03/03/2023, 10:10 AMbig-tiger-67977
03/04/2023, 2:48 PMbig-jordan-45387
03/07/2023, 8:52 AMabundant-gpu-72225
03/10/2023, 8:13 PMsparse-artist-18151
03/13/2023, 12:48 PMapiVersion: <http://metallb.io/v1beta1|metallb.io/v1beta1>
kind: IPAddressPool
metadata:
name: core-net-192.168.92.140-159
namespace: metallb-system
spec:
addresses:
- 192.168.94.140-192.168.94.159
---
apiVersion: <http://metallb.io/v1beta1|metallb.io/v1beta1>
kind: L2Advertisement
metadata:
name: metallb-pool
namespace: metallb-system
spec:
ipAddressPools:
- core-net-192.168.99.140-159
How can we enable kubeproxy ipvs on the management cluster? (at the moment i only have one cluster with workernodes added to the management cluster)
kubeproxy:
extra_args:
ipvs-scheduler: lc
proxy-mode: ipvs
do i need to deploy a separate cluster with workernodes for this?
Thanks a lot for your input, if im on the wrong channel for these questions please let me know, i apologize in advance
Posted in #rke2white-garden-41931
03/14/2023, 12:31 AMquiet-park-6213
03/21/2023, 8:14 AMquiet-park-6213
03/21/2023, 8:14 AMcreamy-pencil-82913
03/21/2023, 8:33 AMquiet-park-6213
03/21/2023, 10:02 AMcreamy-pencil-82913
03/21/2023, 5:59 PM