incalculable-painting-771  11/16/2022, 11:56 PM

cool-truck-28488  11/17/2022, 1:34 PM

high-alligator-99144  11/17/2022, 8:03 PM

gifted-breakfast-73755  11/18/2022, 6:44 PM
2022/11/17 22:09:38 [ERROR] cluster [c-jpmvb] provisioning: [[network] Host [10.10.50.53] is not able to connect to the following ports: [10.10.50.44:2379]. Please check network policies and firewall rules]
I created a cluster a few months ago with the same node template and RKE template and did not run into this issue. The main difference is that when https://releases.rancher.com/install-docker/20.10.sh ran months ago for the previous cluster it installed Docker Engine 20.10.12, whereas it installed 20.10.17 in the new cluster (presumably the script has been updated since). I was able to work around the issue by adding iptables=false to the Docker engine options in my node template, but based on the Docker documentation, disabling that is not recommended. Does anyone know how I can get this working without disabling iptables?
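For reference, a minimal sketch of the workaround described above, written as the key/value engine options a node template accepts (the exact field name is an assumption here, e.g. engine_opt as in the Terraform rancher2 node template); since the provisioning error points at etcd port 2379, the alternative to disabling iptables would presumably be making sure 10.10.50.44:2379 is reachable from 10.10.50.53 at the host-firewall level instead.

# Hedged sketch only: the iptables=false workaround from the message above,
# roughly equivalent to {"iptables": false} in /etc/docker/daemon.json.
# As noted above, the Docker documentation recommends against disabling this.
engine_opt:
  iptables: "false"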
incalculable-painting-771  11/21/2022, 9:41 AM

polite-king-74071  11/24/2022, 1:01 PM

rich-shoe-36510  11/25/2022, 11:43 AM

limited-eye-44568  11/28/2022, 11:41 PM

dazzling-insurance-3854  12/02/2022, 6:19 PM
cannot proceed with upgrade of controlplane since 1 host(s) cannot be reached prior to upgrade
The cluster was provisioned with Terraform and RKE. The problem seems to have been triggered by a downscale happening at the same time the cluster was being upgraded. I have tried clearing the node from the CRDs, updating the state manually, and recreating a node, but nothing has worked since then: nodes can't register and the upgrade is stuck. Any ideas on how I can keep debugging this? Is there a way to deregister and re-register the cluster in Rancher to see if that helps?
magnificent-potato-64164  12/06/2022, 4:07 PM

bright-winter-60933  12/14/2022, 1:15 PM
cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
any suggestions?
E1214 11:56:17.881602 1 main.go:330] Error registering network: failed to configure interface flannel-v6.1: failed to ensure v6 address of interface flannel-v6.1: failed to add v6 IP address 2001:cafe:42::/56 to flannel-v6.1: permission denied
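For context, a minimal sketch of where the two ranges quoted above typically live in an RKE cluster.yml (key names follow the standard cluster.yml layout; whether the bundled flannel/Canal version accepts dual-stack values is part of the open question here, so treat this only as a sketch):

# Hedged sketch: dual-stack pod and service ranges as cluster.yml options.
# The "permission denied" above suggests the CNI pod could not add the v6
# address, not necessarily that these values are malformed.
services:
  kube-controller:
    cluster_cidr: "10.42.0.0/16,2001:cafe:42:0::/56"
    service_cluster_ip_range: "10.43.0.0/16,2001:cafe:42:1::/112"
  kube-api:
    service_cluster_ip_range: "10.43.0.0/16,2001:cafe:42:1::/112"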
acceptable-soccer-28720  12/20/2022, 2:35 PM

creamy-animal-16878  12/21/2022, 1:37 PM
Error from server (InternalError): an error on the server ("unable to create impersonator account: error setting up impersonation for user user-6j58n: failed to get secret for service account: cattle-impersonation-system/cattle-impersonation-user-6j58n, error: timed out waiting for the condition") has prevented the request from succeeding (get nodes)
There are 3 nodes with "all roles", plus 1 worker.
"current-context" in the kubeconfig points to the FQDN of the Rancher management cluster. If I change it to point to one of the RKE cluster nodes instead, it always works, for all 3 nodes.
In the "cattle-impersonation-system" namespace on the RKE cluster there is a secret "cattle-impersonation-user-6j58n-token", but no secret without the "-token" suffix. Help? 🙂
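To illustrate what switching the context means here, a minimal hedged sketch of a Rancher-generated kubeconfig containing both the Rancher-proxied entry and a direct-to-node entry (cluster name, FQDN, IP, and user below are placeholders, not taken from the thread):

# Placeholder names and addresses; only the shape matters.
apiVersion: v1
kind: Config
clusters:
- name: mycluster
  cluster:
    server: https://rancher.example.com/k8s/clusters/c-xxxxx   # proxied through Rancher (hits the impersonation error)
- name: mycluster-node1
  cluster:
    server: https://10.0.0.11:6443                             # straight to one of the RKE nodes (works)
contexts:
- name: mycluster
  context:
    cluster: mycluster
    user: mycluster
- name: mycluster-node1
  context:
    cluster: mycluster-node1
    user: mycluster
current-context: mycluster
users:
- name: mycluster
  user:
    token: <redacted>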
colossal-london-26937  12/29/2022, 8:11 AM
With node pools the machines get a hostname of the form <the name of the node pool>-<machine number>. However, for our cloud provider we must not override the hostname. Is there a way to disable the hostname override when using node pools? I know about hostname_override in the YAML, but I don't see how to use it with node pools. Thank you 🙂
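For reference, hostname_override is a per-node key in a statically declared cluster.yml nodes list, which is why it has no obvious home once node pools generate the machine entries; a hedged sketch with placeholder values:

# Hedged sketch: only applies to nodes declared statically like this.
nodes:
  - address: 10.0.0.11                 # placeholder address
    user: ubuntu                       # placeholder SSH user
    role: [controlplane, etcd, worker]
    hostname_override: my-custom-name  # the per-node key referred to above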
future-monkey-15944  12/29/2022, 3:49 PM

acoustic-afternoon-14446  01/04/2023, 11:19 AM
I upgraded a cluster with rke up. All nodes upgraded fine, but now the etcd containers are continuously panicking on all nodes:
{"level":"panic","ts":"2023-01-04T11:13:47.673Z","caller":"rafthttp/transport.go:346","msg":"unexpected removal of unknown remote peer","remote-peer-id":"d88f54ed22afab7e","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/rafthttp.(*Transport).removePeer\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/api/rafthttp/transport.go:346\ngo.etcd.io/etcd/server/v3/etcdserver/api/rafthttp.(*Transport).RemovePeer\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/api/rafthttp/transport.go:329\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).applyConfChange\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:2301\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).apply\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:2133\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).applyEntries\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:1357\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).applyAll\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:1179\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).run.func8\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:1111\ngo.etcd.io/etcd/pkg/v3/schedule.(*fifo).run\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/pkg/schedule/schedule.go:157"}
panic: unexpected removal of unknown remote peer
This is happening on all 3 etcd nodes. I found related information on the etcd GitHub, but there everyone talks about removing nodes and adding specific startup options. Since this is running in Docker and I have no idea where the config is, I am unclear on how to proceed.
If someone could please help me out here; we now have a field cluster that is unreachable and I have no idea what to do...
acceptable-soccer-28720  01/05/2023, 10:52 AM

agreeable-egg-60021  01/05/2023, 11:49 AM

dry-businessperson-74633  01/08/2023, 6:51 PM

wide-easter-7639  01/10/2023, 10:41 AM

many-sunset-84595  01/12/2023, 9:41 AM

many-sunset-84595  01/12/2023, 12:16 PM

wide-easter-7639  01/13/2023, 8:31 AM

bitter-account-51613  01/13/2023, 1:40 PM

gray-teacher-31467  01/13/2023, 9:34 PM

microscopic-diamond-94749  01/20/2023, 9:58 AM
rke has the bastion option, but it doesn't seem to work from Rancher with RKE Provisioning.
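For reference, the standalone rke option being referred to is the top-level bastion_host block in cluster.yml; whether Rancher's RKE provisioning exposes an equivalent is exactly the open question above. A hedged sketch with placeholder values:

# Hedged sketch: standalone RKE cluster.yml bastion/jump host settings.
bastion_host:
  address: bastion.example.com   # placeholder
  user: ubuntu                   # placeholder
  port: 22
  ssh_key_path: ~/.ssh/id_rsa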
some-wall-92554  01/20/2023, 12:09 PM

victorious-analyst-3332  01/20/2023, 3:34 PM

clever-insurance-23287  01/20/2023, 8:42 PM

clever-insurance-23287  01/20/2023, 8:43 PM