# general
Ok we have at least three issues that could cause us problems joining Rancher server...
1. The agent pod is not obeying /etc/hosts on the node; added hostAliases to the pod spec (sketch after this list).
2. We are currently not running a version of kubernetes that our version of RKE supports; chalk it up to overzealous automation, we need to plod along more like a tortoise.
3. kubernetes 1.2.1 has a known issue getting stuck at "Waiting for API to become available."
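For item 1, roughly what the hostAliases addition looks like; the hostname, IP, and image tag here are placeholders, not our actual values:

```yaml
# Sketch only: hostAliases injects entries into the pod's /etc/hosts.
apiVersion: v1
kind: Pod
metadata:
  name: agent-hostalias-demo
  namespace: cattle-system
spec:
  hostAliases:
    - ip: "10.0.0.10"                # private endpoint IP (placeholder)
      hostnames:
        - "rancher.example.com"      # Rancher server hostname (placeholder)
  containers:
    - name: agent
      image: rancher/rancher-agent:v2.6.3   # placeholder tag
```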
Downgrade rke to a supported version, then...
If this all works, then we have successfully joined Rancher server over a single-IP private endpoint on an isolated network... if.
For number 2: somehow we ended up running rke 1.3.15, which does not support the k8s version hard-coded into our "manifests", 1.21.8. Every time a dev says "manifests" I get a Lovecraftian-protagonist shudder.
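The idea for the fix is to pin a kubernetes_version that the rke binary we settle on actually lists as supported (`rke config --list-version --all`). A cluster.yml sketch, with placeholder version string and node values:

```yaml
# cluster.yml sketch; the exact version string must be one listed by
# `rke config --list-version --all` for the rke binary in use.
kubernetes_version: "v1.21.8-rancher1-1"   # placeholder version string
nodes:
  - address: 10.0.0.21                     # placeholder node address
    user: rancher                          # placeholder SSH user
    role: [controlplane, etcd, worker]
```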
Ok, it didn't work. The Rancher agent is driving me batty. I would add a hostAlias to the deployment yaml pointing at the private endpoint, and the agent would use that to contact Rancher server initially; then it would redeploy and erase my alias from the deployment yaml, fall back to the DNS information, and end up on an IP we can't reach since the vnets are not peered. Peering the vnets worked, however my network engineer does not like that configuration.
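For the record, the patch I kept re-applying looked roughly like this, assuming the agent deployment is cattle-cluster-agent in cattle-system (IP and hostname are placeholders); Rancher would then redeploy the agent and wipe it right back out:

```yaml
# patch-hostaliases.yaml — applied with something like:
#   kubectl -n cattle-system patch deployment cattle-cluster-agent --patch-file patch-hostaliases.yaml
spec:
  template:
    spec:
      hostAliases:
        - ip: "10.0.0.10"              # private endpoint IP (placeholder)
          hostnames:
            - "rancher.example.com"    # Rancher server hostname (placeholder)
```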