Yes. As far as I know, the cluster was imported using the recovery script below.
lively-zoo-99006
11/29/2024, 3:46 AM
restore-kube-config.sh
#!/bin/bash
help ()
{
echo ' ================================================================ '
echo ' --master-ip: specify a Master node IP; any K8S Master node IP will do.'
echo ' Usage example: bash restore-kube-config.sh --master-ip=1.1.1.1 '
echo ' ================================================================'
}
case "$1" in
-h|--help) help; exit;;
esac
if [[ $1 == '' ]]; then
help;
exit;
fi
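# Parse --key=value style arguments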
CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
case "$key" in
--master-ip) K8S_MASTER_NODE_IP=$value ;;
esac
done
# Get the Rancher Agent image
RANCHER_IMAGE=$( docker images --filter=label=io.cattle.agent=true |grep 'v2.' | \
grep -v -E 'rc|alpha|<none>' | head -n 1 | awk '{print $3}' )
if [[ -d /etc/kubernetes/ssl ]]; then
K8S_SSLDIR=/etc/kubernetes/ssl
else
echo 'Directory /etc/kubernetes/ssl does not exist'
exit 1
fi
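# Check (via the node kubeconfig) whether the full-cluster-state configmap exists in kube-system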
CHECK_CLUSTER_STATE_CONFIGMAP=$( docker run --rm --entrypoint bash --net=host \
-v $K8S_SSLDIR:/etc/kubernetes/ssl:ro $RANCHER_IMAGE -c '\
if kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
-n kube-system get configmap full-cluster-state | grep full-cluster-state > /dev/null; then \
echo 'yes'; else echo 'no'; fi' )
if [[ $CHECK_CLUSTER_STATE_CONFIGMAP != 'yes' ]]; then
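# No full-cluster-state configmap: recover the admin kubeconfig from the kube-admin secret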
docker run --rm --net=host \
--entrypoint bash \
-e K8S_MASTER_NODE_IP=$K8S_MASTER_NODE_IP \
-v $K8S_SSLDIR:/etc/kubernetes/ssl:ro \
$RANCHER_IMAGE \
-c '\
kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
-n kube-system \
get secret kube-admin -o jsonpath={.data.Config} | base64 --decode | \
sed -e "/^[[space]]*server:/ s_:.*_: \"https://${K8S_MASTER_NODE_IP}:6443\"_"' > kubeconfig_admin.yaml
if [[ -s kubeconfig_admin.yaml ]]; then
echo 'Recovery succeeded. Test with the following command:'
echo ''
echo "kubectl --kubeconfig kubeconfig_admin.yaml get nodes"
else
echo "kubeconfig 恢复失败。"
fi
else
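# full-cluster-state configmap present: extract the admin kubeconfig from it with jq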
docker run --rm --entrypoint bash --net=host \
-e K8S_MASTER_NODE_IP=$K8S_MASTER_NODE_IP \
-v $K8S_SSLDIR:/etc/kubernetes/ssl:ro \
$RANCHER_IMAGE \
-c '\
kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
-n kube-system \
get configmap full-cluster-state -o json | \
jq -r .data.\"full-cluster-state\" | \
jq -r .currentState.certificatesBundle.\"kube-admin\".config | \
sed -e "/^[[space]]*server:/ s_:.*_: \"https://${K8S_MASTER_NODE_IP}:6443\"_"' > kubeconfig_admin.yaml
if [[ -s kubeconfig_admin.yaml ]]; then
echo 'Recovery succeeded. Test with the following command:'
echo ''
echo "kubectl --kubeconfig kubeconfig_admin.yaml get nodes"
else
echo "kubeconfig 恢复失败。"
fi
fi
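For reference, invoking the script looks like this (the 1.1.1.1 address is the placeholder from the script's own help text; substitute a real master node IP):

bash restore-kube-config.sh --master-ip=1.1.1.1
kubectl --kubeconfig kubeconfig_admin.yaml get nodes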
lively-zoo-99006
11/29/2024, 3:48 AM
I don't know what version the cluster was on before the import; it was too long ago. My solution is to create a new cluster on a recent version and then re-create the services in the new cluster. This isn't a migration/upgrade that imports the old cluster, but it's the best option for now.
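If anyone needs to capture the old workloads before re-creating them, one rough approach (a sketch, assuming the kubeconfig_admin.yaml recovered by the script above still reaches the old cluster; adjust the resource types to your setup) is:

# Dump namespaced workload manifests from the old cluster for re-creation
kubectl --kubeconfig kubeconfig_admin.yaml \
  get deployments,statefulsets,daemonsets,services,configmaps -A -o yaml \
  > old-cluster-workloads.yaml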