faint-airport-83518
05/04/2022, 2:39 PM
gray-lawyer-73831
05/04/2022, 3:10 PM
faint-airport-83518
05/04/2022, 3:35 PM
gray-lawyer-73831
05/04/2022, 3:36 PM
faint-airport-83518
05/04/2022, 3:37 PM
gray-lawyer-73831
05/04/2022, 3:38 PM
faint-airport-83518
05/04/2022, 3:38 PM
gray-lawyer-73831
05/04/2022, 5:39 PM
faint-airport-83518
05/04/2022, 5:43 PM
gray-lawyer-73831
05/04/2022, 5:46 PM
correlation to k8s version, I’m guessing?
Yep! 😄
faint-airport-83518
05/05/2022, 5:19 PM
SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE
and I'm using a private registry.. I'm wondering if I need to put any imagePullSecrets somewhere?
gray-lawyer-73831
05/05/2022, 5:20 PM
faint-airport-83518
05/05/2022, 5:22 PM
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: rke2-system-upgrade-controller
  namespace: bigbang
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: rke2-system-upgrade-controller-repo
  path: .
  prune: true
  images:
    - name: rancher/system-upgrade-controller
      newName: private.registry.internal/rancher/system-upgrade-controller
      newTag: v0.9.1
  patches:
    - patch: |-
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: default-controller-env
        data:
          SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: private.registry.internal/rancher/kubectl:v1.23.6
      target:
        kind: ConfigMap
    - patch: |-
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: system-upgrade-controller
          namespace: system-upgrade
        spec:
          template:
            spec:
              imagePullSecrets:
                - name: private-registry
      target:
        kind: Deployment
    - target:
        kind: Plan
      patch: |-
        apiVersion: upgrade.cattle.io/v1
        kind: Plan
        metadata:
          name: whatever
        spec:
          version: v1.22.9-rke2r1
gray-lawyer-73831
05/05/2022, 5:24 PM
system-upgrade
and do you see the system-upgrade-controller deployed in that namespace?
faint-airport-83518
05/05/2022, 5:24 PM
gray-lawyer-73831
05/05/2022, 5:24 PM
faint-airport-83518
05/05/2022, 5:25 PM
gray-lawyer-73831
05/05/2022, 5:26 PM
faint-airport-83518
05/05/2022, 5:26 PM
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
  labels:
    rke2-upgrade: server
spec:
  concurrency: 1
  nodeSelector:
    matchExpressions:
      - {key: rke2-upgrade, operator: Exists}
      - {key: rke2-upgrade, operator: NotIn, values: ["disabled", "false"]}
      # When using k8s version 1.19 or older, swap control-plane with master
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  cordon: true
  # drain:
  #   force: true
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.23.1+rke2r2
---
# Agent plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
  labels:
    rke2-upgrade: agent
spec:
  concurrency: 1
  nodeSelector:
    matchExpressions:
      - {key: rke2-upgrade, operator: Exists}
      - {key: rke2-upgrade, operator: NotIn, values: ["disabled", "false"]}
      # When using k8s version 1.19 or older, swap control-plane with master
      - {key: node-role.kubernetes.io/control-plane, operator: NotIn, values: ["true"]}
  prepare:
    args:
      - prepare
      - server-plan
    image: rancher/rke2-upgrade
  serviceAccountName: system-upgrade
  cordon: true
  drain:
    force: true
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.23.1+rke2r2
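For reference, the nodeSelector in both plans only matches nodes carrying the rke2-upgrade label, so neither plan does anything until nodes are opted in. A hedged sketch of what a matching server node's metadata would look like (the node name is a placeholder; the role label is set by RKE2 itself):

```yaml
# Sketch only: the labels the plans' matchExpressions key on.
apiVersion: v1
kind: Node
metadata:
  name: my-server-node                                # hypothetical name
  labels:
    node-role.kubernetes.io/control-plane: "true"     # present on RKE2 servers
    rke2-upgrade: "true"                              # opt this node in to the plans
```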
gray-lawyer-73831
05/05/2022, 5:26 PM
faint-airport-83518
05/05/2022, 5:27 PM
gray-lawyer-73831
05/05/2022, 5:30 PM
faint-airport-83518
05/05/2022, 5:30 PM
gray-lawyer-73831
05/05/2022, 5:30 PM
faint-airport-83518
05/05/2022, 5:31 PM
gray-lawyer-73831
05/05/2022, 5:31 PM
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-server
  namespace: system-upgrade
  labels:
    rke2-upgrade: server
spec:
  concurrency: 1
  version: v1.22.8-rke2r1
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/master, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  cordon: true
  #drain:
  #  force: true
  upgrade:
    image: rancher/rke2-upgrade
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-agent
  namespace: system-upgrade
  labels:
    rke2-upgrade: agent
spec:
  concurrency: 2
  version: v1.22.8-rke2r1
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/master, operator: NotIn, values: ["true"]}
  serviceAccountName: system-upgrade
  prepare:
    image: rancher/rke2-upgrade
    args: ["prepare", "rke2-server"]
  drain:
    force: true
  upgrade:
    image: rancher/rke2-upgrade
faint-airport-83518
05/05/2022, 5:32 PM
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-controller-env
  namespace: system-upgrade
data:
  SYSTEM_UPGRADE_CONTROLLER_DEBUG: "false"
  SYSTEM_UPGRADE_CONTROLLER_THREADS: "2"
  SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS: "900"
  SYSTEM_UPGRADE_JOB_BACKOFF_LIMIT: "99"
  SYSTEM_UPGRADE_JOB_IMAGE_PULL_POLICY: "Always"
  SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: "rancher/kubectl:v1.21.9"
  SYSTEM_UPGRADE_JOB_PRIVILEGED: "true"
  SYSTEM_UPGRADE_JOB_TTL_SECONDS_AFTER_FINISH: "900"
  SYSTEM_UPGRADE_PLAN_POLLING_INTERVAL: "15m"
gray-lawyer-73831
05/05/2022, 5:33 PM
faint-airport-83518
05/05/2022, 5:36 PM
gray-lawyer-73831
05/05/2022, 5:37 PM
faint-airport-83518
05/05/2022, 5:37 PM
gray-lawyer-73831
05/05/2022, 5:37 PM
faint-airport-83518
05/05/2022, 5:38 PM
gray-lawyer-73831
05/05/2022, 5:39 PM
faint-airport-83518
05/05/2022, 5:40 PM
KubectlImage
- this guy. yeah I'm not seeing anywhere to add imagePullSecrets to whatever pod spec
gray-lawyer-73831
05/05/2022, 5:48 PM
faint-airport-83518
05/05/2022, 5:50 PM
gray-lawyer-73831
05/05/2022, 5:50 PM
faint-airport-83518
05/05/2022, 5:56 PM
system-upgrade-controller-5bd59b74fc-hnqn6 W0505 17:56:39.022311 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
system-upgrade-controller-5bd59b74fc-hnqn6 time="2022-05-05T17:56:39Z" level=info msg="Applying CRD plans.upgrade.cattle.io"
system-upgrade-controller-5bd59b74fc-hnqn6 time="2022-05-05T17:56:40Z" level=info msg="Starting /v1, Kind=Node controller"
system-upgrade-controller-5bd59b74fc-hnqn6 time="2022-05-05T17:56:40Z" level=info msg="Starting /v1, Kind=Secret controller"
system-upgrade-controller-5bd59b74fc-hnqn6 time="2022-05-05T17:56:40Z" level=info msg="Starting batch/v1, Kind=Job controller"
system-upgrade-controller-5bd59b74fc-hnqn6 time="2022-05-05T17:56:40Z" level=info msg="Starting upgrade.cattle.io/v1, Kind=Plan controller"
gray-lawyer-73831
05/05/2022, 5:57 PM
faint-airport-83518
05/05/2022, 6:01 PM
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
  labels:
    rke2-upgrade: server
spec:
  concurrency: 1
  nodeSelector:
    matchExpressions:
      - {key: rke2-upgrade, operator: Exists}
      - {key: rke2-upgrade, operator: NotIn, values: ["disabled", "false"]}
      # When using k8s version 1.19 or older, swap control-plane with master
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  cordon: true
  # drain:
  #   force: true
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.23.1+rke2r2
gray-lawyer-73831
05/05/2022, 6:02 PM
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
  labels:
    rke2-upgrade: server
spec:
  concurrency: 1
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  cordon: true
  # drain:
  #   force: true
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.23.1-rke2r2
version is confusing with these Plans
faint-airport-83518
05/05/2022, 6:03 PM
gray-lawyer-73831
05/05/2022, 6:06 PM
faint-airport-83518
05/05/2022, 6:12 PM
- name: SYSTEM_UPGRADE_PLAN_LATEST_VERSION
  value: v1.22.9-rke2r1
image: private.registry/rancher/rke2-upgrade:v1.22.9-rke2r1
imagePullPolicy: Always
name: upgrade
it just looks like it's stuck maybe
gray-lawyer-73831
05/05/2022, 6:48 PM
faint-airport-83518
05/05/2022, 6:56 PM
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  113s  default-scheduler  0/7 nodes are available: 3 node(s) had taint {CriticalAddonsOnly: true}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling  52s   default-scheduler  0/7 nodes are available: 3 node(s) had taint {CriticalAddonsOnly: true}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  66s                default-scheduler  Successfully assigned system-upgrade/apply-server-plan-on-vm-gvzonecil2zackrke2server000002--1-6qvnr to vm-gvzonecil2zackrke2server000002
  Normal   Pulling    25s (x3 over 67s)  kubelet            Pulling image "private.registry/rancher/kubectl:v1.23.6"
  Warning  Failed     24s (x3 over 66s)  kubelet            Failed to pull image "private.registry/rancher/kubectl:v1.23.6": rpc error: code = Unknown desc = failed to pull and unpack image "private.registry/rancher/kubectl:v1.23.6": failed to resolve reference "private.registry/rancher/kubectl:v1.23.6": pulling from host zarf.c1.internal failed with status code [manifests v1.23.6]: 401 Unauthorized
  Warning  Failed     24s (x3 over 66s)  kubelet            Error: ErrImagePull
  Normal   BackOff    13s (x3 over 66s)  kubelet            Back-off pulling image "private.registry/rancher/kubectl:v1.23.6"
  Warning  Failed     13s (x3 over 66s)  kubelet            Error: ImagePullBackOff
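The 401 above is containerd pulling the job image without credentials. Besides wiring imagePullSecrets into the job's pod spec, RKE2 supports node-level registry auth via containerd's registries.yaml. A hedged sketch, assuming the registry host and credentials (all three are placeholders to substitute):

```yaml
# /etc/rancher/rke2/registries.yaml on each node
# (restart the rke2-server / rke2-agent service after editing)
configs:
  "private.registry":        # assumed registry host
    auth:
      username: pull-user    # placeholder
      password: pull-pass    # placeholder
```

This makes every pull from that host authenticated on that node, so it also covers pods whose spec you cannot patch.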
gray-lawyer-73831
05/05/2022, 7:29 PM
faint-airport-83518
05/05/2022, 8:23 PM
gray-lawyer-73831
05/05/2022, 8:26 PM
faint-airport-83518
05/05/2022, 8:27 PM
gray-lawyer-73831
05/05/2022, 8:28 PM
faint-airport-83518
05/05/2022, 8:30 PM
# pods "apply-server-plan-on-vm-gvzonecil2zackrke2server000000--1-h4pnm" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds`, `spec.tolerations` (only additions to existing tolerations) or `spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
# core.PodSpec{
# ... // 11 identical fields
# NodeName: "vm-gvzonecil2zackrke2server000000",
# SecurityContext: &{HostNetwork: true, HostPID: true, HostIPC: true},
# - ImagePullSecrets: []core.LocalObjectReference{{Name: "private-registry"}},
# + ImagePullSecrets: nil,
# Hostname: "",
# Subdomain: "",
# ... // 14 identical fields
# }
#rke2
gray-lawyer-73831
05/05/2022, 8:43 PM
faint-airport-83518
05/05/2022, 8:43 PM
gray-lawyer-73831
05/05/2022, 8:44 PM
faint-airport-83518
05/05/2022, 8:45 PM
gray-lawyer-73831
05/05/2022, 8:46 PM
faint-airport-83518
05/05/2022, 9:05 PM
gray-lawyer-73831
05/05/2022, 9:19 PM
faint-airport-83518
05/05/2022, 9:42 PM
gray-lawyer-73831
05/05/2022, 9:42 PM
faint-airport-83518
05/05/2022, 9:42 PM
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: rke2-system-upgrade-controller
  namespace: bigbang
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: rke2-system-upgrade-controller-repo
  path: .
  prune: true
  images:
    - name: rancher/system-upgrade-controller
      newName: private.registry/rancher/system-upgrade-controller
      newTag: v0.9.1
  patches:
    - patch: |-
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: default-controller-env
        data:
          SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: private.registry/rancher/kubectl:v1.22.6
      target:
        kind: ConfigMap
    - patch: |-
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: system-upgrade-controller
          namespace: system-upgrade
        spec:
          template:
            spec:
              imagePullSecrets:
                - name: private-registry
      target:
        kind: Deployment
    - patch: |-
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: system-upgrade
          namespace: system-upgrade
        imagePullSecrets:
          - name: private-registry
      target:
        kind: ServiceAccount
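The ServiceAccount patch works because the service account admission controller merges a ServiceAccount's imagePullSecrets into every pod running as that account, and the upgrade job pods run as system-upgrade. The referenced secret still has to exist in that namespace; a sketch, with registry credentials left as placeholders:

```yaml
# Sketch: the dockerconfigjson secret the patch references.
# The base64 payload is elided; it is typically generated with
# `kubectl create secret docker-registry private-registry ...`.
apiVersion: v1
kind: Secret
metadata:
  name: private-registry
  namespace: system-upgrade
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config>
```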
gray-lawyer-73831
05/05/2022, 9:47 PM
faint-airport-83518
05/06/2022, 2:29 PM
drain evicting pod logging/logging-ek-es-master-0
drain evicting pod istio-system/passthrough-ingressgateway-7879ff64db-kh86m
drain evicting pod gatekeeper-system/gatekeeper-controller-manager-5bd878c895-4sbrp
drain error when evicting pods/"passthrough-ingressgateway-7879ff64db-kh86m" -n "istio-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
drain error when evicting pods/"gatekeeper-controller-manager-5bd878c895-4sbrp" -n "gatekeeper-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
drain error when evicting pods/"logging-ek-es-master-0" -n "logging" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
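Those evictions will keep retrying until the PodDisruptionBudgets allow them, for example once the single-replica workloads are scaled up or the PDBs are relaxed. If a plan must get past PDBs anyway, the Plan's drain spec mirrors kubectl drain's options; a hedged sketch, noting that field availability depends on your system-upgrade-controller version:

```yaml
# Sketch: fragment of a Plan spec, not a full manifest.
spec:
  drain:
    force: true
    disableEviction: true          # delete pods directly instead of using the
                                   # eviction API, bypassing PDBs -- use with care
    skipWaitForDeleteTimeout: 60   # don't wait indefinitely on pods already terminating
```

Bypassing a PDB defeats its purpose, so this is better treated as a last resort than a default.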