# rke2
c
Newer releases use hosts.d (configPath in the crictl info output) instead of the deprecated inline mirror configuration. https://github.com/k3s-io/k3s/issues/8972 https://github.com/k3s-io/k3s/pull/8973 As far as why you’re getting image pull errors, I’d probably check the containerd logs. You might also confirm that you’re not using a custom containerd config template that needs to be updated?
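For context, under the hosts.d layout containerd reads a per-registry hosts.toml from the directory named by configPath, and k3s/rke2 render one from registries.yaml. A minimal sketch of what such a file can look like (the path and mirror hostname below are illustrative assumptions, not taken from this thread):

```toml
# Illustrative path; on rke2 the generated files typically live under
# /var/lib/rancher/rke2/agent/etc/containerd/certs.d/<registry>/hosts.toml
server = "https://registry-1.docker.io"  # upstream to fall back to

[host."https://harbor.example.com"]      # mirror endpoint taken from registries.yaml
  capabilities = ["pull", "resolve"]
```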
g
@creamy-pencil-82913 Found the error: apparently it doesn't like duplicate names.
time="2024-03-07T13:30:23.340519705Z" level=error msg="failed to decode hosts.toml" error="failed to parse TOML: (24, 2): duplicated tables"
The process then tries the default registry, ghcr.io, which fails, and the pod stays in ImagePullBackOff.
We understand that duplicate names are not normal, but having the process error out is an interesting result.
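Presumably the duplicate endpoint was rendered into the generated hosts.toml as the same [host] table twice, which containerd's TOML parser rejects as a whole, so none of the mirrors get used. A sketch of the failing shape (hostnames illustrative):

```toml
server = "https://registry-1.docker.io"

[host."https://harbor.1.example.com"]
  capabilities = ["pull", "resolve"]

[host."https://harbor.LR.example.com"]
  capabilities = ["pull", "resolve"]

# The same endpoint listed twice in registries.yaml shows up as a second,
# identical table header here, which triggers "failed to parse TOML:
# duplicated tables" and invalidates the entire file.
[host."https://harbor.LR.example.com"]
  capabilities = ["pull", "resolve"]
```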
c
You have the same endpoint listed more than once?
You could open a bug with containerd, but I imagine they would WONTFIX it as proper behavior for invalid toml. We could look at detecting and warning when parsing registries.yaml, but at the end of the day you still need to fix it manually.
Did you accidentally put the same URL multiple times, instead of specifying your primary/secondary/tertiary mirror?
g
The file looked like this:
docker.io:
  endpoint:
    - "harbor.1.*.*"
    - "harbor.2.*.*"
    - "harbor.LR.*.*"
    - "harbor.3.*.*"
    - "harbor.LR.*.*"
quay.io:
  endpoint:
    - "harbor.1.*.*"
    - "harbor.2.*.*"
    - "harbor.LR.*.*"
    - "harbor.3.*.*"
    - "harbor.LR.*.*"
k8s.gcr.io:
  endpoint:
    - "harbor.1.*.*"
    - "harbor.2.*.*"
    - "harbor.LR.*.*"
    - "harbor.3.*.*"
    - "harbor.LR.*.*"
We have a server of last resort that gets appended to the end of all the registries as part of our deployment. Some users were adding that same server manually because they didn't see it listed, since our provisioning script hadn't run yet.
We realize this was our mistake, but the error and the failure to process any of the listed endpoints were a surprise.
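For comparison, a registries.yaml along these lines, with the last-resort server listed only once per registry (and under the top-level mirrors: key the file expects), should render a hosts.toml that containerd can parse; the ordering shown is an assumption about the intended priority:

```yaml
mirrors:
  docker.io:
    endpoint:
      - "harbor.1.*.*"
      - "harbor.2.*.*"
      - "harbor.3.*.*"
      - "harbor.LR.*.*"   # server of last resort, appended once by the provisioning script
  quay.io:
    endpoint:
      - "harbor.1.*.*"
      - "harbor.2.*.*"
      - "harbor.3.*.*"
      - "harbor.LR.*.*"
  k8s.gcr.io:
    endpoint:
      - "harbor.1.*.*"
      - "harbor.2.*.*"
      - "harbor.3.*.*"
      - "harbor.LR.*.*"
```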
c
Yeah, I wouldn’t have expected that either. Would you mind opening an issue describing what you’re doing and what happened? We can look at handling that better on the k3s side, since containerd itself handles it poorly.