# fleet
m
I think I found a way to do it by putting the manifests in a separate folder and adding it to the paths of the git repo. Unfortunately, it seems to apply the manifest before the helm chart has run and gets an error since the CRDs haven't been installed yet on a fresh cluster.
b
check the `dependsOn` feature of Fleet
you can ensure the manifest `dependsOn` the helm chart install first
m
Is there a way to specify that with a regular manifest? The `yaml` seems to be specific to overlays.
b
I should say more specifically: `bundleB` (manifest) dependsOn `bundleA` (helm chart) running first
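As a sketch (bundle names here are placeholders, not from the actual repo), the manifest bundle's fleet.yaml would declare the dependency like this:

```yaml
# fleet.yaml in the manifest bundle's folder ("bundleB").
# "bundleA" is a placeholder for the Helm chart bundle's name.
dependsOn:
  - name: bundleA
```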
I wrote something about it a ways back — https://dev.to/flrichar/k3s-upgrades-with-fleet-1cd9
m
Interesting
So `targetCustomizations` is how you specify what to do with regular manifests, then?
b
yah, you specify values/options for helm or kustomize, but with manifests (regular yaml) it would need to be an overlay
per-target (cluster) customization
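For illustration, a hedged sketch of per-target customization in a fleet.yaml for a Helm bundle (the selector labels and values below are made up, not from the conversation):

```yaml
# Hypothetical fleet.yaml: override Helm values for clusters
# matching a label selector.
helm:
  releaseName: cert-manager
targetCustomizations:
  - name: staging-clusters
    clusterSelector:
      matchLabels:
        env: staging
    helm:
      values:
        replicaCount: 1
```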
m
How would that work if the same manifest doesn't change between clusters?
There's nothing cluster-specific about the ClusterIssuer in this case
b
then you could use the same bundle (with the manifest) on multiple clusters with no customizations, yea?
m
right
Hmmm, tried refactoring the helm chart and manifests into separate folders, with a simple fleet.yaml for the manifests that has a dependsOn for the helm folder, and now it fails with a cryptic error message:
no bundles matching labels fleet.cattle.io/bundle-name=fleet-cert-manager-operator,fleet.cattle.io/bundle-namespace=fleet-default in namespace fleet-default
Oh weird - it removed "fleet-" from the repo name
OH! It's the repo-name as in the GitRepo object, not the actual name of the git repo
Huzzah! It works!
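For reference, a hedged sketch of the GitRepo object being discussed: Fleet derives bundle names from this object's `metadata.name` plus each path, not from the git repository's own name (the names and repo URL below are placeholders):

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: cert-manager-operator   # this name, not the repo name, feeds the bundle name
  namespace: fleet-default
spec:
  repo: https://example.com/org/cert-manager-fleet.git
  paths:
    - helm
    - manifests
```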
FYI, I didn't need to do targetCustomizations, but your blog post got me on the right track with `dependsOn`. This is the fleet.yaml in the manifests folder that got it working:
```yaml
defaultNamespace: cert-manager
dependsOn:
  - name: cert-manager-lsit-operator
```
It created the ClusterIssuer, but it complains that the ClusterIssuer object doesn't have a namespace.
b
that's weird, ClusterIssuer would be a cluster-wide issuer, yea?
as opposed to the namespaced Issuer
m
Correct
So it doesn't take a namespace. Fleet complains that it is in namespace '' instead of default
ClusterIssuer "letsencrypt-issuer" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "default"
b
this is telling me a helm controller thinks that ClusterIssuer belongs in the default namespace
like a prior helm run or something
m
But this is the standalone manifest that isn't part of helm
It seems like fleet might be treating this separate bundle as a helm release, too?
b
I’d double check the manifest
b
yea I use cert-manager a lot, I have a ClusterIssuer called cf-dns01 for cloudflare’s dns
no namespace under metadata
I don’t think the problem is with the Object, I think the problem is with the helm controller wanting ownership metadata
m
I think I might have figured it out
Yup - you're right - I needed to add the helm labels/annotations referenced in the error
```yaml
labels:
  app.kubernetes.io/managed-by: Helm
annotations:
  meta.helm.sh/release-name: cert-managet-lsit-manifests
  meta.helm.sh/release-namespace: cert-manager
```
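Put together, those labels/annotations sit under the ClusterIssuer's metadata. A sketch assuming a Let's Encrypt ACME issuer — the `spec` below is purely illustrative; only the metadata values come from the conversation:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer   # cluster-scoped: no namespace field
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: cert-managet-lsit-manifests
    meta.helm.sh/release-namespace: cert-manager
spec:
  # Illustrative ACME config — not from the chat.
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```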
b
excellente
m
Many thanks!
b
I recognized your name from masto
lol
m
Hah! Good find!