brash-zebra-92886
12/01/2022, 7:34 PM
/usr/lib/ca-certificates/update.d/99certbundle.run: line 21: /var/lib/ca-certificates/ca-bundle.pem.new: Read-only file system
early-rocket-98507
12/07/2022, 3:38 PM
Warning FailedMount 45s kubelet MountVolume.SetUp failed for volume "kube-api-access-k5tht" : chown c:\var\lib\kubelet\pods\6c04872e-2cf8-468c-93ad-a03574b7c9ea\volumes\kubernetes.io~projected\kube-api-access-k5tht\..2022_12_07_15_29_15.258699061\token: not supported by windows
Did something change with Fleet that Rancher doesn't update properly?
quiet-manchester-9915
12/07/2022, 4:40 PM
limited-potato-16824
12/09/2022, 10:47 AM
We tried to move our local cluster from fleet-local to fleet-default by following the instructions here:
https://fleet.rancher.io/troubleshooting#migrate-the-local-cluster-to-the-fleet-default-cluster
After we initialized the move, I could see this in the logs:
rancher-5677f59677-shbs7 rancher 2022/12/07 12:39:42 [ERROR] error syncing 'local': handler provisioning-cluster-create: failed to create fleet-default/local provisioning.cattle.io/v1, Kind=Cluster for provisioning-cluster-create local: admission webhook "rancherauth.cattle.io" denied the request: cluster name must be 63 characters or fewer, cannot be "local" nor of the form "c-xxxxx", requeuing
While this was happening, the local cluster had been removed from the fleet-local workspace, but for the reason above it did not show up in fleet-default. We managed to get the cluster back to fleet-local again after editing the object, but it would have been nice to have all the "Continuous Delivery" clusters in the same workspace. If you have any hints on how to make that migration successful, please share.
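For reference, the linked troubleshooting page performs this migration by switching the fleetWorkspaceName field on the management cluster object; a rough sketch of that step (field and resource names are taken from that doc, so verify them against your Rancher version):
# run against the Rancher management (local) cluster
kubectl patch clusters.management.cattle.io local --type merge \
  -p '{"spec":{"fleetWorkspaceName":"fleet-default"}}'
The webhook error above suggests the migration instead tried to recreate a provisioning Cluster named "local" in fleet-default, which the name validation rejects.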
billowy-computer-46613
12/12/2022, 7:15 AM
billowy-computer-46613
12/12/2022, 7:22 AM
dazzling-chef-87126
12/15/2022, 8:40 AM
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    field.cattle.io/projectId: c-pkz94:p-5n2wt
  name: c-providers
spec:
  finalizers:
    - kubernetes
But the projectId (i.e. `c-pkz94:p-5n2wt`) isn't consistent between Rancher clusters.
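One way around a per-cluster value like this is to template the annotation in a Helm-style bundle and override it per target in fleet.yaml; a rough sketch, where the selector label and values are assumptions for illustration:
defaultNamespace: c-providers
targetCustomizations:
  - name: cluster-a
    clusterSelector:
      matchLabels:
        management.cattle.io/cluster-display-name: cluster-a
    helm:
      values:
        projectId: c-pkz94:p-5n2wt   # this cluster's project ID
The namespace manifest would then reference the value, e.g. field.cattle.io/projectId: "{{ .Values.projectId }}".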
dazzling-chef-87126
12/15/2022, 11:56 AM
square-telephone-53524
12/19/2022, 6:53 PM
mammoth-postman-10874
12/19/2022, 9:51 PM
global.fleet.clusterLabels.management.cattle.io/cluster-display-name
mammoth-postman-10874
12/19/2022, 9:52 PM
square-telephone-53524
12/19/2022, 11:37 PM
square-telephone-53524
12/20/2022, 12:37 AM
billowy-apple-60989
12/20/2022, 12:47 PM
I tried adding helmChartInflationGenerator or helmCharts to the kustomization.yml, but they don't seem to get picked up.
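A possible explanation: kustomize only inflates helmCharts entries when invoked with --enable-helm (and with a helm binary on the path), which Fleet's built-in kustomize rendering may not do. For reference, a standalone kustomization.yaml using the field looks roughly like this (the chart here is just an example):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: ingress-nginx                               # example chart
    repo: https://kubernetes.github.io/ingress-nginx
    version: 4.1.1
    releaseName: ingress-nginx
    namespace: ingress-nginx
# only inflated by: kustomize build --enable-helm .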
dazzling-chef-87126
12/23/2022, 7:53 AM
Which layout is preferred:
• have a GitRepo per bundle?
- bundles/
  - bundle 1 <-- GitRepo
      fleet.yaml
  - bundle 2 <-- GitRepo
      fleet.yaml
• have a single GitRepo for a git repository, with many bundles under it?
- bundles/ <-- GitRepo
  - bundle 1
      fleet.yaml
  - bundle 2
      fleet.yaml
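For the second layout, one GitRepo can point at several bundle directories via spec.paths; a minimal sketch (repo URL and names are placeholders):
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: bundles                        # placeholder name
  namespace: fleet-default
spec:
  repo: https://example.com/org/repo   # placeholder repo URL
  paths:
    - "bundles/bundle 1"
    - "bundles/bundle 2"
Fleet also scans each path for nested fleet.yaml files, so a single path of bundles/ may be enough.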
steep-furniture-72588
12/24/2022, 3:29 AM
mysterious-lock-12214
01/02/2023, 8:43 PM
mammoth-postman-10874
01/02/2023, 11:13 PM
many-area-51777
01/09/2023, 4:26 PM
clever-mechanic-71254
01/16/2023, 8:23 AM
plain-refrigerator-80586
01/16/2023, 9:38 AM
Unable to build kubernetes objects from release manifest: resource mapping not found for name: "my-demo" namespace: "system" from "": no matches for kind "Application" in version "argoproj.io/v1alpha1" ensure CRDs are installed first
Here's what my repo looks like:
├── Chart.lock
├── charts
│   ├── argo-cd-5.13.7.tgz
│   └── ingress-nginx-4.1.1.tgz
├── Chart.yaml
├── fleet.yaml
├── templates
│   └── app-of-apps.yaml
└── values.yaml
fleet.yaml file:
defaultNamespace: argocd
targetCustomizations:
  - name: qa
    helm:
      values:
        replication: false
    clusterGroup: qa-clusters
Chart.yaml file:
apiVersion: v2
name: system-chart
description: A wrapper chart for argo CD
type: application
version: 0.8.5
dependencies:
  - name: argo-cd
    version: 5.13.7
    repository: https://argoproj.github.io/argo-helm
    condition: argo-cd.enabled
  - name: ingress-nginx
    version: 4.1.1
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
As mentioned on this page https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet#helm-chart-dependencies I ran helm dependencies build $chart to cover dependencies and added the result to my repo.
Can someone help me or give me some guidance?
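The "no matches for kind Application" error usually means the Argo CD CRDs are not on the target cluster yet when the wrapper chart's manifests are validated, since the app-of-apps template creates an argoproj.io/v1alpha1 Application. One pattern is to ship the CRDs as their own bundle and order the two with dependsOn in fleet.yaml; a rough sketch, where the bundle name is an assumption:
defaultNamespace: argocd
dependsOn:
  - name: argocd-crds   # hypothetical bundle that installs the Argo CD CRDs first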
nutritious-orange-38459
01/20/2023, 8:42 AM
rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "foo" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "admin-foo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "foo"
Any ideas what is going on?
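The error itself spells out what Helm needs in order to adopt the pre-existing namespace; one way forward is to add exactly those labels and annotations to the live object (values taken from the message above):
kubectl label namespace foo app.kubernetes.io/managed-by=Helm
kubectl annotate namespace foo meta.helm.sh/release-name=admin-foo
kubectl annotate namespace foo meta.helm.sh/release-namespace=foo
After that, the release should be able to take ownership of the namespace instead of refusing to import it.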
victorious-analyst-3332
01/20/2023, 3:34 PM
many-area-51777
01/22/2023, 3:47 PM
many-area-51777
01/22/2023, 4:04 PM
Does anyone run helm test in your CI? What does your setup look like? I'm curious to see how others approach this, if at all - especially how you integrate it with Fleet at the end (there aren't many examples out there).
wide-magician-63081
01/24/2023, 7:56 AM
orange-airplane-98016
01/24/2023, 10:35 PM
ancient-air-32350
01/25/2023, 9:50 PM
Can someone explain what managedChart is doing? I've got one cluster deployed with Fleet stuck at "ManagedChart generation is 2, but latest observed generation is", and I wonder how I could trigger the upgrade to generation 2. Also, the helm release rancher-monitoring-crd seems to be installed correctly on the downstream cluster.
average-waitress-96027
01/27/2023, 11:01 AM
helm -n cattle-fleet-system install --create-namespace --wait \
  fleet https://github.com/rancher/fleet/releases/download/v0.5.0/fleet-0.5.0.tgz
I get the following error:
Error: INSTALLATION FAILED: timed out waiting for the condition
Any suggestions, please?
Thank you.
Expected behaviour
To properly install fleet on Rancher Desktop.
Steps To Reproduce
1. Install fleet using the helm install command above
2. Error: INSTALLATION FAILED: timed out waiting for the condition
Environment
- Mac
- Rancher Desktop
Logs:
Running the install command with --debug gives the following output:
install.go:192: [debug] Original chart version: ""
install.go:209: [debug] CHART PATH: /Downloads/fleet-0.5.1.tgz
client.go:229: [debug] checking 17 resources for changes
client.go:512: [debug] Looks like there are no changes for ServiceAccount "gitjob"
client.go:512: [debug] Looks like there are no changes for ServiceAccount "fleet-controller"
client.go:512: [debug] Looks like there are no changes for ServiceAccount "fleet-controller-bootstrap"
client.go:512: [debug] Looks like there are no changes for ConfigMap "fleet-controller"
client.go:512: [debug] Looks like there are no changes for ClusterRole "gitjob"
client.go:512: [debug] Looks like there are no changes for ClusterRole "fleet-controller"
client.go:512: [debug] Looks like there are no changes for ClusterRole "fleet-controller-bootstrap"
client.go:512: [debug] Looks like there are no changes for ClusterRoleBinding "gitjob-binding"
client.go:512: [debug] Looks like there are no changes for ClusterRoleBinding "fleet-controller"
client.go:512: [debug] Looks like there are no changes for ClusterRoleBinding "fleet-controller-bootstrap"
client.go:512: [debug] Looks like there are no changes for Role "gitjob"
client.go:512: [debug] Looks like there are no changes for Role "fleet-controller"
client.go:512: [debug] Looks like there are no changes for RoleBinding "gitjob"
client.go:512: [debug] Looks like there are no changes for RoleBinding "fleet-controller"
client.go:512: [debug] Looks like there are no changes for Service "gitjob"
client.go:512: [debug] Looks like there are no changes for Deployment "gitjob"
client.go:512: [debug] Looks like there are no changes for Deployment "fleet-controller"
wait.go:66: [debug] beginning wait for 17 resources with timeout of 5m0s
ready.go:277: [debug] Deployment is not ready: cattle-fleet-system/gitjob. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: cattle-fleet-system/gitjob. 0 out of 1 expected pods are ready
...
ready.go:277: [debug] Deployment is not ready: cattle-fleet-system/gitjob. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: cattle-fleet-system/gitjob. 0 out of 1 expected pods are ready
Error: INSTALLATION FAILED: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:141
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571
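The timeout itself only says the gitjob Deployment never became ready; the underlying cause usually shows up on the pod. A couple of diagnostic commands (the label selector is an assumption, adjust it to whatever labels the pod actually carries):
kubectl -n cattle-fleet-system describe pod -l app=gitjob
kubectl -n cattle-fleet-system get events --sort-by=.lastTimestamp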
average-waitress-96027
01/27/2023, 4:00 PM
Failed to pull image "rancher/gitjob:v0.1.32-security1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/gitjob:v0.1.32-security1": failed to resolve reference "docker.io/rancher/gitjob:v0.1.32-security1": pulling from host registry-1.docker.io failed with status code [manifests v0.1.32-security1]: 403 Forbidden
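A 403 on the manifest often means the tag simply doesn't exist in the public registry rather than being an auth problem. One way to see which image tags the chart pins, so they can be overridden with --set if needed, is:
helm show values https://github.com/rancher/fleet/releases/download/v0.5.0/fleet-0.5.0.tgz
The exact value keys for the gitjob image are chart-specific, so confirm them against that output before overriding.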