careful-mouse-48712
02/20/2023, 4:59 PM

dazzling-chef-87126
02/21/2023, 10:04 AM

wide-magician-63081
02/21/2023, 3:42 PM
NotReady(1) [Bundle ebad]
scaledobject.keda.sh ebad-ma7/ebad-back error] Scaling is not performed because triggers are not active, Resource is Ready
scaledobject.keda.sh ebad-ma7/ebad-back modified {"spec":{"triggers":[{"metadata":{"desiredReplicas":"2","end":"35 14 * * *","start":"29 14 * * *","timezone":"Europe/Paris"},"type":"cron"}]}}
scaledobject.keda.sh ebad-ma7/ebad-front error] Scaling is not performed because triggers are not active, Resource is Ready
Is there a way to ignore this type of status? I think it's related to this issue: https://github.com/rancher/fleet/issues/937

mammoth-postman-10874
02/24/2023, 4:25 PM

ambitious-plastic-3551
02/27/2023, 5:50 PM

important-kitchen-32874
02/27/2023, 8:21 PM
GitRepo is mapped to clusters, as is the ClusterGroup concept, and both use label selectors if I'm not mistaken. I'm wondering: is there a doc or proposal for how Fleet manages the case where the set of clusters matching the label selector changes? In "normal" Kubernetes, e.g. a NodeSelector allows for transparent selection, movement, re-scheduling, etc. Is that also what happens in Fleet?

ambitious-plastic-3551
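For readers following along: the label-based targeting being discussed is declared on the GitRepo itself. A minimal sketch, assuming Fleet's v1alpha1 API (the repo name, path, and labels here are illustrative, not from the conversation):

```yaml
# Hypothetical GitRepo using a label selector to pick target clusters.
# When a cluster's labels change, it enters or leaves the matching set.
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: example-repo        # illustrative name
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples  # illustrative repo
  paths:
  - simple
  targets:
  - name: prod
    clusterSelector:
      matchLabels:
        env: prod           # illustrative label
```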
02/27/2023, 9:18 PM

important-kitchen-32874
02/27/2023, 10:20 PM
With a GitRepo that should select region=us-west,cloud=aws,machine-type=xxl or whatever, will I get instant failover if a cluster in that ClusterGroup goes down? Is there anything that balances out the usage of the group so that, as an admin providing these clusters, I have less maintenance to do?

salmon-train-47748
03/01/2023, 9:58 PM

steep-furniture-72588
03/03/2023, 10:58 AM

numerous-lighter-90852
03/08/2023, 9:11 AM

quick-sandwich-76600
03/08/2023, 4:50 PM

quick-sandwich-76600
03/08/2023, 4:51 PM

quick-sandwich-76600
03/08/2023, 4:56 PM

numerous-lighter-90852
03/09/2023, 8:07 AM

numerous-lighter-90852
03/09/2023, 8:30 AM
kubectl patch gitrepo/<repo name> -n fleet-local -p '{"spec": {"paused":false}}' --type merge
What do you think about it?

quick-sandwich-76600
03/09/2023, 9:09 AM

numerous-lighter-90852
03/09/2023, 9:11 AM

numerous-lighter-90852
03/09/2023, 9:13 AM

millions-pizza-50389
03/13/2023, 10:27 AM
# Update catalog CRDs
$ kubectl apply -f https://github.com/stashed/installer/raw/v2023.03.13/crds/stash-catalog-crds.yaml
# Update the helm repositories
$ helm repo update
# Upgrade Stash Community operator chart
$ helm upgrade stash appscode/stash \
  --version v2023.03.13 \
  --namespace kube-system \
  --set features.community=true \
  --set-file global.license=/path/to/the/license.txt

microscopic-knife-52274
03/13/2023, 1:36 PM
namespace: example-namespace
helm:
  releaseName: dfl
  chart: "oci://custom.registry.net/test/example"
  repo: ""
  version: "1.0.0"
targetCustomizations:
- name: dev
  helm:
    chart: "oci://custom.registry.net/test/example"
    version: "0.0.0-dev"
  clusterName: cluster-dev
On our cluster "cluster-dev" it tries to deploy version 1.0.0 instead of 0.0.0-dev. Do you know what could be wrong? Thank you in advance.

careful-mouse-48712
03/13/2023, 1:57 PM

careful-mouse-48712
03/13/2023, 1:58 PM
targetCustomizations:
- name: cluster-0530
  clusterName: cluster-0530
  yaml:
    overlays:
    - cluster-0530
- name: cluster-0531
  clusterName: cluster-0531
  yaml:
    overlays:
    - cluster-0531
- name: cluster-0532
  clusterName: cluster-0532
  yaml:
    overlays:
    - cluster-0532

careful-mouse-48712
03/13/2023, 1:58 PM
fleet test -t cluster-xyz . shows the correct manifests.

careful-mouse-48712
03/13/2023, 6:21 PM

nutritious-garage-22695
03/15/2023, 5:59 PM

flat-whale-67864
03/16/2023, 7:56 PM

numerous-lighter-90852
03/17/2023, 2:10 PM

calm-twilight-27465
03/20/2023, 11:36 PM

refined-analyst-8898
03/21/2023, 3:46 PM

refined-analyst-8898
03/21/2023, 3:46 PM