square-engine-61315
08/16/2022, 11:40 AM
Please note that upgrades from experimental Dqlite to embedded etcd are not supported. If you attempt an upgrade it will not succeed and data will be lost.
How do I know which one I'm using?
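A quick way to check (a sketch, assuming the default k3s data directory):
# on a server node; these paths are the k3s defaults
ls /var/lib/rancher/k3s/server/db/
# an 'etcd' directory means embedded etcd; a 'state.dqlite' directory would indicate the old experimental Dqlite backend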
eager-librarian-82484
08/16/2022, 4:40 PM
brainy-postman-1566
08/16/2022, 11:35 PM
powerful-summer-42797
08/17/2022, 8:59 AM
nutritious-crayon-45180
08/18/2022, 3:04 AM
red-waitress-37932
08/18/2022, 4:19 PM
wonderful-spring-28306
08/19/2022, 7:59 AM
kubectl get no
Research shows me that I should start the kubelet with
--register-node=false
Is this possible with k3s, or do you have a better suggestion on how I could achieve this?
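k3s can pass arbitrary flags through to the kubelet via --kubelet-arg, so one thing to try (an untested sketch; whether the rest of k3s is happy with an unregistered node is a separate question):
# on the CLI...
k3s agent --kubelet-arg=register-node=false ...
# ...or in /etc/rancher/k3s/config.yaml
kubelet-arg:
  - register-node=false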
red-musician-8168
08/19/2022, 8:27 PM
hundreds-state-15112
08/19/2022, 10:00 PM
creamy-pencil-82913
08/20/2022, 5:04 AM
faint-airport-39912
08/20/2022, 7:29 AM
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint='postgres://k3s:password@192.168.1013:5432/k3s' --token='K102xxxxx'
I'm using this to add a new master node, but it is giving me the following error, so can someone look into it and help?
FYI, this highly available k3s cluster was configured a year back. `
systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
systemd[1]: k3s.service: Scheduled restart job, restart counter is at 97783.
systemd[1]: Stopped Lightweight Kubernetes.
systemd[1]: Starting Lightweight Kubernetes...
sh[23500]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
sh[23500]: /bin/sh: 1: /usr/bin/systemctl: not found
k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Starting k3s v1.24.3+k3s1 (990ba0e8)"
k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Configuring postgres database connection pooling: maxIdleConns=2, maxOpenConns=
k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Database tables and indexes are up to date"
k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Kine available at unix://kine.sock"
k3s[23512]: time="2022-08-11T07:19:00Z" level=fatal msg="starting kubernetes: preparing server:
bootstrap data already found and encryp
"
systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: k3s.service: Failed with result 'exit-code'.
`
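That fatal error often points at a token mismatch: the bootstrap data stored in postgres was encrypted with the cluster's original token, so a joining server has to use that exact value (a sketch, assuming an existing server node is still reachable):
# read the original cluster token from a working server node
sudo cat /var/lib/rancher/k3s/server/token
# then re-run the install command with --token set to exactly that value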
chilly-telephone-51989
08/20/2022, 8:22 PM
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: xp
  labels:
    service: postgres
    type: database
spec:
  type: ClusterIP
  selector:
    service: postgres
    type: database
  ports:
    - name: client
      protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres
  namespace: xp
  labels:
    service: postgres
    type: database
subsets:
  - addresses:
      - ip: 172.19.0.2
    ports:
      - name: client
        port: 5432
        protocol: TCP
I tried verifying the service using busybox, but nslookup and ping both fail for postgres.xp; pinging the IP 172.19.0.2 directly works just fine.
How do I expose the IP of an external service inside the cluster?
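Two things worth noting here (a sketch, not a drop-in fix): a ClusterIP is virtual and typically won't answer ping, so test with a TCP connection instead; and when a Service has a selector, the endpoints controller manages its Endpoints and will replace manually created ones, so external backends are usually wired up with a selector-less Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: xp
spec:
  # no selector: the manually created Endpoints object below is left alone
  ports:
    - name: client
      protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres   # must match the Service name
  namespace: xp
subsets:
  - addresses:
      - ip: 172.19.0.2
    ports:
      - name: client
        port: 5432
        protocol: TCP
A quick connectivity test from a busybox pod would then be something like: nc -vz postgres.xp.svc.cluster.local 5432 (or just point a psql client at it).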
billowy-needle-49036
08/20/2022, 9:46 PM
chilly-telephone-51989
08/21/2022, 6:10 PM
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: xplore
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 80
kind-nightfall-56861
08/22/2022, 7:18 AM
many-church-13850
08/23/2022, 4:27 AM
elegant-article-67113
08/23/2022, 12:41 PM
melodic-hamburger-23329
08/25/2022, 4:47 AM
refined-toddler-64572
08/25/2022, 8:02 PM
K3s (1.24.3+k3s1) on Ubuntu 22.04.1 with ZFS (2.1.4) and external containerd (1.5.9). Works GREAT, no day-to-day issues. What I've been struggling with is when K3s uses an external containerd: kubelet / cadvisor is a bit wonky with metrics, in that the image= and container= labels are missing, which breaks many dashboards. I can't tell if this is a K3s / kubelet issue, a cadvisor issue, or a containerd issue, so I'm not sure where to seek advice. Kube-Prometheus-Stack is deployed and works well, as long as a dashboard doesn't try to use something like container_cpu_usage_seconds_total{image!=""}, which returns an empty set because the image label is missing; the same query does return data when K3s uses the bundled containerd. Suggestions welcomed.
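One way to narrow down where the labels disappear (a hedged diagnostic, assuming kubectl access to the node's kubelet proxy):
# pull cadvisor metrics straight from the kubelet and see whether image=/container= are set at the source
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/cadvisor" | grep container_cpu_usage_seconds_total | head
# if the labels are already empty here, it's the kubelet/cadvisor-to-containerd side rather than Prometheus relabeling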
creamy-river-81697
08/25/2022, 8:50 PM
melodic-hamburger-23329
08/26/2022, 1:11 AMnerdctl build
inside a container running in k3s, what should I do? k3s doesn’t bundle buildkitd I think, so I guess I need to set up buildkitd manually. k3s bundles containerd, so I’m not quite sure how this manual should be applied: https://github.com/containerd/nerdctl/blob/master/docs/build.mdflat-fish-63205
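For what it's worth, one common pattern (a sketch with assumed names and defaults, not k3s-specific) is to run buildkitd as its own Deployment and point nerdctl at it over TCP via BUILDKIT_HOST:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:latest
          args: ["--addr", "tcp://0.0.0.0:1234"]
          securityContext:
            privileged: true  # rootless buildkit is also possible but needs extra setup
---
apiVersion: v1
kind: Service
metadata:
  name: buildkitd
spec:
  selector:
    app: buildkitd
  ports:
    - port: 1234
      targetPort: 1234
In the pod that runs the build, set BUILDKIT_HOST=tcp://buildkitd.<namespace>:1234 before running nerdctl build (or use buildctl directly); the build then happens inside the buildkitd pod, so push the result to a registry rather than relying on the node's containerd image store.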
flat-fish-63205
08/26/2022, 12:26 PM
while true; do kubectl exec -it <pod_name> /bin/ls; done
Running the above command gives an intermittent error:
Error from server: error dialing backend: EOF
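"error dialing backend" means the apiserver could not reach the kubelet on that pod's node, so a hedged first pass is the server-to-node path rather than the pod itself:
# which node hosts the pod, and is its kubelet port reachable from the server node(s)?
kubectl get pod <pod_name> -o wide
nc -vz <node_ip> 10250    # kubelet; intermittent failures here point at network/LB flapping
# scan k3s logs on the server and that agent for tunnel/connection errors
journalctl -u k3s --since "1 hour ago" | grep -iE "tunnel|error" | tail
journalctl -u k3s-agent --since "1 hour ago" | grep -iE "tunnel|error" | tail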
clever-air-65544
08/26/2022, 3:46 PM
stale-fish-49559
08/26/2022, 4:45 PM
Kubernetes API connection failure: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: connect: network is unreachable
Any ideas how to approach this problem? DNS cache issue, flannel?
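"network is unreachable" on a dial to 10.43.0.1 points at routing / kube-proxy / CNI state on that node rather than DNS, since the dial is already to an IP (a hedged first pass, assuming the default service CIDR and flannel backend):
# on the affected node
ip route                           # default route present? 10.42.x.x pod routes present?
ip addr show flannel.1             # k3s's embedded flannel VXLAN interface, if the default backend is in use
kubectl get endpoints kubernetes   # 10.43.0.1 should map to the server's real IP on port 6443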
prehistoric-diamond-4224
08/27/2022, 11:14 AM
Our traefik v1 still uses *v1beta1.Ingress, which was removed in 1.22.
Is there a traefik v1 version that I can upgrade to that supports v1.Ingress?
I'd rather not update traefik to v2 since I'd have to migrate all of our ingresses, many of which are managed by helm charts.
Would nginx be compatible with standard Ingress resources? Is a migration from traefik v1.81 to nginx ingress feasible without modifying existing ingress resources?
Thank you
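On the migration-cost point, the kubectl-convert plugin can mechanically rewrite old manifests to the served API version (a sketch; controller-specific annotations still need a manual review, and helm-managed ingresses are better updated via their charts):
kubectl convert -f old-ingress.yaml --output-version networking.k8s.io/v1 > new-ingress.yaml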
melodic-hamburger-23329
08/29/2022, 3:17 AM
stop k3s => replace k3s binary => start k3s
enough?
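Roughly, yes: that is the documented manual-upgrade path for a binary install. A sketch of the two common variants:
# option A: re-run the install script pinned to the target release
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z+k3s1 sh -
# option B: swap the binary by hand
systemctl stop k3s
cp ./k3s /usr/local/bin/k3s && chmod +x /usr/local/bin/k3s
systemctl start k3s
# (on agents the unit is k3s-agent; taking a datastore/etcd snapshot first is cheap insurance)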
average-monitor-43003
08/29/2022, 4:31 PM
+ echo 'Installing helm_v3 chart'
+ helm_v3 install --set-string global.systemDefaultRegistry= traefik https://10.43.0.1:443/static/charts/traefik-10.19.300.tgz --values /config/values-01_HelmChart.yaml --values /config/values-10_HelmChartConfig.yaml
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
k3s-1.24.3
kube-system pod/helm-install-traefik-crd-f8qps 0/1 Completed 0 18m
kube-system pod/helm-install-traefik-q76r7 0/1 CrashLoopBackOff 5 (63s ago) 4m32s
kube-system job.batch/helm-install-traefik-crd 1/1 14m 18m
kube-system job.batch/helm-install-traefik 0/1 4m35s 4m39s
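"cannot re-use a name that is still in use" generally means a previous traefik release record is still stored in the cluster; a hedged way to confirm (helm 3 keeps releases as labelled secrets):
helm -n kube-system list --all
kubectl -n kube-system get secrets -l owner=helm,name=traefik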
gifted-morning-94496
08/29/2022, 4:54 PM
microscopic-smartphone-81961
08/29/2022, 5:57 PM
melodic-hamburger-23329
08/30/2022, 3:02 AM
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin
    namespace: kube-system
`kubectl get clusterrole cluster-admin -oyaml`:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2022-08-29T05:57:20Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "72"
  uid: f381f7e3-9f54-4da5-bcc4-39745a2e8bbd
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
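If the point of that ServiceAccount is to get an admin token out of it (a guess at the intent): on 1.24+ the token Secret is no longer created automatically, so request one explicitly, for example:
kubectl -n kube-system create token admin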