# k3s
t
@fast-garage-66093 Answered this in the Rancher Desktop channel before I moved the message here. The answer is that no scanning is done before deployment. So the mystery remains: the MySQL 5.5 image won’t start in K3s even though it runs fine in Docker, while the 8.0.x image starts fine in K3s.
c
can you provide an example pod spec?
t
Here’s a manifest I used to test this in isolation from the workshop content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-db
  labels:
    name: demo-db
    component: db
spec:
  selector:
    matchLabels:
      name: demo-db
      component: db
  replicas: 1
  template:
    metadata:
      labels:
        name: demo-db
        component: db
    spec:
      containers:
        - name: demo-db
          image: dev.local/demo-mysql:5.5.45
          imagePullPolicy: Never
          volumeMounts:
            - name: demo-db-volume
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_DATABASE
              value: registry
            - name: MYSQL_PASSWORD
              value: admin
            - name: MYSQL_ROOT_PASSWORD
              value: root+1
            - name: MYSQL_USER
              value: admin
          ports:
            - containerPort: 3306
      volumes:
        - name: demo-db-volume
          persistentVolumeClaim:
            claimName: demo-db-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-db-pv-claim
  labels:
    app: demo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: demo-db
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    name: demo-db
```
This always fails, ending up in CrashLoopBackOff. The MySQL 8.0.29 version just changes the container image line:
```yaml
containers:
  - name: demo-db
    image: dev.local/demo-mysql:8.0.29
```
The dead-simple Dockerfile used to build these two images is:
```dockerfile
#FROM mysql:5.5.45
FROM mysql:8.0.29

COPY scripts /docker-entrypoint-initdb.d/

ENTRYPOINT ["/entrypoint.sh"]
CMD ["mysqld"]
```
(I initially tried to diagnose the issue by commenting out the COPY line, but the container still wouldn’t start.)
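(For anyone replicating this: a minimal sketch of how the image might be built and deployed under Rancher Desktop’s containerd runtime, where images built into the `k8s.io` namespace are visible to K3s without a registry. The manifest filename here is an assumption.)
```console
# Build the custom image into the namespace K3s pulls from, so
# imagePullPolicy: Never can resolve dev.local/demo-mysql locally.
nerdctl --namespace k8s.io build -t dev.local/demo-mysql:5.5.45 .

# Apply the manifest (filename assumed) and watch the pod come up.
kubectl apply -f demo-db.yaml
kubectl get pods -w
```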
c
does the `mysql:5.5.45` image work fine, and it’s just your image that doesn’t work?
having the example use your private image with an imagePullPolicy of Never makes it kinda hard to replicate…
This example works fine for me, so I suspect something in your custom entrypoint is broken:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-db
  namespace: default
  labels:
    name: demo-db
    component: db
spec:
  selector:
    matchLabels:
      name: demo-db
      component: db
  replicas: 1
  template:
    metadata:
      labels:
        name: demo-db
        component: db
    spec:
      containers:
        - name: demo-db
          image: docker.io/library/mysql:5.5.45
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: demo-db-volume
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_DATABASE
              value: registry
            - name: MYSQL_PASSWORD
              value: admin
            - name: MYSQL_ROOT_PASSWORD
              value: root+1
            - name: MYSQL_USER
              value: admin
          ports:
            - containerPort: 3306
      volumes:
        - name: demo-db-volume
          persistentVolumeClaim:
            claimName: demo-db-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-db-pv-claim
  namespace: default
  labels:
    app: demo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
```
```console
brandond@dev01:~/suc-test$ kubectl get node -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION          CONTAINER-RUNTIME
k3s-server-1   Ready    control-plane,master   33m   v1.24.2+k3s2   172.17.0.2    <none>        K3s dev    5.17.4-051704-generic   containerd://1.6.6-k3s1
brandond@dev01:~/suc-test$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
demo-db-7588b845c5-m78nd   1/1     Running   0          102s
```
t
Of course it would work for you. 🙂
```console
% kubectl get node -o wide           
NAME                   STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
lima-rancher-desktop   Ready    control-plane,master   5h28m   v1.23.7+k3s1   192.168.205.2   <none>        Alpine Linux v3.15   5.15.40-0-virt   docker://20.10.16
% kubectl get pod
NAME                      READY   STATUS             RESTARTS      AGE
demo-db-8bc96f44d-25vbq   0/1     CrashLoopBackOff   3 (41s ago)   89s
```
@creamy-pencil-82913 I tried the manifest you supplied that pulls the MySQL image from Docker Hub. Unfortunately, it still fails, so I don’t think it’s the custom Dockerfile entrypoint:
```console
% kubectl get all         
NAME                           READY   STATUS             RESTARTS      AGE
pod/demo-db-6b545c4c57-nhfnm   0/1     CrashLoopBackOff   4 (17s ago)   105s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   5h47m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-db   0/1     1            0           105s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-db-6b545c4c57   1         1         0       105s
```
The pod logs are non-existent:
```console
% kubectl logs demo-db-6b545c4c57-nhfnm
%
```
I also tried using containerd instead of dockerd (moby), but got the same results.
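(A sketch of standard commands that can surface more detail when `kubectl logs` comes back empty; the pod name is taken from the output above and will differ per run.)
```console
# Logs from the previous, crashed container instance, if it wrote any
kubectl logs demo-db-6b545c4c57-nhfnm --previous

# Events, last state, and exit code for the pod
kubectl describe pod demo-db-6b545c4c57-nhfnm

# Recent cluster-wide events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp
```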
c
Do the containerd logs say anything interesting?
What OS are you trying this all on? What k3s version?
Oh nm, I see your node list output up above.
You're running Alpine Linux in a VM?
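(A minimal sketch of how one might check those containerd logs under Rancher Desktop, assuming `rdctl` is on the PATH; the log path shown is the K3s default and may differ.)
```console
# Open a shell inside the Rancher Desktop VM
rdctl shell

# Inside the VM: inspect the K3s-managed containerd log
tail -n 200 /var/lib/rancher/k3s/agent/containerd/containerd.log
```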