
thousands-mechanic-72394

07/13/2022, 7:34 PM
In a Kubernetes workshop at UberConf 2022, a provided deployment based on MySQL 5.5.x would never start and entered a CrashLoopBackOff state. What was frustrating was that there were no logs for the container that failed. The k3s logs told me an error had occurred and the deployment went to CrashLoopBackOff. (The image runs fine in Docker.) I upgraded the Docker image to be based on MySQL 8.0.x and it worked fine. I ran an image scan against mysql:5.5.45 and found 38 critical issues, but no critical issues were found for mysql:8.0.29. Given the absence of log information, I’m wondering: does K3s / Rancher Desktop scan the image before deployment and block the deployment if critical vulnerabilities are found? (edited)
@fast-garage-66093 answered this in the Rancher Desktop channel before I moved the message here. The answer is that no scanning is done before deployment. So the mystery remains: why wouldn’t the MySQL 5.5 image work in K3s when it runs fine in Docker, while the 8.0.x image started fine in K3s?
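For reference, the scan counts can be reproduced from the command line with an open-source scanner such as Trivy (a sketch; it assumes Trivy is installed locally and is comparable to whatever scanner produced the numbers above):
# Report only critical findings for each image
trivy image --severity CRITICAL mysql:5.5.45
trivy image --severity CRITICAL mysql:8.0.29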

creamy-pencil-82913

07/13/2022, 7:41 PM
can you provide an example pod spec?

thousands-mechanic-72394

07/13/2022, 8:20 PM
Here’s a manifest I used to test this in isolation from the workshop content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-db
  labels:
    name: demo-db
    component: db
spec:
  selector:
    matchLabels:
      name: demo-db
      component: db
  replicas: 1
  template:
    metadata:
      labels:
        name: demo-db
        component: db
    spec:
      containers:
        - name: demo-db
          image: dev.local/demo-mysql:5.5.45
          imagePullPolicy: Never
          volumeMounts:
            - name: demo-db-volume
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_DATABASE
              value: registry
            - name: MYSQL_PASSWORD
              value: admin
            - name: MYSQL_ROOT_PASSWORD
              value: root+1
            - name: MYSQL_USER
              value: admin
          ports:
            - containerPort: 3306
      volumes:
        - name: demo-db-volume
          persistentVolumeClaim:
            claimName: demo-db-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-db-pv-claim
  labels:
    app: demo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: demo-db
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    name: demo-db
This always ends up in CrashLoopBackOff. The MySQL 8.0.29 version just changes the container image line:
containers:
  - name: demo-db
    image: dev.local/demo-mysql:8.0.29
The dead simple Dockerfile used to build these two images is:
#FROM mysql:5.5.45
FROM mysql:8.0.29

COPY scripts /docker-entrypoint-initdb.d/

ENTRYPOINT ["/entrypoint.sh"]
CMD ["mysqld"]
(I initially tried to diagnose the issue by commenting out the COPY line, but the container still wouldn’t start.)
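For completeness, here is roughly how the dev.local images get built so that imagePullPolicy: Never can resolve them (a sketch of the usual Rancher Desktop workflow, not taken from the thread: with the moby runtime a plain docker build is directly visible to k3s; with the containerd runtime the build goes through nerdctl’s k8s.io namespace):
# moby (dockerd) runtime: locally built images are visible to k3s as-is
docker build -t dev.local/demo-mysql:5.5.45 .
# containerd runtime: build into the namespace Kubernetes pulls from
nerdctl --namespace k8s.io build -t dev.local/demo-mysql:5.5.45 .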

creamy-pencil-82913

07/13/2022, 8:35 PM
does the mysql:5.5.45 image work fine, and it’s just your image that doesn’t work?
having the example use your private image with an imagePullPolicy of Never makes it kinda hard to replicate…
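(If it helps with replication, one way to hand a locally built image to a k3s node is to export it from Docker and import it into the node’s containerd; a sketch, assuming shell access to the node:)
docker save dev.local/demo-mysql:5.5.45 | k3s ctr images import -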
This example works fine for me, so I suspect something in your custom entrypoint is broken:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-db
  namespace: default
  labels:
    name: demo-db
    component: db
spec:
  selector:
    matchLabels:
      name: demo-db
      component: db
  replicas: 1
  template:
    metadata:
      labels:
        name: demo-db
        component: db
    spec:
      containers:
        - name: demo-db
          image: docker.io/library/mysql:5.5.45
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: demo-db-volume
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_DATABASE
              value: registry
            - name: MYSQL_PASSWORD
              value: admin
            - name: MYSQL_ROOT_PASSWORD
              value: root+1
            - name: MYSQL_USER
              value: admin
          ports:
            - containerPort: 3306
      volumes:
        - name: demo-db-volume
          persistentVolumeClaim:
            claimName: demo-db-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-db-pv-claim
  namespace: default
  labels:
    app: demo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
brandond@dev01:~/suc-test$ kubectl get node -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION          CONTAINER-RUNTIME
k3s-server-1   Ready    control-plane,master   33m   v1.24.2+k3s2   172.17.0.2    <none>        K3s dev    5.17.4-051704-generic   containerd://1.6.6-k3s1
brandond@dev01:~/suc-test$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
demo-db-7588b845c5-m78nd   1/1     Running   0          102s

thousands-mechanic-72394

07/13/2022, 11:06 PM
Of course it would work for you. 🙂
% kubectl get node -o wide           
NAME                   STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
lima-rancher-desktop   Ready    control-plane,master   5h28m   v1.23.7+k3s1   192.168.205.2   <none>        Alpine Linux v3.15   5.15.40-0-virt   docker://20.10.16
% kubectl get pod
NAME                      READY   STATUS             RESTARTS      AGE
demo-db-8bc96f44d-25vbq   0/1     CrashLoopBackOff   3 (41s ago)   89s
@creamy-pencil-82913 I tried the manifest you supplied that uses the stock MySQL image from Docker Hub. Unfortunately, it still fails, so I don’t think it’s the custom Dockerfile entrypoint:
% kubectl get all         
NAME                           READY   STATUS             RESTARTS      AGE
pod/demo-db-6b545c4c57-nhfnm   0/1     CrashLoopBackOff   4 (17s ago)   105s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   5h47m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-db   0/1     1            0           105s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-db-6b545c4c57   1         1         0       105s
The pod logs are non-existent:
% kubectl logs demo-db-6b545c4c57-nhfnm
%
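A few other places that can surface a crash reason when the log stream is empty (standard kubectl, reusing the pod name above):
kubectl logs demo-db-6b545c4c57-nhfnm --previous   # logs from the last crashed container, if it wrote any
kubectl describe pod demo-db-6b545c4c57-nhfnm      # exit code and reason under "Last State"
kubectl get events --sort-by=.lastTimestamp        # kubelet/scheduler events for the namespace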
I also tried using containerd instead of dockerd (moby), but got the same results.

creamy-pencil-82913

07/14/2022, 5:29 AM
Do the containerd logs say anything interesting?
What OS are you trying this all on? What k3s version?
Oh nm, I see your node list output up above.
You're running Alpine Linux in a VM?
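For anyone following along: with Rancher Desktop, the containerd logs live inside the Lima VM (a sketch, assuming the default k3s paths and that the bundled rdctl CLI is on the PATH):
rdctl shell                                               # open a shell inside the Rancher Desktop VM
cat /var/lib/rancher/k3s/agent/containerd/containerd.log  # log for the k3s-managed containerd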