# k3s
a
I am unable to pull image from private reg
k describe pod af-deploy-69bddb7795-brtjz -n acredifast 

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  31s                default-scheduler  Successfully assigned acredifast/af-deploy-69bddb7795-brtjz to pumpkin
  Normal   Pulling    19s (x2 over 31s)  kubelet            Pulling image "my-registry:30699/backend:v1"
  Warning  Failed     19s (x2 over 31s)  kubelet            Failed to pull image "my-registry:30699/backend:v1": failed to pull and unpack image "my-registry:30699/backend:v1": failed to resolve reference "my-registry:30699/backend:v1": unable to read CA cert "/etc/rancher/k3s/certs.d/my-registry:30699/tls.crt": open /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt: no such file or directory
  Warning  Failed     19s (x2 over 31s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x2 over 31s)   kubelet            Back-off pulling image "my-registry:30699/backend:v1"
  Warning  Failed     5s (x2 over 31s)   kubelet            Error: ImagePullBackOff
jsimons@blueberry:~/k3s/af$
jsimons@blueberry:~/k3s/af$ k get all -n acredifast                                                        
NAME                             READY   STATUS         RESTARTS   AGE
pod/af-deploy-776d76bff9-qm6cx   0/1     ErrImagePull   0          7s

NAME                  TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                       AGE
service/af-nodeport   NodePort   10.43.8.230   <none>        8080:31170/TCP,22:32740/TCP   6s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/af-deploy   0/1     1            0           7s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/af-deploy-776d76bff9   1         1         0       7s
jsimons@blueberry:~/k3s/af$ k describe pod af-deploy-776d76bff9-qm6cx -n acredifast                        
Name:             af-deploy-776d76bff9-qm6cx
Namespace:        acredifast
Priority:         0
Service Account:  default
Node:             pumpkin/10.0.0.2
Start Time:       Mon, 18 Aug 2025 12:39:11 -0700
Labels:           app=af-backend
                  pod-template-hash=776d76bff9
Annotations:      <none>
Status:           Pending
IP:               10.42.1.132
IPs:
  IP:           10.42.1.132
Controlled By:  ReplicaSet/af-deploy-776d76bff9
Containers:
  af-backend:
    Container ID:   
    Image:          my-registry:30699/backend:latest
    Image ID:       
    Port:           5036/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        500m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7s82v (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7s82v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32s                default-scheduler  Successfully assigned acredifast/af-deploy-776d76bff9-qm6cx to pumpkin
  Normal   Pulling    17s (x2 over 31s)  kubelet            Pulling image "my-registry:30699/backend:latest"
  Warning  Failed     17s (x2 over 31s)  kubelet            Failed to pull image "my-registry:30699/backend:latest": failed to pull and unpack image "my-registry:30699/backend:latest": failed to resolve reference "my-registry:30699/backend:latest": unable to read CA cert "/etc/rancher/k3s/certs.d/my-registry:30699/tls.crt": open /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt: no such file or directory
  Warning  Failed     17s (x2 over 31s)  kubelet            Error: ErrImagePull
  Normal   BackOff    4s (x2 over 30s)   kubelet            Back-off pulling image "my-registry:30699/backend:latest"
  Warning  Failed     4s (x2 over 30s)   kubelet            Error: ImagePullBackOff
jsimons@blueberry:~/k3s/af$ cat Deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: af-deploy 
  namespace: acredifast
spec:
  selector:
    matchLabels:
      app: af-backend 
  template:
    metadata:
      labels:
        app: af-backend
    spec:
      containers:
        - name: af-backend
          image: my-registry:30699/backend:latest 
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5036 
jsimons@blueberry:~/k3s/af$ cat Deploy.yaml ^C
(failed reverse-i-search)`curl': kubectl exec -it -n flux-system source-controller-5f6985f6c4-l9tt7 -- ^Crl forgejo.kraytonian.local    
jsimons@blueberry:~/k3s/af$ k get all -n acredifast                                                        
NAME                             READY   STATUS         RESTARTS   AGE
pod/af-deploy-776d76bff9-qm6cx   0/1     ErrImagePull   0          114s

NAME                  TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                       AGE
service/af-nodeport   NodePort   10.43.8.230   <none>        8080:31170/TCP,22:32740/TCP   113s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/af-deploy   0/1     1            0           114s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/af-deploy-776d76bff9   1         1         0       114s
jsimons@blueberry:~/k3s/af$ k describe deploy af-deploy -n acredifast                 
Name:                   af-deploy
Namespace:              acredifast
CreationTimestamp:      Mon, 18 Aug 2025 12:39:11 -0700
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=af-backend
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=af-backend
  Containers:
   af-backend:
    Image:      my-registry:30699/backend:latest
    Port:       5036/TCP
    Host Port:  0/TCP
    Limits:
      cpu:         500m
      memory:      128Mi
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   af-deploy-776d76bff9 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m27s  deployment-controller  Scaled up replica set af-deploy-776d76bff9 from 0 to 1
jsimons@blueberry:~/k3s/af$ k describe pod af-deploy-776d76bff9-qm6cx -n acredifast
Name:             af-deploy-776d76bff9-qm6cx
Namespace:        acredifast
Priority:         0
Service Account:  default
Node:             pumpkin/10.0.0.2
Start Time:       Mon, 18 Aug 2025 12:39:11 -0700
Labels:           app=af-backend
                  pod-template-hash=776d76bff9
Annotations:      <none>
Status:           Pending
IP:               10.42.1.132
IPs:
  IP:           10.42.1.132
Controlled By:  ReplicaSet/af-deploy-776d76bff9
Containers:
  af-backend:
    Container ID:   
    Image:          my-registry:30699/backend:latest
    Image ID:       
    Port:           5036/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        500m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7s82v (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7s82v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m36s                default-scheduler  Successfully assigned acredifast/af-deploy-776d76bff9-qm6cx to pumpkin
  Normal   Pulling    67s (x4 over 2m35s)  kubelet            Pulling image "my-registry:30699/backend:latest"
  Warning  Failed     67s (x4 over 2m35s)  kubelet            Failed to pull image "my-registry:30699/backend:latest": failed to pull and unpack image "my-registry:30699/backend:latest": failed to resolve reference "my-registry:30699/backend:latest": unable to read CA cert "/etc/rancher/k3s/certs.d/my-registry:30699/tls.crt": open /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt: no such file or directory
  Warning  Failed     67s (x4 over 2m35s)  kubelet            Error: ErrImagePull
  Normal   BackOff    4s (x10 over 2m34s)  kubelet            Back-off pulling image "my-registry:30699/backend:latest"
  Warning  Failed     4s (x10 over 2m34s)  kubelet            Error: ImagePullBackOff
jsimons@blueberry:~/k3s/af$ curl -u "user:pass" https://my-registry:30699/v2/backend/tags/list
{"name":"backend","tags":["latest"]}
jsimons@blueberry:~/k3s/af$ ls -la /etc/rancher/k3s/certs.d/my-registry\:30699/tls.crt 
-rw-r--r-- 1 root root 1846 Aug 17 19:01 /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt
c
Failed to pull image "my-registry:30699/backend:latest": failed to pull and unpack image "my-registry:30699/backend:latest": failed to resolve reference "my-registry:30699/backend:latest": unable to read CA cert "/etc/rancher/k3s/certs.d/my-registry:30699/tls.crt": open /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt: no such file or directory
What exactly did you put in your registries.yaml that is making it look for that CA file?
Or did you need to configure my-registry:30699 to use http instead of https?
/etc/rancher/k3s/certs.d is not something that K3s creates or manages… where is that coming from?
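(For reference, a minimal `/etc/rancher/k3s/registries.yaml` for a registry with a custom CA might look like the sketch below. The hostname, port, credentials, and CA path are placeholders taken from this thread; the endpoint is commonly written without the `/v2` suffix, and the `ca_file` path must exist on every node.)

```yaml
mirrors:
  my-registry:30699:
    endpoint:
      - "https://my-registry:30699"
configs:
  my-registry:30699:
    auth:
      username: user            # placeholder
      password: pass            # placeholder
    tls:
      ca_file: /etc/ssl/certs/my-registry-ca.crt  # any path, same on all nodes
```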
a
jsimons@blueberry:~/k3s/af$ cat /etc/rancher/k3s/registries.yaml 
mirrors:
  my-registry:30699:
    endpoint:
      - https://my-registry:30699/v2
configs:
  my-registry:30699:
    auth:
      username: user
      password: pass
    tls:
      ca_file: /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt

jsimons@blueberry:~/k3s/af$ ls -la  /etc/rancher/k3s/certs.d/my-registry\:30699/
total 12
drwxr-xr-x 2 root root 4096 Aug 17 19:01 .
drwxr-xr-x 3 root root 4096 Aug 17 19:00 ..
-rw-r--r-- 1 root root 1846 Aug 17 19:01 tls.crt
jsimons@blueberry:~/k3s/af$ cat /etc/rancher/k3s/registries.yaml 
mirrors:
  my-registry:30699:
    endpoint:
      - https://my-registry:30699/v2
configs:
  my-registry:30699:
    auth:
      username: myuser
      password: mypasswd
    tls:
      ca_file: /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt

jsimons@blueberry:~/k3s/af$ ls -la  /etc/rancher/k3s/certs.d/my-registry\:30699/
total 12
drwxr-xr-x 2 root root 4096 Aug 17 19:01 .
drwxr-xr-x 3 root root 4096 Aug 17 19:00 ..
-rw-r--r-- 1 root root 1846 Aug 17 19:01 tls.crt
it won't let me reply, but I added that myself
("/etc/rancher/k3s/certs.d is not something that K3s creates or manages… where is that coming from")
I will change the path back to this, which did not work at first:
jsimons@blueberry:~/k3s/af$ cat /usr/local/share/ca-certificates/tls.crt
c
Do you have multiple nodes?
a
yes
c
Did you configure registries.yaml and place the file at /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt on all of them?
The error message indicates that the node where the pod is trying to run does not have that file
open /etc/rancher/k3s/certs.d/my-registry:30699/tls.crt: no such file or directory
error message is pretty cut and dried
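To stage that fix, the CA file has to land at the same path on every node. A dry-run sketch is below: it only generates the commands (into `sync-ca.sh`) rather than running them, and the node names, registry path, and agent service name (`k3s-agent`) are assumptions from this thread.

```shell
# Generate (but do not execute) the per-node commands to install the CA cert.
# Node names and the k3s-agent service name are assumptions; adjust to taste.
NODES="pumpkin strawberry"
CERT_DIR='/etc/rancher/k3s/certs.d/my-registry:30699'

: > sync-ca.sh
for node in $NODES; do
  {
    echo "scp tls.crt $node:/tmp/tls.crt"
    echo "ssh $node sudo install -D -m 0644 /tmp/tls.crt '$CERT_DIR/tls.crt'"
    echo "ssh $node sudo systemctl restart k3s-agent"
  } >> sync-ca.sh
done
cat sync-ca.sh
```

Review `sync-ca.sh`, then run it (or paste the commands) once the contents look right for your cluster.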
a
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned acredifast/af-deploy-69bddb7795-rp9lk to blueberry
  Normal   Pulling    34s                kubelet            Pulling image "my-registry:30699/backend:v1"
  Normal   Pulled     34s                kubelet            Successfully pulled image "my-registry:30699/backend:v1" in 353ms (353ms including waiting). Image size: 92103741 bytes.
  Normal   Created    18s (x3 over 33s)  kubelet            Created container: af-backend
  Normal   Started    18s (x3 over 33s)  kubelet            Started container af-backend
  Normal   Pulled     18s (x2 over 32s)  kubelet            Container image "my-registry:30699/backend:v1" already present on machine
  Warning  BackOff    6s (x4 over 31s)   kubelet            Back-off restarting failed container af-backend in pod af-deploy-69bddb7795-rp9lk_acredifast(89df1cf0-6662-422f-8715-701797df5516)

it looks like that worked
thanks
c
yep. make sure that your files are where you said they are, on all your nodes.
a
@creamy-pencil-82913 if only it were that cut and dry.
"/etc/hosts" 16L, 568B written
pthiel@strawberry:~$ cat registries.yaml 
mirrors:
  my-registry:30699:
    endpoint:
      - https://my-registry:30699/v2
configs:
  my-registry:30699:
    auth:
      username: user
      password: pass
    tls:
      ca_file: /usr/local/share/ca-certificates/tls.crt
pthiel@strawberry:~$ cat /usr/local/share/ca-certificates/ 
forgejo-kraytonian-server.crt  tls.crt
kforgejo-server.crt            
pthiel@strawberry:~$ cat /usr/local/share/ca-certificates/tls.crt

jsimons@blueberry:~/k3s/af$ k describe pod af-deploy-76584fdcfb-g2rvr -n acredifast                        
Name:             af-deploy-76584fdcfb-g2rvr
Namespace:        acredifast
Priority:         0
Service Account:  default
Node:             strawberry/10.0.0.4
Start Time:       Mon, 18 Aug 2025 23:09:30 -0700
Labels:           app=af-backend
                  pod-template-hash=76584fdcfb
Annotations:      <none>
Status:           Pending
IP:               10.42.3.196
IPs:
  IP:           10.42.3.196
Controlled By:  ReplicaSet/af-deploy-76584fdcfb
Containers:
  af-backend:
    Container ID:   
    Image:          my-registry:30699/backend:v2
    Image ID:       
    Port:           5036/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        500m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6lxj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-q6lxj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  25s                default-scheduler  Successfully assigned acredifast/af-deploy-76584fdcfb-g2rvr to strawberry
  Normal   BackOff    24s                kubelet            Back-off pulling image "my-registry:30699/backend:v2"
  Warning  Failed     24s                kubelet            Error: ImagePullBackOff
  Normal   Pulling    11s (x2 over 25s)  kubelet            Pulling image "my-registry:30699/backend:v2"
  Warning  Failed     11s (x2 over 25s)  kubelet            Failed to pull image "my-registry:30699/backend:v2": failed to pull and unpack image "my-registry:30699/backend:v2": failed to resolve reference "my-registry:30699/backend:v2": failed to do request: Head "https://my-registry:30699/v2/backend/manifests/v2": tls: failed to verify certificate: x509: certificate signed by unknown authority
  Warning  Failed     11s (x2 over 25s)  kubelet            Error: ErrImagePull
I am creating a configmap to resolve the issue
c
you need to put the whole cert chain in the file
The cert chain is incomplete or is not rooted in a cert included in the file you specified
a
@creamy-pencil-82913 would I be able to do it with
Doing this is just not clear in the k3s documentation and should be adjusted.
c
the CA cert file should include the root CA, and any intermediate CAs necessary to complete the chain between your registry’s server cert and the root CA cert.
This is pretty standard for use of custom CAs
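To make the "whole chain" point concrete, here is an illustration with throwaway certificates generated on the spot (all names are made up): a bundle containing only the intermediate fails to verify the server cert, while a bundle with the intermediate plus the root succeeds. The same rule applies to the file `ca_file` points at.

```shell
# Demo: the CA bundle must contain every CA in the chain (root + intermediates).
dir=$(mktemp -d)
cd "$dir"

# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=demo-root-ca" -days 1

# 2. Intermediate CA, signed by the root
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
  -subj "/CN=demo-intermediate-ca"
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out inter.crt -days 1 -extfile ca.ext

# 3. Server (registry) cert, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=my-registry"
openssl x509 -req -in server.csr -CA inter.crt -CAkey inter.key -CAcreateserial \
  -out server.crt -days 1

# The bundle to reference from ca_file: intermediate(s) plus root, concatenated
cat inter.crt root.crt > tls.crt

openssl verify -CAfile tls.crt server.crt    # server.crt: OK
openssl verify -CAfile inter.crt server.crt || echo "incomplete chain fails"
```

Concatenating the PEM blocks in one file is all "include the chain" means; order (leaf-most first) is the conventional choice.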
a
I am new to this, but can you confirm this is what I have to do? This is like my first time doing this kind of stuff: https://pages.cs.wisc.edu/~zmiller/ca-howto/
@creamy-pencil-82913 I was able to get it done
Thanks for your help you rock @creamy-pencil-82913
s
I think we all agree with that
a
@creamy-pencil-82913 it works but still errors out
jsimons@blueberry:~$ k describe pod af-deploy-7985bff656-kszl5 -n acredifast 
Name:             af-deploy-7985bff656-kszl5
Namespace:        acredifast
Priority:         0
Service Account:  default
Node:             strawberry/10.0.0.4
Start Time:       Wed, 20 Aug 2025 23:46:07 -0700
Labels:           app=af-backend
                  pod-template-hash=7985bff656
Annotations:      <none>
Status:           Running
IP:               10.42.3.213
IPs:
  IP:           10.42.3.213
Controlled By:  ReplicaSet/af-deploy-7985bff656
Containers:
  af-backend:
    Container ID:   containerd://4c0fde11214eadfe1e869829acb35f121383ed396e37a11a1833b1e795240cac
    Image:          my-registry:30772/backend:v1
    Image ID:       my-registry:30772/backend@sha256:396f070a6f627e3a287ac522db32c76aacc51c0d45a52135e3a9f2e90f24f682
    Port:           5036/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 20 Aug 2025 23:46:08 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        500m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4t5qh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-4t5qh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                           Age                   From     Message
  ----     ------                           ----                  ----     -------
  Warning  FailedToRetrieveImagePullSecret  3m3s (x742 over 15h)  kubelet  Unable to retrieve some image pull secrets (reg-login-secret); attempting to pull the image may not succeed.
jsimons@blueberry:~$ k get all -n acredifast                                
[sudo] password for jsimons: 
NAME                             READY   STATUS    RESTARTS   AGE
pod/af-deploy-7985bff656-kszl5   1/1     Running   0          16h

NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
service/af-nodeport   NodePort   10.43.60.119   <none>        8080:31163/TCP,22:32673/TCP   16h

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/af-deploy   1/1     1            1           16h

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/af-deploy-7985bff656   1         1         1       16h
c
Doesn’t appear to be the same problem? it says that your image pull secret doesn’t exist
Unable to retrieve some image pull secrets (reg-login-secret)
does the secret exist in the same namespace as the pod?
you did a kubectl get all on the namespace and there is no secret so… there's no secret
a
it exists
jsimons@blueberry:~$ k get secrets -n registry                                                                                                                           
NAME               TYPE                DATA   AGE
reg-auth-secret    Opaque              1      18h
reg-certs-secret   kubernetes.io/tls   2      18h
c
you can’t reference a secret in a different namespace. This is in the docs.
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
To use image pull secrets for a Pod (or a Deployment, or other object that has a pod template that you are using), you need to make sure that the appropriate Secret does exist in the right namespace. The namespace to use is the same namespace where you defined the Pod.
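In other words, the pull secret has to be recreated in the pod's namespace, e.g. with `kubectl create secret docker-registry reg-login-secret -n acredifast --docker-server=… --docker-username=… --docker-password=…`. The sketch below builds the same `.dockerconfigjson` payload by hand just to show what that secret contains; the registry address and credentials are placeholders from this thread.

```shell
# What a kubernetes.io/dockerconfigjson secret carries: a JSON blob keyed by
# registry address, with the credentials and their base64 "auth" form.
# "user:pass" and my-registry:30772 are placeholders.
auth=$(printf 'user:pass' | base64)
cat > dockerconfig.json <<EOF
{"auths":{"my-registry:30772":{"username":"user","password":"pass","auth":"$auth"}}}
EOF
cat dockerconfig.json
```

The secret must live in `acredifast` (the pod's namespace) for `imagePullSecrets: [{name: reg-login-secret}]` to find it.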
a
i created another one
jsimons@blueberry:~$ k get secrets -n acredifast                                                                                                                         
NAME               TYPE                             DATA   AGE
reg-login-secret   kubernetes.io/dockerconfigjson   1      8s
jsimons@blueberry:~$
c
Did you notice that the imagePullSecrets: field in the pod spec only allows you to specify a secret name, not a namespace?
How would it know to go looking in a completely different random namespace for a secret with that name?
a
jsimons@blueberry:~$ cat k3s/af/Deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: af-deploy 
  namespace: acredifast
spec:
  selector:
    matchLabels:
      app: af-backend 
  template:
    metadata:
      labels:
        app: af-backend
    spec:
      containers:
        - name: af-backend
          image: my-registry:30772/backend:v1 
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5036
      imagePullSecrets:
        - name: reg-login-secret
i have it
c
delete the pod and when it gets recreated, you will not see that "Unable to retrieve some image pull secrets (reg-login-secret)" error, because the secret will exist in the correct namespace.
a
i think it got deleted
c
you need to read all the docs when using features. If you just plug in random things it is less likely to work
a
I do read the docs. I was following a tutorial and have not gotten to that part yet; I am on the storage part right now: https://kubernetes.io/docs/concepts/storage/ I do have a goal of getting through the whole thing, just don't want to use Docker Hub at the moment.
I will strive to be more on point next time.