# epinio
c
if you try it out and it doesn't work, let me know. I may have missed something
b
Thanks, will try it out and let you know... will put it in my git gist... and modify from there.
I got started. First, I think `git co .` means `git checkout .`, I hope. But I got stuck on `make acceptance-cluster-setup`: when I export `KUBECONFIG=$PWD/tmp/acceptance-kubeconfig` and look at the pods via `k9s`, I saw this status on all the pods. Any tips?
Scratch the above issue: after leaving it overnight, it became ready. I will try to reproduce it later by deleting it, but let me move on.
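As an aside, instead of waiting overnight and eyeballing k9s, a generic retry loop can poll readiness for you. This is my own sketch, not from the epinio repo; the function name and attempt counts are made up, echoing the "after N attempts!" style the scripts print:

```shell
# retry: run a command up to N times, sleeping between failures.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    echo "attempt $i/$attempts failed, retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  echo "Failed to run $* after $attempts attempts!" >&2
  return 1
}

# Example (requires a cluster; kubeconfig path from the steps above):
# export KUBECONFIG=$PWD/tmp/acceptance-kubeconfig
# retry 30 kubectl wait --for=condition=Ready pods --all -A --timeout=10s

retry 3 true   # trivial demo: succeeds on the first attempt
```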
This is what I am doing from the script point of view (https://gist.github.com/gitricko/f3b2a2636f7607bfabb3396c770e9750). However, I think this workflow does not work on macOS. Currently, I am facing this issue on the last step. The output of `make prepare_environment_k3d` is:
```
...
Creating the PVC
persistentvolumeclaim/epinio-binary unchanged
Creating the dummy copier Pod
pod/epinio-copier created
Waiting for dummy pod to be ready
pod/epinio-copier condition met
Copying the binary on the PVC
total 80504
-rwxr-xr-x    1 503      staff     82435126 Sep 28 05:33 epinio
Deleting the epinio-copier to avoid multi-attach issue between pods
pod "epinio-copier" deleted
Patching the epinio-server deployment to use the copied binary
deployment.apps/epinio-server patched
Waiting for the rollout of the deployment to complete
Waiting for deployment spec update to be observed...
Waiting for deployment "epinio-server" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "epinio-server" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "epinio-server" rollout to finish: 1 old replicas are pending termination...
deployment "epinio-server" successfully rolled out
-------------------------------------
Cleanup old settings
-------------------------------------
Trying to login../scripts/prepare-environment-k3d.sh: line 47: ./dist/epinio-darwin-amd64: No such file or directory
../scripts/prepare-environment-k3d.sh: line 47: ./dist/epinio-darwin-amd64: No such file or directory
../scripts/prepare-environment-k3d.sh: line 47: ./dist/epinio-darwin-amd64: No such file or directory
../scripts/prepare-environment-k3d.sh: line 47: ./dist/epinio-darwin-amd64: No such file or directory
../scripts/prepare-environment-k3d.sh: line 47: ./dist/epinio-darwin-amd64: No such file or directory
Failed to run ./dist/epinio-darwin-amd64 login -u admin -p password --trust-ca https://epinio.172.19.0.2.omg.howdoi.website after 5 attempts!
make: *** [prepare_environment_k3d] Error
```
The first issue is that I only have the `dist/epinio-linux-amd64` binary, but it is looking for `dist/epinio-darwin-amd64`. Next, I tried to use my own epinio to log in, but I can't: looking at Docker, only port 80 was open, and no 443, in the `k3d-epinio-acceptance` cluster. I will try to install this on a C9 Linux machine tomorrow to see if it works.
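For the linux/darwin binary mismatch, a small platform probe like this can derive the matching `dist/` name. This is only a sketch: the naming follows the paths in the error output above, and whether the epinio scripts actually select binaries this way is an assumption.

```shell
# Derive the dist/ binary name from the current OS.
case "$(uname -s)" in
  Darwin) os=darwin ;;
  Linux)  os=linux ;;
  *) echo "unsupported OS: $(uname -s)" >&2; exit 1 ;;
esac
arch=amd64   # assumes amd64; an arm64 Mac would need a different suffix
binary="dist/epinio-${os}-${arch}"
echo "$binary"
```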
Good thing is that I saw this in my cluster: a Dex pod!
c
• Sorry, I mistakenly assumed you would run this on Linux. You'd need to adapt the script(s) to build and use the darwin binary.
• Yes, "git co" is my local alias for "git checkout" (I was just copying and pasting from my system).
• By default, we don't bind the k3d port to any host ports. You can do that by setting this environment variable to `true`: https://github.com/epinio/epinio/blob/f9ab365e2fc8f469841dd083647c25a11f6d7ccf/scripts/acceptance-cluster-setup.sh#L62. You shouldn't need that, though, unless the k3d container's IP address is not accessible from where you will be accessing your cluster (e.g. you want to access the cluster with the host's IP address instead). In that case you would also need to manually set `EPINIO_SYSTEM_DOMAIN` (https://github.com/epinio/epinio/blob/f9ab365e2fc8f469841dd083647c25a11f6d7ccf/scripts/prepare-environment-k3d.sh#L84) to something like `you.host.ip.address.sslip.io`, because by default the script will use the container's IP address.
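To make that last point concrete, here is a minimal sketch of the override. `EPINIO_SYSTEM_DOMAIN` is the variable named in the linked script; the IP below is a placeholder documentation address, not a real host:

```shell
# Point the system domain at the host's IP via sslip.io instead of the
# k3d container's IP. 203.0.113.10 is a placeholder; use your host's IP.
HOST_IP=203.0.113.10
export EPINIO_SYSTEM_DOMAIN="${HOST_IP}.sslip.io"
echo "$EPINIO_SYSTEM_DOMAIN"
# then run: make prepare_environment_k3d
```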
This development flow has been mostly adapted to the core team's needs and we are all working on pretty similar setups. That's why it's not very flexible.