# general
f
Don't have access to the telemetry anymore but I'm sure there's still more than "we'd" like. I can only think of one person who is both still a current employee (not me anymore) and worked on 1.x in any capacity. And they were a manager, so probably not going to be helpful to you. It's been 5+ years since development moved on to 2.0. But if you have a specific question I can try to answer as best as I remember from a previous life…
l
I remember you helped back then to set up my server. Maybe you can help with the following:
• I had a server I set up manually; I've terraformed a similar infrastructure and am doing the config with Ansible. Ideally this should also make it disaster-recovery proof (or at least limit the damage), since a new instance can be up in a few minutes.
• When I spin up the instance I use the DB from the old setup, but at a new URL. I would like to avoid any manual steps, but I have no clue in which table of the DB the CATTLE_URL is set up, or whether that is the only configuration to change. I also see references to CATTLE_URL hardcoded in deployments 😞. Also, I am unable to add a host. No matter what I do, and despite using a public IP (or none during the agent registration, allowing automatic detection), the agent container spins up but never comes alive on the server (roughly the command sketched below). Initially the agent gave a DNS resolution error because it was trying to use the host's /etc/hosts file, but I fixed it by spinning it up with --dns 8.8.8.8. Despite seeing no errors in the agent logs I still see no host on the server, and I have no clue if there is a log on the server that I can explore to see what is happening on that side, whether anything is networking related, etc.
• It is really a shame Rancher 1.x with Cattle is gone. RKE etc. just add a lot more complexity. If you know of any alternatives let me know. I have investigated Portainer, terraformed Swarm -> deploy Portainer and Traefik, but again it looks more complex than running the old Rancher 1.x.
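(For context, the agent registration command being described is roughly the one below. This is a sketch from memory of the Rancher 1.x docs; the image tag, server URL, IP, and token are placeholders, not values from this conversation.)

```sh
# Sketch of a Rancher 1.x agent registration run (placeholders throughout).
# --dns 8.8.8.8 works around the /etc/hosts resolution issue mentioned above;
# CATTLE_AGENT_IP pins the address the server should use to reach this host.
sudo docker run -d --privileged \
  --dns 8.8.8.8 \
  -e CATTLE_AGENT_IP="203.0.113.10" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 \
  http://rancher.example.com:8080/v1/scripts/<registration-token>
```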
f
The settings are in a table called `setting`, and the registration URL is the one called `api.host`. I get it, we all liked Cattle more too, but spending time automating your way further into a dead end now doesn't seem like a very good idea... Every bit of Rancher, the Docker versions it worked with, the JRE it's running on, the OS the images are built with, the network stacks, etc. are entirely abandoned by everyone involved now, and eventually something is going to permanently break (or have some widespread CVE). There are a couple of others, but anything other than Kubernetes is still fighting the rest of the world. k3s (which Rancher made) is simpler to manage and lighter, and can be backed by a MySQL DB (or embedded SQLite) instead of dealing with etcd.
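(A minimal sketch of inspecting and changing that setting against the Cattle MySQL database. The database name `cattle` and the `name`/`value` column names are assumptions; verify the schema before changing anything.)

```sh
# Assumption: the DB is called "cattle" and the setting table has name/value columns.
mysql -h old-db.example.com -u cattle -p cattle \
  -e "SELECT name, value FROM setting WHERE name = 'api.host';"

# Once the new server URL is known, point registration at it:
mysql -h old-db.example.com -u cattle -p cattle \
  -e "UPDATE setting SET value = 'http://new-rancher.example.com:8080' WHERE name = 'api.host';"
```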
FWIW we're sort of trying to solve this in a different way now @ acorn.io with most of the early former Ranchers… Instead of trying to make k8s easier (which never worked out; half the people want every possible knob), we're assuming k8s is just there and it's someone else's job to manage it, and providing a simpler developer experience on top that happens to run on k8s behind the curtain. So not really simpler if you're the one who has to run the cluster, but if you can get the cluster from someone else (EKS/GKE/an ops team that likes managing k8s/possibly us as a SaaS), then the people who just want to run apps on it don't have to know anything about it.
l
I don't know about the managed one... I generally think that anything on top of k8s still might suffer from the complexity it has. I.e. with RKE/k8s I tried to terraform and config them too. I spent a lot of time but it was just really complex, because you cannot just tail a few logs if something doesn't work; you need to know the architecture behind it and where things might fail. I noticed you started Acorn; from a quick read of the homepage you went for a custom file 🙂 so no docker-compose allowed natively. Everything new needs a bit of learning, and spinning up a managed k8s is more expensive than a single server on Hetzner: https://www.hetzner.com/cloud I think I am probably too fixated on pricing and I need to look into a managed solution for the long run, but I will probably give Portainer another shot for now, as it looks like the only one still supporting something relatively simple like Swarm.
f
k3s pointed at MySQL or something is significantly simpler, if you've never tried it. Sure, like I said, it's not exactly aimed at you because we're just punting on someone else providing k8s. No, not Compose, but the syntax is not that different really, and it's more powerful in a few important ways, like first-class variables and conditionals (it's basically a subset of cuelang) instead of batshit string templating that you hope turns into valid YAML on render (like Helm). Compose doesn't really map well to any other system because it has networks and tons of other Docker-isms baked in. We tried at one time, other people have, Docker has a half-hearted sorta-run-a-stack-on-k8s thing in Desktop, etc., and they all suck. Either it's simple enough that you don't need a tool to convert it, or it's not and it's not gonna work. And Swarm itself has been pretty neglected for years, split up with the sale to Mirantis, etc.
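(For concreteness, the MySQL-backed k3s setup being suggested looks roughly like this, following the k3s external-datastore docs; hostnames and credentials below are placeholders.)

```sh
# k3s server backed by an external MySQL database instead of etcd.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://k3s:changeme@tcp(db.example.com:3306)/k3s"

# Single-server with the embedded SQLite datastore is just the default install:
# curl -sfL https://get.k3s.io | sh -
```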
l
So really, in your opinion, there is no other way other than k8s?
And I mean Kubernetes, not Rancher k8s.
f
There's others, and if it's for your home lab or something purely for yourself because you like it better, sure? But if you're running a business and expecting someone else to be able to maintain it someday if you grow or get hit by a bus, pretty much yeah k8s is it
l
I need to watch out very carefully for buses now 🙂
f
But of all the choices sticking to 1.x in almost-2023 when we gave up on it in 2017 has to be the worst 🙂.
👍 1
l
Ok, I'm going to take a look at k3s again. I'll time-box this for the week. If I don't get a stable setup I'll have to go managed.
👋 You could run a load balancer on Rancher 1.x. Is the approach now that load balancers (or Ingress, as I think it's called in k8s) are also not a Rancher responsibility anymore?
f
Ingress is like a shared HTTP "router" for the cluster listening on 80/443; you define rules (in Ingress resources) to say foo.com goes to service A, bar.com to B, etc. This is pretty much always available, usually provided by nginx or Traefik. TCP/UDP load balancers are a separate thing: a Service resource with type=LoadBalancer. When you make one of those, the controller for your "cloud" spins up a balancer and hooks it up, i.e. an ELB in Amazon land. These are only available if configured (i.e. not by default in RKE), because you need a pool of IPs, etc.
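(A minimal example of the host-based routing just described; service and host names are made up.)

```sh
# foo.com -> service-a:80 via the cluster's ingress controller.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80
EOF
```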
k3s ships with Traefik for ingress, and a thing we made called Klipper for LBs, which is a sort of trivial implementation that just uses a port on the host IPs and manages an iptables rule, so that LoadBalancer services can work out of the box instead of you having to configure something. If you need more than that, there are things like MetalLB that do more, or you configure the cloud provider you're running in, if there is one.
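(And the LoadBalancer counterpart; on a stock k3s install the bundled Klipper/ServiceLB controller should expose this on a host port of the node IPs without extra configuration, assuming the port is free there. Names and ports are illustrative.)

```sh
# type=LoadBalancer Service; Klipper/ServiceLB handles it out of the box on k3s.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: service-a-lb
spec:
  type: LoadBalancer
  selector:
    app: service-a
  ports:
  - port: 8080
    targetPort: 80
EOF
```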
l
Also, are you sure k3s ships with Traefik? Here https://docs.ranchermanager.rancher.io/how-to-guides/new-user-guides/kubernetes-cluster-setup/k3s-for-rancher#1-install-kuberne[…]-up-the-k3s-server it mentions that the load balancer should be external.
f
Traefik runs as an ingress controller, listening on each node and routing HTTP requests that come to it to services. That doesn't address distributing requests among the nodes to get to Traefik in the first place. Or similarly for the k8s API itself, which is what that link is talking about.
👍 1
No, I mostly do UI, not Terraform. Stop trying to automate things you don't even know if you're going to keep using 🙂
l
Automation is not just a cool thing though. It is effectively a great way to have repeatable, automated steps that you otherwise need to remember (I have the memory of a goldfish) or heavily document to know what you have done in the past.
Also, one bit I don't get from the installation script is how you qualify one machine as a server vs. an agent. I don't see different scripts for the different types of machines.
f
Yeah, same script, different input (you can explicitly say `server` or `agent` if you want)
l
The quick start suggests passing K3S_URL ... and then it will just run as an agent/worker/node
f
Yeah that works too
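(Putting the two answers together, a sketch of the server/agent split with the stock install script; the join-token path and K3S_URL usage follow the k3s quick-start, and hostnames are placeholders.)

```sh
# First machine: install as a server (the default role).
curl -sfL https://get.k3s.io | sh -

# Grab the join token from the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# Additional machines: setting K3S_URL makes the same script install the agent role.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://server.example.com:6443" \
  K3S_TOKEN="<node-token from above>" \
  sh -
```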
l
Do you know if the agent needs to connect to MySQL too, or if it doesn't need the connection details?