# rancher-desktop
w
do you have multiple contexts?
$ kubectl config get-contexts
q
17?
w
hah, that's a lot of clusters!
q
a number of them are duplicated
one entry has a pretty name and one has the GKE name
w
assuming RD with k3s is running in the background, is the current-context set to the RD one?
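(for reference, a minimal check of the active context, using the default context name rancher-desktop shown below)
```
# Show which context kubectl is currently pointed at
$ kubectl config current-context

# If it isn't rancher-desktop, switch to it explicitly
$ kubectl config use-context rancher-desktop
```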
q
```
% kubectl config get-contexts|perl -ne 'next unless /^\S/;print'
CURRENT   NAME                                                                CLUSTER                                                             AUTHINFO                                                            NAMESPACE
*         rancher-desktop                                                     rancher-desktop                                                     rancher-desktop
```
the gke clusters are all quite happy
a bunch of docker/rancher/kind clusters are quite unhappy
w
maybe a firewall rule is dropping the loopback?
def not helping to see all those dupes
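(one way to prune a duplicate kubeconfig entry; the context name below is a placeholder, not one of the real entries here)
```
# List context names only, then delete the unwanted duplicate by name.
# "old-gke-dupe" is hypothetical; substitute the actual duplicate entry.
$ kubectl config get-contexts -o name
$ kubectl config delete-context old-gke-dupe
```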
q
i'm on macOS
w
yup?
q
```
% lsof -iTCP -n -P|grep 6443|wc
       0       0       0
```
according to /usr/sbin/lsof, no one is listening on 6443?
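(another quick probe, assuming the k3s API is expected on 127.0.0.1:6443; on k3s the /version endpoint is usually readable without auth)
```
# Does anything answer on the API port at all?
$ nc -z -v 127.0.0.1 6443

# If something is listening, the API server should respond
# (-k because of the self-signed cert)
$ curl -sk https://127.0.0.1:6443/version
```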
w
hmmm and the VM is running?
rdctl shell and all that?
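(a couple of quick VM sanity checks via rdctl; assumes the usual busybox tools are present in the VM image)
```
# Confirm the VM answers at all
$ rdctl shell uname -a

# Check whether a k3s process is actually running inside the VM
$ rdctl shell ps aux | grep k3s
```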
q
sigh, it was
let's try this again...
ok, so, for me, restarting RD for the nth time left me w/ a working `kubectl version`... but i forced my coworker to restart repeatedly and that wasn't enough... when he returns, we can poke some more
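(if clicking through the GUI gets old, the restart loop can be scripted; these rdctl subcommands should cover it)
```
# Stop the Rancher Desktop VM and app, then start it again
$ rdctl shutdown
$ rdctl start
```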
w
is this with the qemu backend or using vz?
q
i'm using vz -- i'll have to check his
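(to check which backend a given install is using without clicking through preferences; the exact JSON field name is an assumption and may differ by version)
```
# Dump current settings as JSON and look for the virtual machine section
$ rdctl list-settings | grep -i -A3 virtualMachine
```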
w
maybe something is still wonky w/ vz support. I haven't spent enough time with vz enabled to figure out if our internal controls have a field day compared to qemu
q
he's using qemu
w
guessing logs would be needed for k3s and maybe the vm if it's been shaky
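(on macOS the logs land here, assuming the default location; file names may vary a bit by version)
```
# Rancher Desktop log directory on macOS
$ ls ~/Library/Logs/rancher-desktop/

# the k3s log is the interesting one for a flaky cluster
$ tail -f ~/Library/Logs/rancher-desktop/k3s.log
```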
q
ok, doing a factory reset got his RD working
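(for reference, the CLI equivalent; note it wipes the VM and all cluster data)
```
# Reset Rancher Desktop to a clean state
$ rdctl factory-reset
```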