# harvester
w
My theory is that, for some reason, the traffic outbound from Harvester is sent through some internal ingress. However, when looking at the log from the ingress controller, no error messages are found.
a
Did you set any `http_proxy` in Harvester? And did you set the `ssl-certificates` / additional CA?
w
No, this is a clean installation without any configuration. No proxy and no CA certs.
If I SSH to any of the nodes I can ping the internet, run curl, etc.
Also, if I start a VM it can reach the internet too.
w
I exec'd into the harvester pod in the harvester-system namespace and found that it has the same problem when trying to curl an external HTTPS URL.
This is the log from the pod
(the URL above gives the same error message)
a
If your network is using an `http-proxy`, then you need to configure it in Harvester.
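For anyone reading along who actually does sit behind a proxy: Harvester exposes a cluster-wide `http-proxy` setting that can be edited with kubectl. A minimal sketch, assuming the JSON value shape from the Harvester docs (verify the field names against your version; `proxy.example:3128` and the CIDRs are placeholders):

```shell
# Edit the cluster-wide proxy setting (assumption: the "http-proxy"
# settings object exists, as documented for Harvester).
kubectl edit settings.harvesterhci.io http-proxy

# Set its value to a JSON string along these lines; noProxy should
# include the node/cluster networks so internal traffic bypasses the proxy:
# value: '{"httpProxy":"http://proxy.example:3128",
#          "httpsProxy":"http://proxy.example:3128",
#          "noProxy":"localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16"}'
```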
w
I understand, but I'm not using a proxy. The nodes have no problem reaching the internet, but Harvester is not able to communicate.
a
How long has the cluster been running since installation?
w
This problem occurs directly after installing the cluster, and I have tried reinstalling the cluster with both 1.2.0 and 1.1.2.
a
Please run
`sudo -i cat /oem/harvester.config`
after SSHing into a Harvester node.
And there is an *ingress* pod; please check its log for the last few errors/warnings.
w
This is the contents of the config
```
sudo -i cat /oem/harvester.config
schemeversion: 1
serverurl: ""
token: ion-travesty-shaded
os:
  sshauthorizedkeys:
    ... redacted ...
  writefiles: []
  hostname: harvester-6wzrr
  modules:
  - kvm
  - vhost_net
  sysctls: {}
  ntpservers:
  - sth1.ntp.se
  - sth2.ntp.se
  - sth3.ntp.se
  - sth4.ntp.se
  dnsnameservers:
  - 1.1.1.1
  - 1.0.0.1
  wifi: []
  password: agenda-mascot-plane
  environment: {}
  labels: {}
install:
  automatic: true
  mode: create
  managementinterface:
    interfaces:
    - name: eno1
      hwaddr: d0:43:1e:9b:ec:81
    method: dhcp
    ip: ""
    subnetmask: ""
    gateway: ""
    defaultroute: false
    bondoptions: {}
    mtu: 0
    vlanid: 0
  vip: 172.30.10.99
  viphwaddr: ""
  vipmode: static
  forceefi: false
  device: /dev/sda
  configurl: http://redacted
  silent: false
  isourl: http://redacted
  poweroff: false
  noformat: false
  debug: false
  tty: tty1
  forcegpt: true
  forcembr: false
  datadisk: ""
  webhooks: []
runtimeversion: v1.24.11+rke2r1
rancherversion: v2.6.11
harvesterchartversion: 1.1.2
monitoringchartversion: 100.1.0+up19.0.3
systemsettings: {}
loggingchartversion: 100.1.3+up3.17.7
```
The ingress controller seems to be working OK with the incoming requests (no cert warnings or errors found when grepping).
Thank you so much for helping me. I'm really stuck here. I want to use Harvester, but this is a problem I've been trying to fix for a couple of weeks.
a
As a workaround, you may try *downloading* the image to your PC, then creating the image via *upload* from the downloaded file.
Please also get
`kubectl logs -n kube-system rke2-ingress-nginx-controller-9zrdz | grep x509`
Also need
`kubectl get settings.harvesterhci.io -A`
w
Yes, it works if I download it to my internal network, but not from the internet.
`grep x509` returns zero results.
Do you have any more ideas on what to try?
Found the problem: the DHCP had a search domain that confused the nodes and resolved the wrong IP.
👍 1
a
cool
Do you mean your local DHCP server had some errors?
w
My DHCP config had a local search domain, which I had changed from the default .local to an actual domain name, and that made everything strange. That was the culprit.

The way I discovered what was going on: I tried to narrow down the problem from within the harvester pod, where curling 1.1.1.1 worked, but when trying any other web address I always got the same IPs in response. I found that these IPs belonged to another ISP where we have some other servers, so I understood it had something to do with DNS resolution, not routing or the network (strangely though, nslookup and dig all resolved OK). Then I tried to narrow down where we had made configurations specific to us (since the Harvester installation was a clean one), and eventually found the DHCP config where this search domain prevented the pods from accessing the internet.
🙌 1
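The failure mode described above can be sketched in isolation. Kubernetes pods typically get `options ndots:5` plus the DHCP-supplied `search` domain in their /etc/resolv.conf, so a name like `registry.suse.com` (fewer than five dots) is first looked up with the search domain appended; if a wildcard record in that domain answers, the pod talks to the wrong server, while nslookup/dig against the literal name look fine. A minimal sketch of that candidate ordering, assuming glibc-style resolver behavior (`candidate_names` and `corp.example` are illustrative, not from Harvester):

```shell
# List the names a glibc-style resolver would try, in order.
# ndots:5 is the usual Kubernetes pod default.
candidate_names() {
  name=$1; search=$2; ndots=${3:-5}
  case $name in
    *.) printf '%s\n' "${name%.}"; return ;;  # trailing dot: absolute, no search
  esac
  dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')
  if [ "$dots" -lt "$ndots" ]; then
    # Fewer dots than ndots: search-domain candidates are tried FIRST.
    for d in $search; do printf '%s.%s\n' "$name" "$d"; done
    printf '%s\n' "$name"
  else
    printf '%s\n' "$name"
    for d in $search; do printf '%s.%s\n' "$name" "$d"; done
  fi
}

candidate_names registry.suse.com corp.example
```

With a wildcard DNS record under the search domain, the first candidate resolves and the real name is never queried, which matches the "same IPs for every web address" symptom above.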