# harvester
s
I have applied a NAD like this:
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-conf",
    "type": "bridge",
    "bridge": "harvester-br0",
    "vlan": 1,
    "ipam": {
     "type": "host-local",
     "subnet": "192.168.1.0/24",
     "dataDir": "/mnt/cluster-ipam"
    }
  }'
```
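To attach a test Pod to this NAD, the standard Multus annotation is used; a minimal sketch (the pod name and image here are just placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bridge-test                          # placeholder name
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-conf # reference the NAD by name
spec:
  containers:
    - name: test
      image: busybox                         # any image works for the test
      command: ["sleep", "3600"]
```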
If I create a test Pod with the proper annotation (like the sketch above), then I see that `harvester-br0` will be created on the host. However, `harvester-network-controller` still ends up in a `CrashLoopBackOff` because it panics:
```
I0731 05:39:35.654293 1 main.go:112] Starting network controller with 2 threads.
I0731 05:39:35.654332 1 main.go:117] Starting network controller in namespace: harvester-system.
W0731 05:39:35.654525 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2022-07-31T05:39:38Z" level=info msg="Do not update the existing CRD NodeNetwork"
time="2022-07-31T05:39:39Z" level=info msg="Do not update the existing CRD NetworkAttachmentDefinition"
I0731 05:39:41.453235 1 monitor.go:52] Start Monitor
time="2022-07-31T05:39:41Z" level=info msg="Starting <http://k8s.cni.cncf.io/v1|k8s.cni.cncf.io/v1>, Kind=NetworkAttachmentDefinition controller"
I0731 05:39:41.659095 1 controller.go:68] nad configuration bridge-conf has been changed: { "cniVersion": "0.3.1", "name": "bridge-conf", "type": "bridge", "bridge": "harvester-br0", "vlan": 1, "ipam": { "type": "host-local", "subnet": "192.168.1.0/24", "dataDir": "/mnt/cluster-ipam" } }
time="2022-07-31T05:39:41Z" level=info msg="Starting <http://network.harvesterhci.io/v1beta1|network.harvesterhci.io/v1beta1>, Kind=NodeNetwork controller"
I0731 05:39:41.659831 1 controller.go:95] node network configuration harvester-pool1-80b4dfff-khqdn-vlan has been changed, spec: {Description: NodeName:harvester-pool1-80b4dfff-khqdn Type:vlan NetworkInterface:}
I0731 05:39:41.660331 1 controller.go:84] ignore link not found error, details: slave of harvester-br0 not found
I0731 05:39:41.661060 1 controller.go:172] ignore link not found error, details: slave of harvester-br0 not found
E0731 05:39:41.662099 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 440 [running]:
github.com/harvester/harvester-network-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x16db840?, 0x2ba40c0})
/go/src/github.com/harvester/harvester-network-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x99
github.com/harvester/harvester-network-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000291ed0?})
/go/src/github.com/harvester/harvester-network-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x16db840, 0x2ba40c0})
```
g
I will need to check the code, and I have not tried the app-mode install in a while, but the network pod is trying to find a slave interface for `harvester-br0`
did you specify the network interface on the nodes to use for VLANs? it is likely to be the management interface
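from your log, the NodeNetwork for the node has an empty `NetworkInterface`. what it needs to end up as is roughly this -- just a sketch, the `nic` field name and values are inferred from your log output, so double-check against the actual CRD:
```yaml
apiVersion: network.harvesterhci.io/v1beta1
kind: NodeNetwork
metadata:
  name: harvester-pool1-80b4dfff-khqdn-vlan
  namespace: harvester-system
spec:
  nodeName: harvester-pool1-80b4dfff-khqdn
  type: vlan
  nic: eth1   # assumption: the host NIC to uplink to harvester-br0
```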
s
I was starting with the prerequisites here: https://docs.harvesterhci.io/v1.0/dev/dev-mode/ . If I can get a working Digital Ocean dev deployment, I am happy to contribute back whatever specifics are needed, as it would be cool to get this working when bare metal isn’t available.
Where do I specify the network interface in the nodes for vlans?
g
so after you installed harvester.. you need to go to the Hosts page and edit the config; under Networks you will find what interface to use
s
Ahh… I guess I haven’t gotten to that part since I got stuck during the Installation phase on step #9 of https://docs.harvesterhci.io/v1.0/dev/dev-mode/ — I was waiting for everything to look good and have been troubleshooting this crash in the network-controller
But maybe I can expose the UI and do that… although when I run `kubectl -n kube-system edit cm kubevip` it says there is no cm called kubevip in that namespace
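Maybe it lives somewhere else? This is plain kubectl, nothing Harvester-specific:
```bash
# list ConfigMaps in every namespace and filter for anything vip-related
kubectl get configmaps -A | grep -i vip
```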
g
i suspect the error is because of that
if you can access the harvester console, i'd try and get in there and configure the interface
i will look at the code too
s
OK thanks! For the “Expose Harvester UI” portion of the dev Installation instructions, do you know where I can find the kubevip cm?
And the VIP_IP and VIP_NIC variables… should those be a Digital Ocean elastic IP?
g
yeah, the variables in the helm chart
basically the service will be bound to that address
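so when installing/upgrading the chart it would be something like this -- the value keys below are placeholders, check the chart's values.yaml for the real names:
```bash
# hypothetical value keys -- look up the real names in the Harvester chart's values.yaml
# VIP_IP:  the floating/elastic IP the exposed service binds to
# VIP_NIC: the host NIC that should carry that IP
helm upgrade --install harvester ./deploy/charts/harvester -n harvester-system \
  --set vip.ip=203.0.113.10 \
  --set vip.nic=eth0
```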
s
Before I deployed the Harvester helm chart I created a Digital Ocean elastic IP and bound it to the host VM. I used that for VIP_IP. I didn’t really know what to use for VIP_NIC so I used eth0?
g
i am not entirely sure.. i have not tried the dev mode install, but i suspect that should be the one
s
When Rancher deploys RKE2 to Digital Ocean using the Node driver, it asks for networking options, which can be calico+multus, canal+multus, or cilium+multus. I chose calico+multus and had to change the network controller Deployment manifest to add some args to use calico instead of the default, which is flannel.
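(For the record, the edit itself was roughly this; the calico-specific args are version-dependent, so I won't pretend to remember the exact flag names:)
```bash
# open the network controller Deployment and add the calico args to the container spec
# (exact flag names depend on the controller version)
kubectl -n harvester-system edit deployment harvester-network-controller
```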
g
the vip is just bound to the loadbalancer; it should be the interface used by the cni
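a quick way to see which interface that is on the node (plain iproute2):
```bash
# list interfaces with their addresses in brief form
ip -br addr show
# show which NIC carries the default route (usually the management interface)
ip route get 8.8.8.8
```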
s
Not sure what the interface used for cni is… I have calico as the management CNI and then multus as well… This DO host VM gets an eth0 and an eth1
If I change the VLAN network setting to Enabled, I don’t have any options in the NIC drop-down…
g
i will need to launch a cluster and test it out now.. i don't have an answer as to why
any chance you could please generate a support bundle? I can try and see if there is anything useful in there
s
OK thanks for the help!
Trying to generate a support bundle… can’t tell if it is stuck at 0% or is just taking a while, but if it does complete, I will post it. Do you prefer it in a DM?
Doesn’t look like it is making any progress. Generally, though, I am just looking for a way to get Harvester working in dev mode on a nested-virt provider like DO. If you end up being able to get that to work, I would love to hear what you had to do beyond what is on the current dev mode page. Thanks @great-bear-19718!
g
sure.. will do
yeah please DM me
s
@great-bear-19718 I wonder if my issues getting dev mode install working on Digital Ocean per https://docs.harvesterhci.io/v1.0/dev/dev-mode/ are because my Rancher-provisioned Digital Ocean Droplet is using Ubuntu 22.04 LTS instead of 18.04 or 20.04. It is a single-node RKE2 cluster with Harvester. I’ll try re-deploying and report back…