VXLAN port not open in Hyper-V
Hi,
I'm trying to get instances on a Windows Server 2016 Datacenter-based Hyper-V compute node to communicate with instances on other hypervisors using Open vSwitch and neutron-ovs-agent over VXLAN. My other hypervisors run CentOS/KVM. Instances on the Linux machines can communicate just fine, but I can't get the ones running on Hyper-V to communicate with the world outside the hypervisor.
The setup runs two controller nodes (10.60.11.21 and 10.60.11.22 with a shared VIP 10.60.11.201) with the L3-agent running the router on .21. In addition I have two separate hypervisors installed, one KVM (10.60.11.23) and one Hyper-V (10.60.11.25).
The Hyper-V host runs three VMs: one CirrOS (192.168.0.7) and two Windows Server instances (one with 192.168.0.12 and one trying to use DHCP).
I have created a Hyper-V VMswitch named "external" and enabled the Open vSwitch extension. The VMswitch is connected to a physical interface named "Tenant" (without an IP). I have also manually created an Open vSwitch bridge named "br-ex" and connected the "Tenant" port above to it. The hypervisor also has a Management interface with an IP (10.60.11.25) and connectivity to the other OpenStack nodes; this is the interface that should be used for the VXLAN traffic.
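For reference, the commands behind that setup look roughly like the sketch below. The extension display name and the exact way the physical "Tenant" port gets attached to br-ex can differ between OVS-for-Hyper-V versions, so treat this as a sketch rather than a recipe:

```
# Hyper-V side: create the "external" VMswitch on the IP-less "Tenant" NIC
# and enable the Open vSwitch forwarding extension on it.
New-VMSwitch -Name external -NetAdapterName Tenant
Enable-VMSwitchExtension -VMSwitchName external -Name "Open vSwitch Extension"

# OVS side: create br-ex and attach the Tenant port to it.
ovs-vsctl.exe add-br br-ex
ovs-vsctl.exe add-port br-ex Tenant
```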
The VMs on Hyper-V can ping themselves and other VMs on the same Hyper-V host. They cannot access the router, the DHCP server on the controller node, or VMs on other hypervisors.
I think the problem has something to do with Open vSwitch not opening a listening socket for VXLAN (udp/4789); at least `netstat -na | find "4789"` shows nothing. I can also see outgoing flood traffic from br-tun towards each VXLAN port according to `ovs-ofctl dump-flows br-tun`, but I don't see that traffic in Wireshark or on the incoming br-tun VXLAN ports of the other hypervisors. My guess is that these are outgoing ARP requests that for some reason never get sent out on the physical interface.
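For reference, these are the checks that statement is based on, run on the Hyper-V host (I'm using Select-String here since find.exe needs extra quoting from PowerShell):

```
# Nothing shows up listening on the VXLAN port:
netstat -na | Select-String 4789

# Flood flows on the tunnel bridge, where I can see the outgoing traffic being counted:
ovs-ofctl.exe dump-flows br-tun

# Per-port rx/tx counters, to compare what leaves br-tun with what the other side receives:
ovs-ofctl.exe dump-ports br-tun
```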
If I try to connect to an instance on the Hyper-V host from the outside (10.20.150.115), I see the incoming TCP SYN on the Hyper-V host (but not inside the VM):
```
Frame 1195965: 128 bytes on wire (1024 bits), 128 bytes captured (1024 bits) on interface 0
Ethernet II, Src: HewlettP_70:57:80 (ac:16:2d:70:57:80), Dst: HewlettP_70:a9:d0 (ac:16:2d:70:a9:d0)
Internet Protocol Version 4, Src: 10.60.11.21, Dst: 10.60.11.25
User Datagram Protocol, Src Port: 44982, Dst Port: 4789
Virtual eXtensible Local Area Network
Ethernet II, Src: fa:16:3e:e2:06:b1 (fa:16:3e:e2:06:b1), Dst: fa:16:3e:9e:f7:c6 (fa:16:3e:9e:f7:c6)
Internet Protocol Version 4, Src: 10.20.150.115, Dst: 192.168.0.12
Transmission Control Protocol, Src Port: 53839, Dst Port: 22, Seq: 0, Len: 0
```
I.e ...
Hi, thanks for adding the detailed information about your setup. In short, the problem is that you are missing flows for br-ex, which means it will drop all the packets. A simple way to fix this is: `ovs-ofctl del-flows br-ex` followed by `ovs-ofctl add-flow br-ex actions=normal`.
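The same two commands, annotated:

```
# Clear whatever flows are currently installed on br-ex:
ovs-ofctl.exe del-flows br-ex

# Add a single catch-all flow so br-ex forwards like a regular learning switch
# instead of dropping everything:
ovs-ofctl.exe add-flow br-ex actions=normal
```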
Please remove the following lines from your neutron config: `vxlan_group = 239.1.1.1` and `bridge_mappings = physnet1:br-ex`. Try to follow the steps described in the blog post https://cloudbase.it/open-vswitch-2-5-hyper-v-part-1/ . That should get you up to speed. Thanks, Alin.
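In other words, the agent config on the Hyper-V node should end up without these two entries (the section names below follow the usual openvswitch_agent.ini layout and are an assumption; remove the lines wherever they appear in your file):

```
[ovs]
# bridge_mappings = physnet1:br-ex    <- remove
[agent]
# vxlan_group = 239.1.1.1             <- remove
```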
Thanks for your quick response. I tried removing `vxlan_group = 239.1.1.1` and `bridge_mappings = physnet1:br-ex`, but it still does not open a UDP/4789 listening port for VXLAN. I suppose that is needed?! Any idea what could be causing this?
I also tried adding the normal flow to br-ex, and I no longer get any drops there. I will try to redo the installation according to the URL provided when I'm back in the office and let you know the results.
Ok, did your prior env work though? You won't see anyone listening on the UDP port; we sniff the packets directly in the datapath.
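If you want to confirm that the VXLAN tunnel ports are really there despite the missing listener, this is enough:

```
# The vxlan ports on br-tun, with their remote_ip options, should show up here:
ovs-vsctl.exe show

# ...and the corresponding datapath ports here:
ovs-dpctl.exe show
```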
The Linux hypervisors work well with the controllers; instances launched on them can be reached from other VMs and from the outside without any problems. Do you know why I get ICMP port unreachable messages from the Hyper-V host for all incoming VXLAN traffic?
The Hyper-V host is new and has not been part of this OpenStack environment before. It's the first time we have tried to bring Hyper-V into the system, so we have probably made some basic mistake setting it up.
It depends a lot on your setup. Mind posting the output of `ovs-vsctl show`, `ovs-dpctl show`, `ovs-ofctl dump-flows br-tun`, `ovs-ofctl dump-flows br-ex`, `get-netadapter`, `ipconfig`, and `route print -4`?
Before adding the br-ex normal flow you suggested:
http://pastebin.com/S0KQqQ0f
http://pastebin.com/SrrWxBsX
http://pastebin.com/BCAsUc4r
http://pastebin.com/ABijLVBY
http://pastebin.com/9d7tNMQq
http://pastebin.com/Yri4Beih
http://pastebin.com/sBtc1M76
http://pastebin.com/Fy7DTfe7
And the flows after the flow was added for br-ex: http://pastebin.com/3TyMdt28
The Hyper-V VM now runs on internal IP 192.168.0.15 (external 10.60.12.156) and MAC FA:16:3E:54:CC:E7.
As you will see, I am trying to use my Management interface (10.60.11.25) as the source of my VXLAN tunnels; that's the way our working Linux computes are set up. The Management interface is not the interface that is VMswitch-enabled and connected to br-ex (the VMswitch-enabled interface is Tenant).
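A quick way to double-check which Windows adapter actually owns that address is plain PowerShell, nothing OVS-specific:

```
# Which adapter holds the VXLAN source address (10.60.11.25)?
Get-NetIPAddress -IPAddress 10.60.11.25 | Select-Object InterfaceAlias, IPAddress

# Overview of all adapters (Management, Tenant, the external VMswitch, br-ex, ...):
Get-NetAdapter | Select-Object Name, InterfaceDescription, Status
```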
OK, hope that is the problem. I will try to change it when I'm on site on Monday and let you know.
Thank you!
Thanks! It works after enabling the br-ex virtual interface in Windows, setting the VXLAN IP on that interface, physically recabling the interface to the "tunnel switch", and adding the normal flow via `ovs-ofctl add-flow br-ex actions=normal`.
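For anyone hitting the same thing, the Windows side of that change looks roughly like this (the address and prefix length are examples; use your own tunnel endpoint IP):

```
# br-ex shows up as a virtual network adapter in Windows once OVS creates it.
# Enable it and put the VXLAN/tunnel endpoint IP on it:
Enable-NetAdapter -Name br-ex
New-NetIPAddress -InterfaceAlias br-ex -IPAddress 10.60.11.25 -PrefixLength 24

# Then re-add the catch-all flow so br-ex forwards traffic:
ovs-ofctl.exe add-flow br-ex actions=normal
```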
Do you know why neutron-ovs-agent does not add the "normal" flow automatically? Is it possible to configure something to make it do so? Each time the ovs-vswitchd service is restarted, I currently have to add it again manually...
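As a stop-gap, a scheduled task that re-adds the flow at boot should do the trick; something along these lines (the task name and trigger are just examples, and it assumes ovs-ofctl.exe is on the PATH):

```
# Re-install the catch-all flow on br-ex every time the host starts:
$action  = New-ScheduledTaskAction -Execute "ovs-ofctl.exe" -Argument "add-flow br-ex actions=normal"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "ovs-br-ex-normal-flow" -Action $action -Trigger $trigger
```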