
Hyper-V Compute

asked 2014-12-18 10:05:14 +0200 by chandra

I have a three-node OpenStack setup with two nodes running Devstack on Ubuntu and one node running Hyper-V.

Both Linux nodes are configured with Open vSwitch and VLANs, with the following config:

Q_PLUGIN=ml2
Q_ML2_PLUGIN_TYPE_DRIVERS="flat,vlan"
Q_ML2_TENANT_NETWORK_TYPE="vlan"
Q_ML2_PLUGIN_MECHANISM_DRIVERS="openvswitch,hyperv"
ENABLE_TENANT_VLANS=True
VLAN_INTERFACE=eth1
TENANT_VLAN_RANGE=2001:2999
FLAT_INTERFACE=eth1
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-eth1
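For comparison, I would expect these settings to end up as an ML2 configuration roughly like the following (a sketch of the relevant parts of the generated ml2_conf.ini, not the literal file; the [ovs] bridge mapping is inferred from PHYSICAL_NETWORK and OVS_PHYSICAL_BRIDGE above):

[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,hyperv

[ml2_type_vlan]
network_vlan_ranges = default:2001:2999

[ovs]
bridge_mappings = default:br-eth1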

All the communication between these two nodes is working fine and new instances are getting an IP address from the DHCP agent.

Coming to the third node, running Hyper-V, I have installed the compute service using the 2014.2.1 installer. The compute service is running perfectly: instances are scheduled to it properly and they start as well. The challenge is with the network: they are not getting IPs from DHCP.

I have logged into Hyper-V Manager on the node where these instances are running and found that the vSwitch selected during installation is not getting assigned to the instances' network adapters. If I add it manually, everything works perfectly.

What could be the reason for the virtual switch (myVSwitch is the virtual switch I created during installation) not getting associated with the new instances? Any ideas? My Hyper-V agent configuration is as follows:

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=192.168.0.10
rabbit_port=5672
rabbit_userid=guest
rabbit_password=devstack
log_dir=C:\OpenStack\Log\
log_file=neutron-hyperv-agent.log

[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:myVSwitch
enable_metrics_collection=false

[SECURITYGROUP]
firewall_driver=neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver
enable_security_group=false
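For reference, the vSwitch name used in physical_network_vswitch_mappings can be cross-checked against the switches actually defined on the host with the Hyper-V PowerShell module (a generic check, not something specific to the installer):

# list the vSwitches on the host and the physical adapter they are bound to
Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription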

Thanks & Regards Chandra


2 answers


answered 2014-12-18 14:08:00 +0200 by alexpilotti

Hi Chandra,

For comparison, this is the Neutron ML2 Devstack configuration that we use in our tests: https://github.com/cloudbase/openstac...

If the Devstack configuration is correct, here are the troubleshooting steps that we usually apply when hunting for networking issues.

The following steps require a VM booted with a single vNIC attached to the private network.
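For example, a minimal boot along these lines should do (the image, flavor and network id below are placeholders, adjust them to your environment):

# boot a test VM with a single vNIC on the private network
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=<private_network_id> test-vm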

nova show <vm_name>

Make sure that the vNIC is attached to the private network
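If you want a more focused view of the attached ports, this should also work (assuming a reasonably recent novaclient):

# list the ports, networks, IPs and MACs attached to the VM
nova interface-list <vm_name>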

neutron net-show <private_network_name>

Make sure that the network type is VLAN and that the segmentation id (vlan tag) is correct.
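To narrow the output down to just the relevant fields (the provider attributes are only visible with admin credentials):

# show only the provider attributes of the private network
neutron net-show <private_network_name> -F provider:network_type -F provider:physical_network -F provider:segmentation_id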

On the Hyper-V side:

Check if the Neutron agent assigned the VLAN tag to the VM's vNIC:

Get-VMNetworkAdapterVLAN instance-*
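If the tag turns out to be missing, setting it by hand on one instance is a quick way to confirm that the rest of the path works; the instance name and VLAN id below are placeholders, use the segmentation id returned by "neutron net-show":

# manually put the instance's vNIC in access mode on the expected VLAN
Set-VMNetworkAdapterVlan -VMName instance-00000001 -Access -VlanId 2001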

Check if your ethernet card driver allows VLAN traffic. This unfortunately depends on the card manufacturer.

For Intel cards, check this KB article: http://www.intel.com/support/network/...

We prepared this simple PowerShell script that checks the registry automatically: https://bitbucket.org/cloudbase/vlanf...

For other ethernet card manufacturers, you can refer to the info available here (look for the "Windows" section): http://wiki.wireshark.org/CaptureSetu...

Log in to your guest:

Get-VM instance-* | Get-VMConsole

Assuming that you didn't receive a DHCP address, manually assign the address returned by the "nova show" command executed previously, e.g.:

sudo ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up
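If the guest image does not ship ifconfig, the ip tool does the same job (same placeholder address as above):

# assign the address and bring the interface up with iproute2
sudo ip addr add 10.0.0.2/24 dev eth0
sudo ip link set eth0 up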

Ping the router address, e.g.:

ping 10.0.0.1

If you are getting replies, the network is OK, but the DHCP service needs troubleshooting.
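In that case, a first thing worth checking on the Devstack node is whether the DHCP namespace for the private network exists and holds an address; the network id placeholder below comes from "neutron net-show":

# list the network namespaces created by the neutron agents
ip netns list
# check the addresses assigned inside the DHCP namespace of the private network
sudo ip netns exec qdhcp-<network_id> ip addr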

Otherwise, assuming that you didn't get any ping reply, leave ping running so you can monitor the traffic in the following steps.

On the neutron L3 server (the Devstack node in your case):

In the following tcpdump command, replace eth1 with the name of the data interface, i.e. the interface connected to the network to which the Hyper-V vSwitch is also connected.

sudo tcpdump -i eth1 -vv -e -n

If there's no traffic, make sure the Hyper-V vswitch and the Linux adapter are connected to the right network.

If there is traffic, make sure that the 802.1Q tag is present in the output; otherwise it means that either the Hyper-V or the Linux ethernet card driver is stripping the tag.
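To confirm whether the tag survives, the same capture can be restricted to 802.1Q-tagged frames with the standard vlan filter:

# capture only frames that still carry an 802.1Q header
sudo tcpdump -i eth1 -vv -e -n vlan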

If this is the case, look at the card configuration:

sudo ethtool -k eth1

Disable VLAN offloading (note that -K is uppercase here):

sudo ethtool -K eth1 rxvlan off
sudo ethtool -K eth1 txvlan off

At this point, if you can see traffic flowing (typically unanswered ARP requests) but still no ping replies, it's time to look at the namespaces. I'll add the steps for this as soon as you can confirm what results you obtained from the above troubleshooting steps.

Please let me know if this helps.

Thanks!


answered 2014-12-20 06:18:32 +0200 by chandra

Thanks a lot for the detailed and insightful information.

As I mentioned above, the main problem is with the virtual adapter mapping applied to the instances by the Hyper-V compute node. How do we specify the VLAN numbers at the Hyper-V compute end, either at installation time or in the local configuration on the Hyper-V host? (From the controller end we already have a section in local.conf where the ranges are specified.)

For my problem, the adapter association issue was resolved by creating the virtual switch through the Hyper-V network manager, with the VLAN association already done, before the installation, and then selecting that switch while installing instead of choosing to create one during installation.

Now everything is working fine on the Hyper-V compute end as well. The only problem I have now is related to compute resources. Sometimes instance creation fails saying that no valid host was found, even when there are 5-6 vCPUs and 4-5 GB of RAM free. I am not sure if some configuration needs to be done at the Hyper-V compute end. Another problem is that even after deleting instances, with the console showing all the compute resources free, instance creation fails with the same message. When I restart everything on the Hyper-V compute side, things work normally again, but the same problem repeats on further instance deletions and creations.
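In case it helps to narrow this down, I can compare the scheduler's view of the node with what Hyper-V Manager shows, e.g. with the standard novaclient hypervisor commands (the hypervisor id below is a placeholder):

# list the hypervisors known to the scheduler and inspect their reported resources
nova hypervisor-list
nova hypervisor-show <hypervisor_id>
nova hypervisor-stats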

Are there any specific cleanup settings needed to have proper cleanup done, either at the Hyper-V compute end or at the Devstack controller end?

Thanks & Regards Chandra



Stats

Asked: 2014-12-18 10:05:14 +0200

Seen: 1,280 times

Last updated: Dec 20 '14