Hello Sirisha,
First of all, please use the ML2 plugin instead, by setting core_plugin in the [DEFAULT] section of /etc/neutron/neutron.conf. ML2 allows you to use multiple networking technologies at the same time: openvswitch, linuxbridge, and the Hyper-V L2 agent.
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
You will also have to make sure that the config file /etc/neutron/plugins/ml2/ml2_conf.ini contains the following:
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,hyperv
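Since tenant_network_types is set to vlan, ML2 also needs to know which VLAN ranges it may allocate from. A minimal sketch, assuming a physical network label physnet1 and the range 500:2000 (both are placeholders, adjust them to your environment):
[ml2_type_vlan]
network_vlan_ranges = physnet1:500:2000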
Also, make sure your OpenStack controller services q-svc and q-agt are started with both config files:
python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
python /usr/local/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
From the logs you posted, it seems the bridge br-int is missing on your OpenStack controller. Could you run the following command and post the results?
sudo ovs-vsctl show
The bridges br-ex, br-int, br-eth1 (assuming eth1 is the Guest network) should be present.
If not, run:
# if br-int is missing
sudo ovs-vsctl add-br br-int
# if br-eth1 is missing
sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1
# if br-ex is missing
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex EXTERNAL_INTERFACE
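The OVS agent also has to be told which bridge maps to which physical network, otherwise it will not wire br-eth1 into br-int. A sketch for the same ml2_conf.ini, reusing the assumed physnet1 label from above:
[ovs]
bridge_mappings = physnet1:br-eth1
Restart q-agt after changing this, so it can recreate its patch ports.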
Secondly, before the instance can receive an IP, the port associated with the instance must be bound. To check this, run:
# get INSTANCE_IP from: nova show INSTANCE_NAME
PORT_ID=`neutron port-list | grep $INSTANCE_IP | awk '{ print $2 }'`
neutron port-show $PORT_ID
The result must have the following fields:
admin_state_up | True
binding:host_id | WIN-...
binding:vif_type | hyperv
binding:vnic_type | normal
status | ACTIVE
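As a quick sketch, you can check just those fields at once:
neutron port-show $PORT_ID | grep -E 'admin_state_up|binding:|status'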
Finally, it would be useful to know which OpenStack release you are using, as well as the environment on your OpenStack controller and Hyper-V node.
UPDATE 1:
Can you try running:
neutron net-show $NETWORK_ID
where NETWORK_ID is the ID of the network in which you are trying to boot the new instance.
The field provider:network_type must be one of these values: vlan, flat, local, as those are the ones supported by Hyper-V.
If it is, then some logs would be useful: the Hyper-V compute node's neutron logs and the OpenStack controller's q-agt.log.
Also, it would be useful to see what agents are alive. Executing:
neutron agent-list
should list the registered agents and their status. At least the DHCP agent, the HyperV agent, and the Open vSwitch agent should be alive.
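As a quick sketch for spotting dead agents (the alive column shows :-) for live agents and xxx for dead ones):
neutron agent-list | grep xxx
# any output here means at least one agent is down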
UPDATE 2:
If the port is bound according to neutron, then we will have to dig deeper into the issue. First of all, we should check that the NIC has been connected correctly on Hyper-V. On the Hyper-V compute node, open PowerShell and execute:
Get-VMNetworkAdapterVlan -VMNetworkAdapterName $PORT_ID -ErrorAction Ignore
where $PORT_ID is the same port ID shown in neutron. It should display something like this:
VMName VMNetworkAdapterName Mode VlanList
------ -------------------- ---- --------
instance-00000089 3bfe7d09-073c-4ac6-9acd-b73b44a0a5f9 Access 500
If not, the Hyper-V neutron logs are really necessary to know what to do next.
If the results are correct, then there could be a number of possible issues:
Make sure that the DHCP agent is alive:
neutron agent-list
Make sure that the network you are creating the instance in has DHCP enabled:
neutron net-show $NET_ID
# get the SUBNET_ID from the subnets field, then check that enable_dhcp is True:
neutron subnet-show $SUBNET_ID
Make sure that the OpenStack controller's eth1 and the Hyper-V compute node's "external" network are on the same physical network.
Make sure that the OpenStack controller's eth1 is configured in promiscuous mode. For Ubuntu, in /etc/network/interfaces:
auto eth1
iface eth1 inet manual
    up ip link set eth1 up
    down ip link set eth1 down
and restart the networking service.
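For example, on Ubuntu with ifupdown (the service name is an assumption and varies by distro):
sudo service networking restart
# or put the running interface into promiscuous mode directly:
sudo ip link set eth1 promisc on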
If the above didn't solve the issue, we will have to see what happens to the network traffic. For debugging purposes, please download cirros.vhd. Here is the script to do so:
git clone https://github.com/cloudbase/ci-overcloud-init-scripts.git
glance image-create --property hypervisor_type=hyperv --name cirros-vhdx --disk-format vhd --container-format bare --file ci-overcloud-init-scripts/scripts/devstack_vm/cirros.vhdx
glance image-create --property hypervisor_type=hyperv --name cirros-vhd --disk-format vhd --container-format bare --file ci-overcloud-init-scripts/scripts/devstack_vm/cirros.vhd
and then use cirros-vhd as the boot image to create a new instance. After the instance has booted, you will need the instance's name and assigned IP, and also:
nova show $VM_NAME | grep "instance_name"
nova show $VM_NAME | grep "private network"
# also need the private network's DHCP IP
# $NET_ID is the instance's network ID.
sudo ip netns exec qdhcp-$NET_ID ifconfig
# the DHCP IP is the IP address on the tap device. In my case, 10.0.0.3
On the Hyper-V compute node, execute in PowerShell:
# $INSTANCE_NAME will be instance-xxxxxxxx
Get-VM -VMName $INSTANCE_NAME | Get-VMConsole
This will open a VM console. Log in, and execute:
ifconfig
# no assigned IP? then assign it manually (values from the OpenStack controller):
sudo ifconfig eth0 $ASSIGNED_IP netmask $NETWORK_NETMASK up
ping $DHCP_IP
# let it run.
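Optionally, you can also retry DHCP from inside the guest; cirros is busybox-based, so udhcpc should be available (eth0 is assumed to be the guest's only NIC):
sudo udhcpc -i eth0
# getting a lease here means the whole DHCP path works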
Now, we will track the traffic from the Hyper-V VM to the DHCP service. On the OpenStack controller, execute:
# both ICMP echo request and ICMP echo reply must be visible, for all commands.
sudo tcpdump -vv -eni eth1 icmp
sudo ip netns exec qdhcp-$NET_ID tcpdump -vv -ni $TAP_NAME
If on the second command you see ICMP echo request and reply, but on the first command you only see ICMP echo request, there might be some issues with the returning traffic. The same would happen to the DHCP requests: they get to the server, but the DHCP reply doesn't get back.
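To confirm that directly, the same tcpdump commands can watch the DHCP exchange instead of ICMP, on the standard DHCP ports (67 for the server, 68 for the client):
sudo tcpdump -vv -eni eth1 port 67 or port 68
sudo ip netns exec qdhcp-$NET_ID tcpdump -vv -ni $TAP_NAME port 67 or port 68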
If on the second command you see nothing, there might be some issues with how the bridges are created or how the OpenStack controller's neutron agent is configured. From what I can see from your sudo ovs-vsctl show output, some patch ports are missing:
Bridge "br-eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Bridge br-int
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
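These patch ports are normally created by the OVS agent itself once bridge_mappings is set, so restarting q-agt with the configuration above is the cleanest fix. As a sketch, the br-int/br-eth1 pair can also be created by hand, using the names from the output above:
sudo ovs-vsctl add-port br-int int-br-eth1 -- set interface int-br-eth1 type=patch options:peer=phy-br-eth1
sudo ovs-vsctl add-port br-eth1 phy-br-eth1 -- set interface phy-br-eth1 type=patch options:peer=int-br-eth1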
Let me know if there is any news.
Best regards,
Claudiu Belu