Configuring Open vSwitch/Neutron Agent to Use Existing Provider VLANs

We recently stood up an OpenStack (Ocata) controller and a Hyper-V compute node running the Cloudbase Nova agent and the Open vSwitch 2.6.1 Neutron agent (installed via the Cloudbase Open vSwitch installer).

Prior to standing up OpenStack in the past few weeks, we had a number of customers hosted and managed manually on our network, each with their own dedicated VLAN. I am attempting to create a provider network for each customer, configured to use that customer's existing VLAN. For example:

  • Customer A is on VLAN 150
  • Customer B is on VLAN 200
  • Customer C is on VLAN 250
  • etc.

We want to create a tenant in OpenStack for each customer, create a network for each customer's VLAN, assign it to the corresponding tenant, and let customers launch their own VMs on their dedicated VLANs on our network (with IPs from their subnet pool assigned to the VMs automatically).
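
For reference, a rough sketch of how we plan to create each customer's project, network, and subnet (the project/network/subnet names and the 192.168.150.0/24 range below are placeholders for Customer A; the physical network label has to match the bridge_mappings label in the agent config further down):

openstack project create customer-a

openstack network create customer-a-net \
  --project customer-a \
  --provider-network-type vlan \
  --provider-physical-network provider \
  --provider-segment 150

openstack subnet create customer-a-subnet \
  --network customer-a-net \
  --subnet-range 192.168.150.0/24 \
  --dhcp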

We have configured only ONE interface for use with Open vSwitch, and that is the interface on which OpenStack should be attaching VMs. However, when OpenStack spins up a VM and we look at the VM's settings in Hyper-V, the network adapter is connected to the wrong Hyper-V VMSwitch, and no VLAN is enabled or assigned.

Per the info below, the bridge br-back_end was created in OVS on the compute node, and port "Ethernet 2 Farm Nic" was added to that bridge. This is the physical interface on which Customer A needs to communicate on VLAN 150, and Back-end zone is the Hyper-V virtual switch bound to that NIC.
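
(For completeness, the bridge and physical port were created with ovs-vsctl, roughly along these lines; exact commands may have differed slightly:)

ovs-vsctl add-br br-back_end
ovs-vsctl add-port br-back_end "Ethernet 2 Farm Nic"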

We launched a VM for Customer A (which should be on VLAN 150). Per the info included below, when OpenStack launched the VM, interface/port "8e0a75d9-14a3-48d2-8526-cfdee2dc3cd8" was created automatically on the br-int OVS bridge. However, the instance's settings in Hyper-V show that it is using "8e0a75d9-14a3-48d2-8526-cfdee2dc3cd8" on the Web zone VMSwitch. VLAN is not enabled in the adapter settings for the instance in Hyper-V, and the VM (with DHCP turned on) is not picking up an IP address. (Screenshot of the Hyper-V adapter settings for the instance.)

Does anyone know why the Web zone VMSwitch is being assigned to the instance in Hyper-V, and why VLAN is not enabled for the adapter (with no VLAN assigned)? Per the output below, the VM needs to be on VLAN 150 on the Ethernet 2 Farm Nic physical interface. The Back-end zone VMSwitch uses the Ethernet 2 Farm Nic physical interface, while the Web zone VMSwitch uses a different physical interface that we do not want this VM to use. So I'm not sure why the adapter for the instance is being attached to the Web zone VMSwitch...
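
In case it helps, this is roughly how we are checking the switch and VLAN assignment from the Hyper-V side (the VM name below is a placeholder for the actual instance name):

# Which VMSwitch the instance's adapter is connected to
Get-VMNetworkAdapter -VMName instance-00000001 | Select-Object VMName, SwitchName

# Whether any VLAN is set on that adapter (OperationMode shows Untagged when none is assigned)
Get-VMNetworkAdapterVlan -VMName instance-00000001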

Does anyone have any ideas? Any thoughts or guidance would be GREATLY appreciated.

Here is our ovs-vsctl show output from the Hyper-V compute node:

PS C:\Windows\system32> ovs-vsctl show
c1a78073-23dd-4cfc-b743-063d9a654440
Bridge br-int
    fail_mode: secure
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port int-br-back_end
        Interface int-br-back_end
            type: patch
            options: {peer=phy-br-back_end}
    Port "8e0a75d9-14a3-48d2-8526-cfdee2dc3cd8"
        Interface "8e0a75d9-14a3-48d2-8526-cfdee2dc3cd8"
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    fail_mode: secure
    Port br-tun
        Interface br-tun
            type: internal
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-back_end
    fail_mode: secure
    Port br-back_end
        Interface br-back_end
            type: internal
    Port "Ethernet 2 Farm Nic"
        Interface "Ethernet 2 Farm Nic"
    Port phy-br-back_end
        Interface phy-br-back_end
            type: patch
            options: {peer=int-br-back_end}
PS C:\Windows\system32>

Here is a list of the NICs on the Hyper-V compute node:

PS C:\Windows\system32> get-netadapter

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
br-tun                    Hyper-V Virtual Ethernet Adapter #4          78 Disabled     00-15-5D-63-47-04        10 Gbps
br-int                    Hyper-V Virtual Ethernet Adapter #3          76 Disabled     00-15-5D-63-47-03        10 Gbps
Ethernet 8 Cluster Wit... Broadcom NetXtreme Gigabit Ethernet #4       19 Up           00-0A-F7-53-7C-67         1 Gbps
Ethernet 7                Broadcom NetXtreme Gigabit Ethernet #3       18 Disconnected 00-0A-F7-53-7C-66          0 bps
Ethernet 6 SAN 5.6        Broadcom NetXtreme Gigabit Ethernet #2       17 Up           00-0A-F7-53-7C-65         1 Gbps
Ethernet 4 ProxyFarm Nic  Broadcom NetXtreme Gigabit Ethernet          15 Up           00-0A-F7-53-7C-64         1 Gbps
br-back_end               Hyper-V Virtual Ethernet Adapter #2          32 Up           00-15-5D-63-47-02        10 Gbps
Ethernet  WebFarm Nic     Broadcom BCM5709C NetXtreme II Gi...#39      12 Up           00-24-E8-75-4A-C0         1 Gbps
Ethernet 2 Farm Nic       Broadcom BCM5709C NetXtreme II Gi...#40      13 Up           00-24-E8-75-4A-C2         1 Gbps
Ethernet 3 Backup Nic     Broadcom BCM5709C NetXtreme II Gi...#41      14 Up           00-24-E8-75-4A-C4         1 Gbps
Ethernet 5 MGT            Broadcom BCM5709C NetXtreme II Gi...#38      16 Up           00-24-E8-75-4A-BE         1 Gbps

Here is the list of Hyper-V VM switches on the Hyper-V compute node:

PS C:\Windows\system32> get-vmswitch

Name          SwitchType NetAdapterInterfaceDescription
----          ---------- ------------------------------
Back-end zone External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #40
Proxy zone    External   Broadcom NetXtreme Gigabit Ethernet
Web zone      External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #39

Here is our neutron_ovs_agent.conf:

[DEFAULT]
verbose=true
debug=false
control_exchange=neutron
policy_file=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
#rpc_backend=neutron.openstack.common.rpc.impl_kombu
rpc_backend=rabbit
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=PaSsW0Rd
logdir=C:\OpenStack\Log\
logfile=neutron-ovs-agent.log
[agent]
tunnel_types = vxlan
enable_metrics_collection=false
[SECURITYGROUP]
enable_security_group=false
[ovs]
local_ip = 14.14.14.2
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vlan
#enable_tunneling = true
enable_tunneling = false
of_interface = ovs-ofctl 
ovsdb_interface = vsctl
bridge_mappings = provider:br-back_end
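
For context, the ML2 configuration on the controller maps the same physical network label to a VLAN range; a minimal sketch of the relevant ml2_conf.ini section (the 150:250 range is an assumption based on our customer VLAN IDs):

[ml2_type_vlan]
# the label before the colon must match the bridge_mappings label above
network_vlan_ranges = provider:150:250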

Here is the network we created for Customer A on OpenStack (their VLAN is 150):

~]$ openstack network show mgmt-net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2017-05-04T22:16:25Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | ce4fda94-7924-4dda-88b7-ed76a890fde3 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| mtu                       | 1500                                 |
| name                      | mgmt-net                             |
| port_security_enabled     | True                                 |
| project_id                | f0d140af365f4c65b8144f9d5a784bd8     |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 150                                  |
| qos_policy_id             | None                                 |
| revision_number           | 5                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | db2924a3-554d-4367-acd8-2bc15b547ddb |
| updated_at                | 2017-05-04T22:31:40Z                 |
+---------------------------+--------------------------------------+
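
We can also pull the Neutron view of the instance port to see how it was bound, with something like the following (assuming the OVS port name shown above is the Neutron port ID):

openstack port show 8e0a75d9-14a3-48d2-8526-cfdee2dc3cd8 \
  -c binding_host_id -c binding_vif_type -c binding_vif_details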

Here is the nova.conf from the Hyper-V compute node:

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
compute_driver=compute_hyperv.driver.HyperVDriver
instances_path=C:\ClusterStorage\Volume1\VMs
use_cow_images=true
flat_injected=true
mkisofs_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\mkisofs.exe
verbose=false
allow_resize_to_same_host=true
running_deleted_instance_poll_interval=120
resize_confirm_window=5
resume_guests_state_on_host_boot=true
rpc_response_timeout=1800
lock_path=C:\OpenStack\Log\
vif_plugging_is_fatal=false
vif_plugging_timeout=60
rpc_backend=rabbit
log_dir=C:\OpenStack\Log\
log_file=nova-compute.log
force_config_drive=True
[placement]
auth_strategy=keystone
auth_plugin=v3password
auth_type=password
auth_url=http://controller:35357/v3
project_name=service
username=placement
password=[redacted]
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
[notifications]
[glance]
api_servers=http://controller:9292
[hyperv]
vswitch_name=Web zone
limit_cpu_features=true
config_drive_inject_password=true
qemu_img_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom=true
dynamic_memory_ratio=1
enable_instance_metrics_collection=false
[rdp]
enabled=true
html5_proxy_base_url=http://controller:8000/
[neutron]
url=http://controller:9696
auth_strategy=keystone
project_name=service
username=neutron
password=[redacted]
auth_url=http://controller:35357/v3
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
auth_plugin=v3password
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=[redacted]

Edit: added nova.conf from the compute node.