HyperV-Agent Error: PortBindingFailed

I am attempting to configure OpenStack to launch instances on a specific VLAN on one of the interfaces of my compute nodes. The Hyper-V VM switch is named 'Back-end zone', and I want to launch VMs on VLAN 150 on that switch.
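
For reference, the provider network was created with values like the following (this is a reconstruction from the network show output further down, not a paste of the exact command):

# openstack network create mgmt-net2 \
    --provider-network-type vlan \
    --provider-physical-network provider \
    --provider-segment 150 \
    --external --share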

I have installed the latest Cloudbase Nova driver for Ocata and am using the default Neutron Hyper-V agent.

Do I need to set the segment_id attribute on the subnet?
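
(My understanding is that segment_id only matters for routed, multi-segment provider networks, where it is chosen at subnet creation time, e.g.:

# openstack subnet create --network-segment <segment-uuid> ...

For a single-segment VLAN network like this one I would expect it to remain None, but please correct me if that's wrong.)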

I have also tried setting the network's router:external attribute to 'internal', but that doesn't seem to help.
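
For reference, I believe the equivalent CLI for that change is:

# openstack network set --internal c8321bc0-f38c-4943-988b-f298313ae1be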

I have included all the relevant information I could think of below. Please let me know if there is any additional information you would like to see; any help would be greatly appreciated.

Here is my network agent list on the controller:

# openstack network agent list
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| ID                                   | Agent Type     | Host          | Availability Zone | Alive | State | Binary                 |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| 1f512938-132e-47d3-b1a5-eb0be5aed61c | DHCP agent     | gcp-openstack | nova              | True  | UP    | neutron-dhcp-agent     |
| 46d88273-4245-4206-b636-6944addb1c1f | Metadata agent | gcp-openstack | None              | True  | UP    | neutron-metadata-agent |
| 4d1ca29e-ece8-4380-bde7-17d167d6a103 | L3 agent       | gcp-openstack | nova              | True  | UP    | neutron-l3-agent       |
| da6107b9-c6ac-41cd-8063-28ce0d702c24 | HyperV agent   | T-Cloud-1     | None              | True  | UP    | neutron-hyperv-agent   |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+

Here are my VM switches:

PS C:\Windows\system32> get-vmswitch

Name          SwitchType NetAdapterInterfaceDescription
----          ---------- ------------------------------
Back-end zone External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #40
Proxy zone    External   Broadcom NetXtreme Gigabit Ethernet
Web zone      External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #39
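
As I understand it, the Hyper-V agent is supposed to set an access VLAN of 150 on each instance's vNIC; once an instance does spawn, that can be checked on the host with, e.g.:

PS C:\Windows\system32> Get-VMNetworkAdapterVlan -VMName <instance-name>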

I receive a PortBindingFailed error in Horizon when launching an instance; the corresponding entries from the nova-compute log on the compute node are:

2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] Instance failed network setup after 1 attempt(s)
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager Traceback (most recent call last):
[...]
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] [instance: 109a7120-b447-4313-8050-20a7099118d4] Instance failed to spawn
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] Traceback (most recent call last):
[...]
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.

I don't see any new log entries in neutron-hyperv-agent.log on the compute node.

I see the following errors in /var/log/neutron/server.log on the controller:

2017-05-16 16:53:29.701 7634 WARNING neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] Unable to schedule network c8321bc0-f38c-4943-988b-f298313ae1be: no agents available; will retry on subsequent port and subnet creation events.
2017-05-16 16:53:29.890 7634 INFO neutron.wsgi [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] 10.9.47.71 - - [16/May/2017 16:53:29] "POST /v2.0/ports.json HTTP/1.1" 201 1087 1.430238
2017-05-16 16:53:30.174 7634 INFO neutron.wsgi [req-b6a0a893-439c-4bf1-a548-7f924ed19755 - - - - -] 10.9.47.71 - - [16/May/2017 16:53:30] "GET /v2.0/extensions.json HTTP/1.1" 200 7503 0.278365
2017-05-16 16:53:31.892 7634 ERROR neutron.plugins.ml2.managers [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Failed to bind port 5c6b4f72-635e-44ed-8681-01fde769998b on host T-Cloud-1 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'f2fa8cbe-08ed-4491-9be3-d200f4f4000b', 'network_type': u'vlan'}]
[... attempts repeat same message ...]
2017-05-16 16:53:32.054 7634 INFO neutron.plugins.ml2.plugin [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Attempt 10 to bind port 5c6b4f72-635e-44ed-8681-01fde769998b
2017-05-16 16:53:32.074 7634 ERROR neutron.plugins.ml2.managers [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Failed to bind port 5c6b4f72-635e-44ed-8681-01fde769998b on host T-Cloud-1 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'f2fa8cbe-08ed-4491-9be3-d200f4f4000b', 'network_type': u'vlan'}]
[...]
2017-05-18 22:53:59.037 7640 INFO neutron.plugins.ml2.plugin [req-eeeef18c-ac1d-4a21-99b9-dfcff8a9ce30 - - - - -] No ports have port_id starting with Network Adapter

This is my neutron_hyperv_agent.conf on the compute node:

[DEFAULT]
verbose=false
control_exchange=neutron
rpc_backend=rabbit
log_dir=C:\OpenStack\Log\
log_file=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:Back-end zone
enable_metrics_collection=false
enable_qos_extension=false
worker_count=12
[SECURITYGROUP]
firewall_driver=hyperv
enable_security_group=true
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=[redacted]
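
As I understand the format, each mapping is physical_network:vswitch_name, so an explicit mapping for the 'provider' physical network (the physical_network name in the failing segment above) would look like the line below, though so far I have been relying on the wildcard instead:

physical_network_vswitch_mappings=provider:Back-end zone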

This is my nova.conf on the compute node:

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
compute_driver=compute_hyperv.driver.HyperVDriver
instances_path=C:\ClusterStorage\Volume1\VMs
use_cow_images=true
flat_injected=true
mkisofs_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\mkisofs.exe
verbose=false
allow_resize_to_same_host=true
running_deleted_instance_poll_interval=120
resize_confirm_window=5
resume_guests_state_on_host_boot=true
rpc_response_timeout=1800
lock_path=C:\OpenStack\Log\
vif_plugging_is_fatal=false
vif_plugging_timeout=60
rpc_backend=rabbit
log_dir=C:\OpenStack\Log\
log_file=nova-compute.log
force_config_drive=True
[placement]
auth_strategy=keystone
auth_plugin=v3password
auth_type=password
auth_url=http://controller:35357/v3
project_name=service
username=placement
password=[redacted]
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
[notifications]
[glance]
api_servers=http://controller:9292
[hyperv]
vswitch_name=Back-end zone
limit_cpu_features=true
config_drive_inject_password=true
qemu_img_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom=true
dynamic_memory_ratio=1
enable_instance_metrics_collection=false
[rdp]
enabled=true
html5_proxy_base_url=http://controller:8000/
[neutron]
url=http://controller:9696
auth_strategy=keystone
project_name=service
username=neutron
password=[redacted]
auth_url=http://controller:35357/v3
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
auth_plugin=v3password
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=[redacted]

Here is the show output for the network I am using for the instances that fail to launch:

# openstack network show c8321bc0-f38c-4943-988b-f298313ae1be
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-05-16T20:47:01Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | c8321bc0-f38c-4943-988b-f298313ae1be |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| mtu                       | 1500                                 |
| name                      | mgmt-net2                            |
| port_security_enabled     | True                                 |
| project_id                | f0d140af365f4c65b8144f9d5a784bd8     |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 150                                  |
| qos_policy_id             | None                                 |
| revision_number           | 9                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 7f81cf8b-32f1-44e1-bafa-19d6906e1e13 |
| updated_at                | 2017-05-16T20:49:27Z                 |
+---------------------------+--------------------------------------+

Here is the show output for the subnet being used:

# openstack subnet show 7f81cf8b-32f1-44e1-bafa-19d6906e1e13
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 10.9.48.240-10.9.48.250              |
| cidr              | 10.9.48.0/24                         |
| created_at        | 2017-05-16T20:49:27Z                 |
| description       |                                      |
| dns_nameservers   | 8.8.4.4                              |
| enable_dhcp       | True                                 |
| gateway_ip        | 10.9.48.1                            |
| host_routes       |                                      |
| id                | 7f81cf8b-32f1-44e1-bafa-19d6906e1e13 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | mgmt-net_tso_test_range2             |
| network_id        | c8321bc0-f38c-4943-988b-f298313ae1be |
| project_id        | f0d140af365f4c65b8144f9d5a784bd8     |
| revision_number   | 2                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| updated_at        | 2017-05-16T20:49:27Z                 |
+-------------------+--------------------------------------+

Edit:

I am also seeing the following in the neutron server log on the controller:

2017-05-18 22:39:33.116 7644 WARNING neutron.db.agents_db [req-9e8d8493-fd07-4a44-85d6-4aa75e8710ea - - - - -] Agent healthcheck: found 1 dead agents out of 4:
                Type       Last heartbeat host
        HyperV agent  2017-05-19 02:38:10 T-Cloud-1

I don't understand why this says the agent is dead: the latest heartbeat is recent (within a few minutes), and an agent list shows the agent as alive:

# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+
| id                                   | agent_type     | host          | availability_zone | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+
| 1f512938-132e-47d3-b1a5-eb0be5aed61c | DHCP agent     | gcp-openstack | nova              | :-)   | True           | neutron-dhcp-agent     |
| 46d88273-4245-4206-b636-6944addb1c1f | Metadata agent | gcp-openstack |                   | :-)   | True           | neutron-metadata-agent |
| 4d1ca29e-ece8-4380-bde7-17d167d6a103 | L3 agent       | gcp-openstack | nova              | :-)   | True           | neutron-l3-agent       |
| c8a70f38-9b70-4eed-b3c8-eb0fc2481a87 | HyperV agent   | T-Cloud-1     |                   | :-)   | True           | neutron-hyperv-agent   |
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+

# neutron agent-show c8a70f38-9b70-4eed-b3c8-eb0fc2481a87
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| agent_type          | HyperV agent                         |
| alive               | True                                 |
| availability_zone   |                                      |
| binary              | neutron-hyperv-agent                 |
| configurations      | {                                    |
|                     |      "vswitch_mappings": {           |
|                     |           ".*$": "Back"              |
|                     |      }                               |
|                     | }                                    |
| created_at          | 2017-05-19 02:53:36                  |
| description         |                                      |
| heartbeat_timestamp | 2017-05-19 03:20:58                  |
| host                | T-Cloud-1                            |
| id                  | c8a70f38-9b70-4eed-b3c8-eb0fc2481a87 |
| started_at          | 2017-05-19 02:53:58                  |
| topic               | N/A                                  |
+---------------------+--------------------------------------+
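
As far as I understand it, the server marks an agent dead when its heartbeat_timestamp is older than agent_down_time, while the agent sends state reports every report_interval seconds. The relevant settings (shown here with what I believe are the defaults) are:

# neutron.conf on the controller
[DEFAULT]
agent_down_time = 75

# agent config on the compute node
[AGENT]
report_interval = 30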

HyperV-Agent Error: PortBindingFailed

I am attempting to configure OpenStack to launch instances on a specific VLAN on one of the interfaces on my compute nodes. The VMswitch name is Back-end zone. I want to launch VMs on VLAN 150 on this VMswitch.

I have installed the latest Cloudbase Nova driver for Ocata and am using the default Hyper-V neutron agent.

Do I need to do anything with the segment_id directive for the subnet?

I have tried setting the router:external directive for the network to 'internal' but this doesn't seem to help.

All relevant information I could think of has been included below. Please let me know if there is any additional info you would like to see. Any help would be greatly appreciated.

Here is my network agent list on the controller:

# openstack network agent list
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| ID                                   | Agent Type     | Host          | Availability Zone | Alive | State | Binary                 |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| 1f512938-132e-47d3-b1a5-eb0be5aed61c | DHCP agent     | gcp-openstack | nova              | True  | UP    | neutron-dhcp-agent     |
| 46d88273-4245-4206-b636-6944addb1c1f | Metadata agent | gcp-openstack | None              | True  | UP    | neutron-metadata-agent |
| 4d1ca29e-ece8-4380-bde7-17d167d6a103 | L3 agent       | gcp-openstack | nova              | True  | UP    | neutron-l3-agent       |
| da6107b9-c6ac-41cd-8063-28ce0d702c24 | HyperV agent   | T-Cloud-1     | None              | True  | UP    | neutron-hyperv-agent   |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+

Here are my VMswitches:

PS C:\Windows\system32> get-vmswitch

Name          SwitchType NetAdapterInterfaceDescription
----          ---------- ------------------------------
Back-end zone External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #40
Proxy zone    External   Broadcom NetXtreme Gigabit Ethernet
Web zone      External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #39

I receive the following error message in Horizon when launching an instance:

2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] Instance failed network setup after 1 attempt(s)
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager Traceback (most recent call last):
[...]
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] [instance: 109a7120-b447-4313-8050-20a7099118d4] Instance failed to spawn
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] Traceback (most recent call last):
[...]
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.

I don't see any new logs in the neutron-hyperv-agent.log file on the compute node.

I see the following errors in /var/log/neutron/server.log on the controller:

2017-05-16 16:53:29.701 7634 WARNING neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] Unable to schedule network c8321bc0-f38c-4943-988b-f298313ae1be: no agents available; will retry on subsequent port and subnet creation events.
2017-05-16 16:53:29.890 7634 INFO neutron.wsgi [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] 10.9.47.71 - - [16/May/2017 16:53:29] "POST /v2.0/ports.json HTTP/1.1" 201 1087 1.430238
2017-05-16 16:53:30.174 7634 INFO neutron.wsgi [req-b6a0a893-439c-4bf1-a548-7f924ed19755 - - - - -] 10.9.47.71 - - [16/May/2017 16:53:30] "GET /v2.0/extensions.json HTTP/1.1" 200 7503 0.278365
2017-05-16 16:53:31.892 7634 ERROR neutron.plugins.ml2.managers [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Failed to bind port 5c6b4f72-635e-44ed-8681-01fde769998b on host T-Cloud-1 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'f2fa8cbe-08ed-4491-9be3-d200f4f4000b', 'network_type': u'vlan'}]
[... attempts repeat same message ...]
2017-05-16 16:53:32.054 7634 INFO neutron.plugins.ml2.plugin [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Attempt 10 to bind port 5c6b4f72-635e-44ed-8681-01fde769998b
2017-05-16 16:53:32.074 7634 ERROR neutron.plugins.ml2.managers [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Failed to bind port 5c6b4f72-635e-44ed-8681-01fde769998b on host T-Cloud-1 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'f2fa8cbe-08ed-4491-9be3-d200f4f4000b', 'network_type': u'vlan'}]
[...]
2017-05-18 22:53:59.037 7640 INFO neutron.plugins.ml2.plugin [req-eeeef18c-ac1d-4a21-99b9-dfcff8a9ce30 - - - - -] No ports have port_id starting with Network Adapter

This is my neutron_hyperv_agent.conf on the compute node:

[DEFAULT]
verbose=false
control_exchange=neutron
rpc_backend=rabbit
log_dir=C:\OpenStack\Log\
log_file=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:Back-end zone
enable_metrics_collection=false
enable_qos_extension=false
worker_count=12
[SECURITYGROUP]
firewall_driver=hyperv
enable_security_group=true
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=[redacted]

This is my nova.conf on the compute node:

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
compute_driver=compute_hyperv.driver.HyperVDriver
instances_path=C:\ClusterStorage\Volume1\VMs
use_cow_images=true
flat_injected=true
mkisofs_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\mkisofs.exe
verbose=false
allow_resize_to_same_host=true
running_deleted_instance_poll_interval=120
resize_confirm_window=5
resume_guests_state_on_host_boot=true
rpc_response_timeout=1800
lock_path=C:\OpenStack\Log\
vif_plugging_is_fatal=false
vif_plugging_timeout=60
rpc_backend=rabbit
log_dir=C:\OpenStack\Log\
log_file=nova-compute.log
force_config_drive=True
[placement]
auth_strategy=keystone
auth_plugin=v3password
auth_type=password
auth_url=http://controller:35357/v3
project_name=service
username=placement
password=[redacted]
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
[notifications]
[glance]
api_servers=http://controller:9292
[hyperv]
vswitch_name=Back-end zone
limit_cpu_features=true
config_drive_inject_password=true
qemu_img_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom=true
dynamic_memory_ratio=1
enable_instance_metrics_collection=false
[rdp]
enabled=true
html5_proxy_base_url=http://controller:8000/
[neutron]
url=http://controller:9696
auth_strategy=keystone
project_name=service
username=neutron
password=[redacted]
auth_url=http://controller:35357/v3
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
auth_plugin=v3password
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=[redacted]

Here is the show output for the network I am using for the instances that fail to launch:

# openstack network show c8321bc0-f38c-4943-988b-f298313ae1be
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-05-16T20:47:01Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | c8321bc0-f38c-4943-988b-f298313ae1be |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| mtu                       | 1500                                 |
| name                      | mgmt-net2                            |
| port_security_enabled     | True                                 |
| project_id                | f0d140af365f4c65b8144f9d5a784bd8     |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 150                                  |
| qos_policy_id             | None                                 |
| revision_number           | 9                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 7f81cf8b-32f1-44e1-bafa-19d6906e1e13 |
| updated_at                | 2017-05-16T20:49:27Z                 |
+---------------------------+--------------------------------------+
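
For reference, a command along these lines is roughly how this network would be created from the provider:* fields above (an illustrative reconstruction, not the exact command I originally ran):

# illustrative reconstruction of mgmt-net2 from the fields shown above
openstack network create mgmt-net2 \
  --provider-network-type vlan \
  --provider-physical-network provider \
  --provider-segment 150 \
  --share --external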

Here is the show output for the subnet being used:

# openstack subnet show 7f81cf8b-32f1-44e1-bafa-19d6906e1e13
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 10.9.48.240-10.9.48.250              |
| cidr              | 10.9.48.0/24                         |
| created_at        | 2017-05-16T20:49:27Z                 |
| description       |                                      |
| dns_nameservers   | 8.8.4.4                              |
| enable_dhcp       | True                                 |
| gateway_ip        | 10.9.48.1                            |
| host_routes       |                                      |
| id                | 7f81cf8b-32f1-44e1-bafa-19d6906e1e13 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | mgmt-net_tso_test_range2             |
| network_id        | c8321bc0-f38c-4943-988b-f298313ae1be |
| project_id        | f0d140af365f4c65b8144f9d5a784bd8     |
| revision_number   | 2                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| updated_at        | 2017-05-16T20:49:27Z                 |
+-------------------+--------------------------------------+
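
On my own segment_id question: as far as I can tell, a subnet's segment_id only comes into play for routed (multi-segment) networks, so None should be normal for a single-segment VLAN network like this one. An illustrative reconstruction of the subnet above (the --network-segment option would only be needed in the routed case, so it is omitted):

# illustrative reconstruction of the subnet shown above
openstack subnet create mgmt-net_tso_test_range2 \
  --network mgmt-net2 \
  --subnet-range 10.9.48.0/24 \
  --gateway 10.9.48.1 \
  --allocation-pool start=10.9.48.240,end=10.9.48.250 \
  --dns-nameserver 8.8.4.4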

Edit:

I am also seeing the following in the neutron log on the controller:

2017-05-18 22:39:33.116 7644 WARNING neutron.db.agents_db [req-9e8d8493-fd07-4a44-85d6-4aa75e8710ea - - - - -] Agent healthcheck: found 1 dead agents out of 4:
                Type       Last heartbeat host
        HyperV agent  2017-05-19 02:38:10 T-Cloud-1

I'm not sure why this says the agent is dead: the most recent heartbeat is only a few minutes old, and an agent list reports the agent as alive:

# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+
| id                                   | agent_type     | host          | availability_zone | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+
| 1f512938-132e-47d3-b1a5-eb0be5aed61c | DHCP agent     | gcp-openstack | nova              | :-)   | True           | neutron-dhcp-agent     |
| 46d88273-4245-4206-b636-6944addb1c1f | Metadata agent | gcp-openstack |                   | :-)   | True           | neutron-metadata-agent |
| 4d1ca29e-ece8-4380-bde7-17d167d6a103 | L3 agent       | gcp-openstack | nova              | :-)   | True           | neutron-l3-agent       |
| c8a70f38-9b70-4eed-b3c8-eb0fc2481a87 | HyperV agent   | T-Cloud-1     |                   | :-)   | True           | neutron-hyperv-agent   |
+--------------------------------------+----------------+---------------+-------------------+-------+----------------+------------------------+

# neutron agent-show c8a70f38-9b70-4eed-b3c8-eb0fc2481a87
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| agent_type          | HyperV agent                         |
| alive               | True                                 |
| availability_zone   |                                      |
| binary              | neutron-hyperv-agent                 |
| configurations      | {                                    |
|                     |      "vswitch_mappings": {           |
|                     |           ".*$": "Back"              |
|                     |      }                               |
|                     | }                                    |
| created_at          | 2017-05-19 02:53:36                  |
| description         |                                      |
| heartbeat_timestamp | 2017-05-19 03:20:58                  |
| host                | T-Cloud-1                            |
| id                  | c8a70f38-9b70-4eed-b3c8-eb0fc2481a87 |
| started_at          | 2017-05-19 02:53:58                  |
| topic               | N/A                                  |
+---------------------+--------------------------------------+
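
For context, my understanding of the healthcheck is that the server marks an agent dead when its last heartbeat is older than agent_down_time, which should be several multiples of the agent-side report_interval. The stock values below are assumptions, not settings I have verified on this deployment:

# assumed defaults: agent_down_time is read by the neutron server,
# report_interval by each agent
[DEFAULT]
agent_down_time=75
[agent]
report_interval=30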

UPDATE: On reflection, the 'dead agent' log entries probably just occurred while I was restarting the agent to test other agent configurations.

UPDATE: I am seeing the following warning in the neutron-hyperv-agent log on the target node when starting the OpenStack Neutron Hyper-V Agent service on the Hyper-V node. I am wondering whether this might have something to do with the issue.

2017-08-28 15:30:56.821 6288 WARNING neutron.agent.securitygroups_rpc [req-26cf7bfa-dfec-4367-b7d0-34a38d5ebf62 - - - - -] Driver configuration doesn't match with enable_security_group

UPDATE: I have set enable_security_group to false in neutron_hyperv_agent.conf and restarted the service, but I am still seeing the warning mentioned directly above.
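
For reference, this is the combination I believe that startup check expects when security groups are turned off (a hedged sketch; the noop driver name is my assumption, not something I have confirmed for the Hyper-V agent):

[SECURITYGROUP]
# hedged: with security groups disabled, the firewall driver is expected to be
# the no-op one; mismatched combinations appear to trigger the warning above
enable_security_group=false
firewall_driver=noop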

UPDATE: I enabled debug logging and have been going through the debug-level neutron server.log (/var/log/neutron/server.log) on the controller. I also trimmed /etc/neutron/plugin.ini down to just mechanism_drivers = hyperv so that I don't have to wade through DEBUG output from the neutron service trying every mechanism driver. The resulting logs are below. I have a feeling there is a clue in the entries after the 10th and final bind attempt, but I'm having a rough time identifying the exact issue.
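
The plugin.ini change, for completeness:

# /etc/neutron/plugin.ini on the controller, trimmed as described above
[ml2]
mechanism_drivers = hyperv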

2017-08-28 16:57:26.219 8852 INFO neutron.plugins.ml2.plugin [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempt 9 to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8
2017-08-28 16:57:26.234 8852 DEBUG neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempting to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 for vnic_type normal with profile  bind_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:726
2017-08-28 16:57:26.234 8852 DEBUG neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempting to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 at level 0 using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'02f2d0c1-0bce-4d9d-910d-bc2483fe285e', 'network_type': u'vlan'}] _bind_port_level /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:747
2017-08-28 16:57:26.234 8852 ERROR neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Failed to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'02f2d0c1-0bce-4d9d-910d-bc2483fe285e', 'network_type': u'vlan'}]
2017-08-28 16:57:26.235 8852 INFO neutron.plugins.ml2.plugin [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempt 10 to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8
2017-08-28 16:57:26.248 8852 DEBUG neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempting to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 for vnic_type normal with profile  bind_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:726
2017-08-28 16:57:26.249 8852 DEBUG neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempting to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 at level 0 using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'02f2d0c1-0bce-4d9d-910d-bc2483fe285e', 'network_type': u'vlan'}] _bind_port_level /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:747
2017-08-28 16:57:26.249 8852 ERROR neutron.plugins.ml2.managers [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Failed to bind port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on host T-Cloud-2 for vnic_type normal using segments [{'segmentation_id': 150, 'physical_network': u'provider', 'id': u'02f2d0c1-0bce-4d9d-910d-bc2483fe285e', 'network_type': u'vlan'}]
2017-08-28 16:57:26.269 8852 DEBUG neutron.plugins.ml2.db [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] For port 846babb6-9e9a-49cf-92b3-c28599a70dc8, host T-Cloud-2, cleared binding levels clear_binding_levels /usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py:109
2017-08-28 16:57:26.270 8852 DEBUG neutron.plugins.ml2.db [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Attempted to set empty binding levels set_binding_levels /usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py:84
2017-08-28 16:57:26.275 8852 DEBUG neutron.callbacks.manager [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Notify callbacks [('neutron.db.l3_hascheduler_db._notify_l3_agent_ha_port_update-5627924', <function _notify_l3_agent_ha_port_update at 0x55e0140>), ('neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event--9223372036852910339', <bound method _ObjectChangeHandler.handle_event of <neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler object at 0x5546790>>), ('neutron.db.l3_dvrscheduler_db._notify_l3_agent_port_update-5583801', <function _notify_l3_agent_port_update at 0x5533b90>), ('neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._native_event_send_dhcp_notification--9223372036847566023', <bound method DhcpAgentNotifyAPI._native_event_send_dhcp_notification of <neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI object at 0x5546710>>)] for port, after_update _notify_loop /usr/lib/python2.7/site-packages/neutron/callbacks/manager.py:142
2017-08-28 16:57:26.364 8852 DEBUG neutron.plugins.ml2.ovo_rpc [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Dispatching RPC callback event updated for port 846babb6-9e9a-49cf-92b3-c28599a70dc8. handle_event /usr/lib/python2.7/site-packages/neutron/plugins/ml2/ovo_rpc.py:85
2017-08-28 16:57:26.365 8852 DEBUG neutron.api.rpc.handlers.resources_rpc [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] neutron.api.rpc.handlers.resources_rpc.ResourcesPushRpcApi method push called with arguments (<neutron.context.Context object at 0x6766650>, [Port(admin_state_up=True,allowed_address_pairs=[],binding=PortBinding,binding_levels=[],created_at=2017-08-28T20:57:23Z,description='',device_id='bddcd667-b823-40c6-854a-f1b6af5bc74b',device_owner='compute:nova',dhcp_options=[],distributed_binding=None,dns=None,fixed_ips=[IPAllocation],id=846babb6-9e9a-49cf-92b3-c28599a70dc8,mac_address=fa:16:3e:ed:0e:58,name='',network_id=ce4fda94-7924-4dda-88b7-ed76a890fde3,project_id='f0d140af365f4c65b8144f9d5a784bd8',qos_policy_id=None,revision_number=6,security=PortSecurity(846babb6-9e9a-49cf-92b3-c28599a70dc8),security_group_ids=set([]),status='DOWN',updated_at=2017-08-28T20:57:25Z)], 'updated') {} wrapper /usr/lib/python2.7/site-packages/oslo_log/helpers.py:47
2017-08-28 16:57:26.369 8852 DEBUG oslo_messaging._drivers.amqpdriver [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] CAST unique_id: 62c4f48fdb0c4f3b8e4483efd489b11b FANOUT topic 'neutron-vo-Port-1.0' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:431
2017-08-28 16:57:26.626 8852 DEBUG oslo_messaging._drivers.amqpdriver [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] CAST unique_id: 907383eb7f184a818c831ccbc05e013c exchange 'neutron' topic 'dhcp_agent.gcp-openstack' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:442
2017-08-28 16:57:26.630 8852 DEBUG neutron.plugins.ml2.plugin [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] In _notify_port_updated(), no bound segment for port 846babb6-9e9a-49cf-92b3-c28599a70dc8 on network ce4fda94-7924-4dda-88b7-ed76a890fde3 _notify_port_updated /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:637
2017-08-28 16:57:26.633 8852 DEBUG neutron.callbacks.manager [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] Notify callbacks [('neutron.notifiers.nova.Notifier._send_nova_notification-649', <bound method Notifier._send_nova_notification of <neutron.notifiers.nova.Notifier object at 0x4bff410>>), ('neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._send_dhcp_notification-7209761', <bound method DhcpAgentNotifyAPI._send_dhcp_notification of <neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI object at 0x5546710>>)] for port, before_response _notify_loop /usr/lib/python2.7/site-packages/neutron/callbacks/manager.py:142
2017-08-28 16:57:26.635 8852 INFO neutron.wsgi [req-da45a04d-ee9f-4ff1-9b65-2ef54b24982c - - - - -] 10.9.47.72 - - [28/Aug/2017 16:57:26] "PUT /v2.0/ports/846babb6-9e9a-49cf-92b3-c28599a70dc8.json HTTP/1.1" 200 1069 1.563450
2017-08-28 16:57:26.996 8852 DEBUG neutron.plugins.ml2.plugin [req-a62b4454-66a4-4a0c-8d6b-ec3c1685189d - - - - -] Deleting port 846babb6-9e9a-49cf-92b3-c28599a70dc8 _pre_delete_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:1576
2017-08-28 16:57:26.996 8852 DEBUG neutron.callbacks.manager [req-a62b4454-66a4-4a0c-8d6b-ec3c1685189d - - - - -] Notify callbacks [('neutron.db.l3_db._prevent_l3_port_delete_callback--9223372036849913468', <function _prevent_l3_port_delete_callback at 0x4a31848>)] for port, before_delete _notify_loop /usr/lib/python2.7/site-packages/neutron/callbacks/manager.py:142
2017-08-28 16:57:27.174 8852 DEBUG neutron.plugins.ml2.db [req-a62b4454-66a4-4a0c-8d6b-ec3c1685189d - - - - -] For port 846babb6-9e9a-49cf-92b3-c28599a70dc8, host T-Cloud-2, got binding levels [] get_binding_levels /usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py:97
2017-08-28 16:57:27.199 8852 DEBUG neutron.plugins.ml2.plugin [req-a62b4454-66a4-4a0c-8d6b-ec3c1685189d - - - - -] Calling delete_port for 846babb6-9e9a-49cf-92b3-c28599a70dc8 owned by compute:nova delete_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:1634
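If I am reading this log correctly, the ml2 plugin retries the binding up to "Attempt 10", no mechanism driver ever claims the port on segment 150 / physical network 'provider', and the port is then deleted, which is what surfaces in nova-compute as PortBindingFailed.

For reference, here is my understanding of the two pieces of configuration that have to line up for this binding to succeed. This is only a sketch, not something I have confirmed fixes the issue: the hyperv mechanism driver comes from the networking-hyperv package, and the 'provider' physical network name and 'Back-end zone' vswitch name are taken from the logs and the get-vmswitch output above, so adjust as needed.

On the controller, in /etc/neutron/plugins/ml2/ml2_conf.ini:

    [ml2]
    type_drivers = flat,vlan
    tenant_network_types = vlan
    # Keep any drivers already listed here; the important part is that
    # 'hyperv' (provided by the networking-hyperv package) is included,
    # otherwise no driver can bind ports for hosts running
    # neutron-hyperv-agent.
    mechanism_drivers = hyperv

    [ml2_type_vlan]
    # The physical network name must match the one in the segment shown
    # in the binding attempts above; 150:150 allows only VLAN 150.
    network_vlan_ranges = provider:150:150

On the Hyper-V compute node, in the neutron_hyperv_agent.conf used by the agent service:

    [AGENT]
    # Map the neutron physical network to the Hyper-V vswitch.
    # 'Back-end zone' is the switch name reported by get-vmswitch;
    # I am assuming the space in the name is accepted here.
    physical_network_vswitch_mappings = provider:Back-end zone

As far as I can tell, if either the mechanism driver or the vswitch mapping is missing, the agent has no way to reach 'provider' and the binding fails exactly as logged above. Corrections welcome if I have misunderstood how these fit together.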