HyperV-Agent Error: PortBindingFailed

asked 2017-05-17 00:42:20 +0200 by awestin1

updated 2017-08-29 00:10:12 +0200

I am attempting to configure OpenStack to launch instances on a specific VLAN on one of the interfaces of my compute nodes. The Hyper-V vSwitch is named "Back-end zone", and I want to launch VMs on VLAN 150 on that switch.

I have installed the latest Cloudbase Nova driver for Ocata and am using the default Hyper-V neutron agent.

Do I need to do anything with the segment_id directive for the subnet?

I have tried setting the router:external directive for the network to 'internal' but this doesn't seem to help.
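For reference, my understanding is that the VLAN ID is set on the network itself (provider:segmentation_id) rather than on the subnet, roughly along these lines. The physical network name physnet_backend and the subnet range here are placeholders, not my exact values:

# openstack network create backend-vlan150 \
    --provider-network-type vlan \
    --provider-physical-network physnet_backend \
    --provider-segment 150
# openstack subnet create backend-vlan150-subnet \
    --network backend-vlan150 \
    --subnet-range 10.0.150.0/24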

All relevant information I could think of has been included below. Please let me know if there is any additional info you would like to see. Any help would be greatly appreciated.

Here is my network agent list on the controller:

# openstack network agent list
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| ID                                   | Agent Type     | Host          | Availability Zone | Alive | State | Binary                 |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+
| 1f512938-132e-47d3-b1a5-eb0be5aed61c | DHCP agent     | gcp-openstack | nova              | True  | UP    | neutron-dhcp-agent     |
| 46d88273-4245-4206-b636-6944addb1c1f | Metadata agent | gcp-openstack | None              | True  | UP    | neutron-metadata-agent |
| 4d1ca29e-ece8-4380-bde7-17d167d6a103 | L3 agent       | gcp-openstack | nova              | True  | UP    | neutron-l3-agent       |
| da6107b9-c6ac-41cd-8063-28ce0d702c24 | HyperV agent   | T-Cloud-1     | None              | True  | UP    | neutron-hyperv-agent   |
+--------------------------------------+----------------+---------------+-------------------+-------+-------+------------------------+

Here are my VMswitches:

PS C:\Windows\system32> get-vmswitch

Name          SwitchType NetAdapterInterfaceDescription
----          ---------- ------------------------------
Back-end zone External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #40
Proxy zone    External   Broadcom NetXtreme Gigabit Ethernet
Web zone      External   Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #39
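For completeness, my understanding from the Cloudbase documentation is that the Hyper-V agent maps Neutron physical network names to vSwitches via physical_network_vswitch_mappings in its config on the compute node, something along these lines. The name physnet_backend is only illustrative and would have to match whatever physical network name the controller's ML2 VLAN ranges use:

[AGENT]
physical_network_vswitch_mappings = physnet_backend:Back-end zone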

I receive the following error message in Horizon when launching an instance:

2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] Instance failed network setup after 1 attempt(s)
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager Traceback (most recent call last):
[...]
2017-05-16 16:53:33.328 2112 ERROR nova.compute.manager PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [req-df281722-7a6a-45d8-a5e1-96cf329a669d 0797a3b035234af49fa8fc8a80680060 f0d140af365f4c65b8144f9d5a784bd8 - - -] [instance: 109a7120-b447-4313-8050-20a7099118d4] Instance failed to spawn
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] Traceback (most recent call last):
[...]
2017-05-16 16:53:33.375 2112 ERROR nova.compute.manager [instance: 109a7120-b447-4313-8050-20a7099118d4] PortBindingFailed: Binding failed for port 5c6b4f72-635e-44ed-8681-01fde769998b, please check neutron logs for more information.

I don't see any new logs in the neutron-hyperv-agent.log file on the compute node.

I see the following errors in /var/log/neutron/server.log on the controller:

2017-05-16 16:53:29.701 7634 WARNING neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] Unable to schedule network c8321bc0-f38c-4943-988b-f298313ae1be: no agents available; will retry on subsequent port and subnet creation events.
2017-05-16 16:53:29.890 7634 INFO neutron.wsgi [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] 10.9.47.71 - - [16/May/2017 16:53:29] "POST /v2.0/ports.json HTTP/1.1" 201 1087 1.430238
2017-05-16 16:53:30.174 7634 INFO neutron.wsgi [req-b6a0a893-439c-4bf1-a548-7f924ed19755 - - - - -] 10.9.47.71 - - [16/May/2017 16:53:30] "GET /v2.0/extensions.json HTTP/1.1" 200 7503 0.278365
2017-05-16 16:53:31.892 7634 ERROR neutron.plugins.ml2.managers [req-69aaa9cb-d10b-49fd-8812-5a973b188d11 - - - - -] Failed to bind port 5c6b4f72-635e-44ed-8681-01fde769998b on host T-Cloud-1 for vnic_type normal using segments ...
[...]
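In case it is relevant: as far as I understand, for a VLAN segment to bind, the controller's ML2 config also needs a VLAN range covering that segment for the matching physical network. The names and range below are illustrative, not my exact values:

[ml2_type_vlan]
network_vlan_ranges = physnet_backend:150:150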

Comments

I also found the following log in the neutron log on the controller: 2017-05-18 22:53:59.037 7640 INFO neutron.plugins.ml2.plugin [req-eeeef18c-ac1d-4a21-99b9-dfcff8a9ce30 - - - - -] No ports have port_id starting with Network Adapter

awestin1 (2017-05-19 06:14:37 +0200)

I added additional detail to the bottom. I am seeing an error in the logs stating that the agent is dead, and I'm not sure why. When I list the neutron agents, the agent is reported as alive, and the latest heartbeat is always within the last few minutes.

awestin1 (2017-05-19 06:24:37 +0200)

I have deleted and re-added the Hyper-V neutron agent with no change. I don't see any actual issue with the agent, so I'm not sure why it's being reported as dead...

awestin1 (2017-05-19 06:25:45 +0200)

UPDATE: I think the dead-agent log entries were just from restarting the agent here and there while I was testing other configurations.

awestin1 (2017-05-19 06:38:24 +0200)

2 answers


answered 2017-08-29 02:24:28 +0200 by awestin1

updated 2017-08-29 02:25:36 +0200

As Claudiu mentioned in the comments: "Neutron Ocata doesn't fail to start if a configured mechanism_driver is not installed ... You can check by running: pip freeze | grep networking-hyperv For Ocata, it should be version 4.0.0." This package is required if you are going to be using the neutron-hyperv-agent on your hypervisor nodes.

Per Claudiu's advice, I checked whether the networking-hyperv Python package was installed on the neutron controller/server. It was not. I installed the package with "pip install networking-hyperv==4.0.0", rebooted the controller, and the issue was resolved.

Note: you will want to pin the version number (4.0.0) as in my example above. Otherwise pip installs what it considers the latest version (2015.1.0 at the time of writing), which did not work; I had to uninstall that version and install 4.0.0 instead.
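To summarize the steps on the neutron controller/server (the uninstall is only needed if the wrong version has already been pulled in; restart neutron-server or reboot afterwards, as I did):

# pip freeze | grep networking-hyperv
# pip uninstall networking-hyperv
# pip install networking-hyperv==4.0.0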


answered 2017-08-18 18:34:41 +0200 by lpetrut

2017-05-16 16:53:29.701 7634 WARNING neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [req-0825988b-430f-41e6-83f3-422bb61927ff - - - - -] Unable to schedule network c8321bc0-f38c-4943-988b-f298313ae1be: no agents available; will retry on subsequent port and subnet creation events.

Judging by this, can you please double-check that your neutron DHCP service is running? While you're at it, please also make sure that the hyperv ML2 mechanism driver is enabled and installed on your neutron server.
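A rough sketch of what I mean (config file locations can vary by distribution):

# openstack network agent list
# grep mechanism_drivers /etc/neutron/plugin.ini
# pip freeze | grep networking-hyperv

The DHCP agent should show up as alive, mechanism_drivers should include hyperv, and the networking-hyperv package has to be installed on the neutron server for that driver to actually load.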


Comments

Thanks for the input! I ran "neutron dhcp-agent-list-hosting-net [network_id_here]" and confirmed "admin_state_up" is True and "alive" is :-). However, I was not planning on using DHCP. I was going to pass the IP via the config drive.

awestin1 (2017-08-28 20:54:52 +0200)

I am pretty sure the instance being created will not have network access to the DHCP agent. Will this keep the instance from launching? Should I be specifying "--disable-dhcp" when creating the subnet?

awestin1 (2017-08-28 20:58:25 +0200)

I turned off DHCP for the subnet with "openstack subnet set --no-dhcp subnet_name_here", but this does not appear to have helped.

awestin1 (2017-08-28 22:18:23 +0200)

Also, yes. I have confirmed that the Hyper-V ML2 mechanism driver is enabled in my config (/etc/neutron/plugin.ini):

[ml2]
mechanism_drivers = linuxbridge,l2population,openvswitch,hyperv
type_drivers = local,flat,vlan,gre,vxlan,geneve
tenant_network_types = vxlan,vlan,flat,local

awestin1 (2017-08-28 22:24:34 +0200)

