VM created without the default virtual switch
Hyper-V integration with OpenStack Rocky is successful. We are using Linux bridge and a flat network, but the VM is created without being attached to the Hyper-V switch. We need to manually select the virtual switch in the instance settings, and it works fine after that. Has anyone faced the same issue?
The controller works fine with a KVM-based compute node. Previously we tested Windows Server 2016 Standard edition with Queens and it was successful. Currently we are using Windows Server 2016 Datacenter edition. Is the Cloudbase package compatible with Windows Server 2016 Datacenter edition?
neutron-hyperv-agent.log
2019-11-13 16:15:51.430 7912 INFO networking_hyperv.neutron.agent.layer2 [req-94608b28-d66b-4826-b62d-cdaf08ddd924 - - - - -] Adding port 6e29d405-43a0-41b7-9066-61008a299187
2019-11-13 16:15:51.430 7912 DEBUG networking_hyperv.neutron.agent.layer2 [req-94608b28-d66b-4826-b62d-cdaf08ddd924 - - - - -] Missing port_id from device details: 6e29d405-43a0-41b7-9066-61008a299187. Details: {'device': '6e29d405-43a0-41b7-9066-61008a299187', 'no_active_binding': True} _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:374
2019-11-13 16:15:51.430 7912 DEBUG networking_hyperv.neutron.agent.layer2 [req-94608b28-d66b-4826-b62d-cdaf08ddd924 - - - - -] Remove the port from added ports set, so it doesn't get reprocessed. _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:376
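The 'no_active_binding': True detail above suggests neutron did not return an active binding for this port on the Hyper-V host, so the agent skips plugging it into the vSwitch. A quick check on the controller (just a sketch; the grep filter is illustrative) is to confirm the HyperV agent is registered and alive, and to note the host name it reports:
# openstack network agent list | grep -i hyperv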
Port bindings
# openstack port show 18bef8e7-8d34-4c81-b326-7ad88d683316 | grep bind
| binding_host_id | controller |
| binding_profile | |
| binding_vif_details | port_filter='True' |
| binding_vif_type | bridge |
| binding_vnic_type | normal |
# openstack port show 6e29d405-43a0-41b7-9066-61008a299187 | grep bind
| binding_host_id | controller |
| binding_profile | |
| binding_vif_details | port_filter='False' |
| binding_vif_type | hyperv |
| binding_vnic_type | normal |
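For the port that should be plugged by the Hyper-V agent, binding_host_id has to match the host name the HyperV agent registers with, since nova-compute on the Hyper-V node requests the binding for its own host. A sketch to compare the two values (port ID taken from the output above):
# openstack port show 6e29d405-43a0-41b7-9066-61008a299187 -c binding_host_id -f value
# openstack network agent list | grep -i hyperv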
neutron_hyperv_agent.conf
=======================
[DEFAULT]
control_exchange=neutron
transport_url=rabbit://user:password@controller:5672
log_dir=D:\OpenStack\Log\
log_file=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:hyperv
enable_metrics_collection=false
enable_qos_extension=false
[SECURITYGROUP]
firewall_driver=hyperv
enable_security_group=true
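The "*:hyperv" mapping plugs any physical network into the vSwitch named "hyperv", so on the controller the provider network only needs to be a flat network with some physical_network set. A sketch to double-check that (the network name is a placeholder):
# openstack network show <flat-network-name> -c provider:network_type -c provider:physical_network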
nova.conf
==========
[DEFAULT]
compute_driver=compute_hyperv.driver.HyperVDriver
instances_path=D:\OpenStack\Instances
use_cow_images=true
force_config_drive=false
flat_injected=true
mkisofs_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\mkisofs.exe
allow_resize_to_same_host=true
running_deleted_instance_poll_interval=120
resize_confirm_window=5
resume_guests_state_on_host_boot=true
transport_url=rabbit://user:password@controller:5672
rpc_response_timeout=1800
lock_path=D:\OpenStack\Log\
vif_plugging_is_fatal=false
vif_plugging_timeout=60
block_device_allocate_retries=600
log_dir=D:\OpenStack\Log\
log_file=nova-compute.log
[placement]
auth_strategy=keystone
auth_type=password
auth_url=http://controller:5000/v3
project_name=service
username=placement_user
password=password
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
[notifications]
[glance]
api_servers=http://controller:9292
[hyperv]
vswitch_name = hyperv
limit_cpu_features=false
config_drive_inject_password=false
qemu_img_cmd=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom=true
dynamic_memory_ratio=1
enable_instance_metrics_collection=false
[os_win]
cache_temporary_wmi_objects=false
[rdp]
enabled=true
html5_proxy_base_url=http://ip:8000/
[neutron]
url=http://controller:9696
auth_strategy=keystone
project_name=service
username=neutron
password=password
auth_url=http://controller:5000/v3
project_domain_name=Default
user_domain_name=Default
os_region_name=RegionOne
auth_type=password
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = hyperv
extension_drivers = port_security
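If the [ml2] block above is the neutron server's ml2_conf.ini, it needs a mechanism driver for each compute flavour in this setup: the KVM node binds through Linux bridge (vif_type "bridge" in the first port output) and the Hyper-V node through the hyperv driver. A minimal sketch, assuming linuxbridge is the other mechanism driver and flat networks are unrestricted:
ml2_conf.ini (neutron server, sketch)
=====================================
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge,hyperv
extension_drivers = port_security
[ml2_type_flat]
flat_networks = *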
neutron-dhcp-agent.log
2019-11-14 11:17:33.463 24674 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=compute, binding:profile=, binding:vif_details=port_filter=False, binding:vif_type=hyperv, binding:vnic_type=normal, created_at=2019-11-14T05:47:29Z, description=, device_id=9d6aea44-be26-4dc6-ac49-fa477bd9848a, device_owner=compute:nova, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'd5a25656-c500-4633-be70-4c69229421ca', u'ip_address': u'instance_ip'}], id=c4cc5654-f5a6-4b65-b671-bf42803149fb, mac_address=fa:16:3e:08:c5:2e, name=, network_id=d89e0c96-0fa1-4d69-bcf3-5efcbcc50250, port_security_enabled=True, project_id=216c81f417014f44b38276cc2e5c1189, revision_number=3, security_groups=[u'e871b589-f986-41de-acd8-372744d8f534'], status=DOWN, tags=[], tenant_id=216c81f417014f44b38276cc2e5c1189, updated_at=2019-11-14T05:47:32Z
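The port in this DHCP log line is bound to host "compute" with vif_type hyperv but still shows status=DOWN; it should move to ACTIVE once the Hyper-V agent actually plugs it into the vSwitch. A sketch to watch for that (port ID taken from the log line above):
# openstack port show c4cc5654-f5a6-4b65-b671-bf42803149fb -c status -f value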
Which neutron agent are you using ("neutron-ovs-agent" or "neutron-hyperv-agent")? What network type are you using (flat, vlan, vxlan, etc)? It's most probably a configuration issue.
@Ipetrut We are using "neutron-hyperv-agent" and a flat network. It's working fine with the Linux controller nodes. Windows Hyper-V OpenStack Installer: Rocky 18.0.3
Just to make sure, your hyper-v switch is named "hyperv", right? Also, don't forget to install networking-hyperv on your neutron server and enable the hyperv ml2 plugin as described here: https://compute-hyperv.readthedocs.io/en/latest/install/next-steps.html .
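A quick way to double-check the switch name on the Hyper-V node (sketch; Get-VMSwitch is the standard Hyper-V PowerShell cmdlet) is to list the vSwitches and make sure one of them exactly matches the right-hand side of physical_network_vswitch_mappings:
PS C:\> Get-VMSwitch | Select-Object Name, SwitchType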
Yes, the vSwitch "hyperv" was created by the Cloudbase installer. networking-hyperv 7.0.0 is installed on the controller. Sorry, the mentioned URL is not working.
The correct link: https://compute-hyperv.readthedocs.io/en/latest/install/next-steps.html (the ask site added an extra character at the end of the previous link)