
vSwitch not being selected in Hyper-V

asked 2019-01-24 23:27:15 +0300

johnwc

I'm creating a new thread, as the other did not get much support, and I'm not seeing that error anymore since restarting with a fresh install using packstack. To help you help me, I am posting all the relevant configs for my install, in hopes that you can spot a misconfiguration on my part. This was a fresh install of Rocky via packstack on CentOS 7, plus HyperVNovaCompute_Beta.msi from http://cloudbase.it. I had planned on doing an initial base install with Hyper-V without OVS, and after getting that running, setting up a second Hyper-V server with OVS configured, using the same controller. I am having a hard time just getting the hyperv agent (non-OVS) to work properly and assign the "Servers" vSwitch to a new VM. I followed the same workflow that the v-magine scripts use to do an install, looking through their source to find out. (V-Magine would not install; I think there are new updates to packstack and the repo packages that it gets stuck on.)

If there are any more configs/settings that you need to see, just let me know.
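For reference, the neutron-hyperv-agent picks the vSwitch through a physical-network-to-vSwitch mapping. A minimal sketch of that section, assuming the default Cloudbase install layout (the path below is the installer default; adjust it to your install):

    # C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_hyperv_agent.conf
    [AGENT]
    # Map physical networks to Hyper-V vSwitches; the wildcard maps
    # everything to "Servers", or list them explicitly, e.g. physnet1:Servers
    physical_network_vswitch_mappings = *:Servers

The physical_network of the neutron network being attached has to match the left-hand side of a mapping, otherwise the agent cannot bind the port.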

Here is the setup:
Hyper01 - Hyper-V host

NIC1(Broadcom NetXtreme Gigabit Ethernet #2): 172.16.1.91 with vSwitch "Servers" + Allow Management (No Extensions checked/enabled)

OpenStack01 - Controller VM on Hyper01

eth0: 172.16.1.90, assigned to "Servers" vSwitch, VMQ enabled, MAC spoofing enabled # for management access
eth1 (br-data): no IP, assigned to "Servers" vSwitch, VMQ enabled, MAC spoofing enabled # for future use with Open vSwitch; will move to a new vSwitch once OVS is set up
eth2 (br-ext): no IP, assigned to "Servers" vSwitch, VMQ enabled, MAC spoofing enabled # for future use with Open vSwitch; will move to a new vSwitch once OVS is set up

Controller - openstack01 (172.16.1.90):

# ip address

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:01:5b:05 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.90/24 brd 172.16.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe01:5b05/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:15:5d:01:5b:08 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::215:5dff:fe01:5b08/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:15:5d:01:5b:09 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::215:5dff:fe01:5b09/64 ...

1 answer


answered 2019-01-29 11:28:11 +0300

lpetrut

Hi,

So there's no error in the logs, yet the ports are not connected, right? Make sure that the neutron agent is running and that it's reported as active (e.g. by doing a neutron agent-list). If you'd like, you can send us some more detailed logs and we can take a look. Just enable debug logging, spin up an instance, and send us the nova compute, neutron-hyperv-agent, and neutron server logs (15 minutes around the instance creation time should be enough).
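A minimal sketch of those checks, assuming the stock packstack paths on the controller (the Hyper-V paths are the Cloudbase installer defaults):

    # Controller: the Hyper-V agent should be listed and reported alive
    neutron agent-list
    # or, with the unified client:
    openstack network agent list

    # Enable debug logging in [DEFAULT], then restart the services:
    #   controller: /etc/neutron/neutron.conf and /etc/nova/nova.conf
    #   Hyper-V:    nova.conf and neutron_hyperv_agent.conf in the
    #               Cloudbase etc directory
    [DEFAULT]
    debug = True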

Regards, Lucian Petrut


Comments

Thanks. Time was of the essence, so I scrapped it and started over, deploying with OVS instead of the Hyper-V agent. When I have time in the next few weeks, I will set up another Hyper-V server and attempt to get it working without OVS, using just the Hyper-V agent.

johnwc ( 2019-01-31 23:37:10 +0300 )

So I now have two test VMs deployed on their private network of 10.0.0.0/24. They can both ping each other, but not the OVS router at 10.0.0.1. Any suggestions on what to look at?

johnwc ( 2019-01-31 23:37:34 +0300 )
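One way to narrow that down, assuming the router is hosted by the L3 agent on the controller (the qrouter- UUID and the VM address below are placeholders):

    # Find the router's network namespace
    ip netns list

    # Confirm the router interface really carries 10.0.0.1
    ip netns exec qrouter-<router-uuid> ip addr

    # Ping a VM from inside the namespace; if this also fails, the
    # problem is between the router port and br-int/the tunnel,
    # not inside the VMs
    ip netns exec qrouter-<router-uuid> ping 10.0.0.5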

I'm not seeing any ports being listened on, on either the controller or the compute node, when I run `ovs-appctl tnl/ports/show`.

johnwc ( 2019-02-01 22:02:46 +0300 )
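An empty `tnl/ports/show` usually means no tunnel ports made it into the datapath yet. A quick sketch of what to inspect, assuming VXLAN tunnels and the stock packstack ml2 paths:

    # Both nodes: br-tun should have a vxlan port whose remote_ip
    # points at the peer's tunnel address
    ovs-vsctl show

    # Controller: local_ip must be an address the Hyper-V node can reach
    grep -E 'local_ip|tunnel_types' /etc/neutron/plugins/ml2/openvswitch_agent.ini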

Which OVS version are you using? Can you double-check that the tunnel endpoints can reach each other?

lpetrut ( 2019-02-03 10:19:18 +0300 )
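Reachability here just means each node's tunnel local_ip answering from the other side (the addresses below are placeholders):

    # From the Hyper-V node
    ping <controller local_ip>

    # From the controller
    ping <hyper-v local_ip>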

The controller is running 2.10.1, the one in the CentOS RPM repo. The version that seems to be installed with openvswitch-hyperv-installer-beta.msi is 2.9.4. On both machines, `ovs-appctl tnl/ports/show` shows nothing listening. Listening ports:

johnwc ( 2019-02-06 22:54:41 +0300 )
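For completeness, a way to read the versions on each side (on Linux, the userspace tools and the kernel datapath module are versioned separately):

    # Controller (CentOS)
    ovs-vsctl --version
    modinfo openvswitch | grep ^version

    # Hyper-V node, from a PowerShell prompt
    ovs-vsctl --version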
