tahder's profile - activity

2017-11-03 13:41:33 +0300 received badge  Notable Question (source)
2017-09-28 11:33:58 +0300 received badge  Popular Question (source)
2017-08-30 04:49:46 +0300 asked a question Ocata or any version with S2D

Ocata or any version with S2D Is there any experience running Ocata or any version of OpenStack with Hyper-V Sto

2017-02-26 15:41:15 +0300 received badge  Notable Question (source)
2017-01-06 18:06:01 +0300 received badge  Famous Question (source)
2016-12-20 22:55:41 +0300 received badge  Popular Question (source)
2016-12-20 00:00:27 +0300 received badge  Enthusiast
2016-12-19 23:53:30 +0300 commented answer Hyper-v Open vSwitch agent not working

Are you referring to the hyperv neutron_ovs_agent.log? Which is this one: http://pastebin.com/tc2BYnw6

And on the controller (openvswitch-agent.log?): http://pastebin.com/9bQjmBEZ

in which I found an error on ofctl.

2016-12-19 23:41:33 +0300 received badge  Commentator
2016-12-19 23:41:33 +0300 commented question external network doesn't work directly connecting

I am running the neutron-ovs-agent, as I stopped the neutron-hyperv-agent. Please see more details of my external network here: http://pastebin.com/J1FLuqUJ

2016-12-19 06:10:36 +0300 received badge  Notable Question (source)
2016-12-16 06:57:53 +0300 asked a question external network doesn't work directly connecting

I have a Hyper-V compute node and a Controller/Network node running on a separate Linux server. This configuration works well with KVM compute nodes (I removed all networks and disabled the KVM compute nodes prior to this test).

For testing, I uploaded a CentOS 6.5 image and made 2 instances: 1 VM directly connected to the external network (aka public network) and 1 VM connected to the private network, which is routed to the external network.

My question now is: why does directly connecting to the external network not work, i.e. why am I not able to ping the NIC connected to the external router (10.20.30.254)?

ip netns 
qrouter-94683fda-8425-44e4-a3b9-011146d3db3c
qdhcp-40c419d7-9b49-4566-a5b6-c01dc270dbdf
qdhcp-8971a629-8326-466a-979a-c39401245043

The IPs are the following (lo omitted):

ip netns exec qdhcp-40c419d7-9b49-4566-a5b6-c01dc270dbdf ifconfig
tap84ddb98a-6f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.0.1.2  netmask 255.255.255.0  broadcast 10.0.1.255
        inet6 fe80::f816:3eff:fe6e:60a1  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:6e:60:a1  txqueuelen 0  (Ethernet)
        RX packets 585  bytes 29880 (29.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 4761 (4.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

And this is the directly connected external network:

ip netns exec qdhcp-8971a629-8326-466a-979a-c39401245043 ifconfig
tap344a41fd-c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.20.30.20  netmask 255.255.255.0  broadcast 10.20.30.255
        inet6 fe80::f816:3eff:fef4:7072  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:f4:70:72  txqueuelen 0  (Ethernet)
        RX packets 1342  bytes 67134 (65.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 138  bytes 12426 (12.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Gateways:

ip netns exec qdhcp-8971a629-8326-466a-979a-c39401245043 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.20.30.254    0.0.0.0         UG    0      0        0 tap344a41fd-c9
10.20.30.0      0.0.0.0         255.255.255.0   U     0      0        0 tap344a41fd-c9

ip netns exec qdhcp-40c419d7-9b49-4566-a5b6-c01dc270dbdf route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.1        0.0.0.0         UG    0      0        0 tap84ddb98a-6f
10.0.1.0        0.0.0.0         255.255.255.0   U     0      0        0 tap84ddb98a-6f
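As an aside, the two routing tables above differ only in their default gateway; if you want to pull that value out programmatically, a small sketch (using the table from the question as sample input) could look like this:

```shell
# Sample `route -n` output, copied from the qdhcp-8971... namespace above
route_output='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.20.30.254    0.0.0.0         UG    0      0        0 tap344a41fd-c9
10.20.30.0      0.0.0.0         255.255.255.0   U     0      0        0 tap344a41fd-c9'

# The default route is the row whose destination is 0.0.0.0;
# field 2 of that row is the gateway address
gateway=$(printf '%s\n' "$route_output" | awk '$1 == "0.0.0.0" { print $2 }')
echo "$gateway"    # 10.20.30.254
```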

Pinging the main gateway (10.20.30.254):

ip netns exec qdhcp-8971a629-8326-466a-979a-c39401245043 ping 10.20.30.254
PING 10.20.30.254 (10.20.30.254) 56(84) bytes of data.
From 10.20.30.20 icmp_seq=1 Destination Host Unreachable
From 10.20.30.20 icmp_seq=2 Destination Host Unreachable

ip netns exec qdhcp-40c419d7-9b49-4566-a5b6-c01dc270dbdf ping 10.20.30.254
PING 10.20.30.254 (10.20.30.254) 56(84) bytes of data.
64 bytes from 10.20.30.254: icmp_seq=1 ttl=127 time=5.77 ms
^C
--- 10.20.30.254 ping statistics ---
1 ...
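To narrow down the "Destination Host Unreachable" result, a possible next step (my own suggestion, not from the original post; the bridge name br-ex is an assumption, adjust to your setup) is to check ARP resolution for the gateway from inside the external DHCP namespace:

```shell
# Namespace and tap names taken from the output above
NS=qdhcp-8971a629-8326-466a-979a-c39401245043

# "Destination Host Unreachable" reported from our own IP usually means
# ARP failed: check the neighbour cache entry for the gateway
ip netns exec "$NS" ip neigh show 10.20.30.254

# Try ARP directly (arping is in the iputils/arping package)
ip netns exec "$NS" arping -c 3 -I tap344a41fd-c9 10.20.30.254

# In another terminal, watch whether the ARP requests actually reach
# the external bridge (br-ex is an assumed name)
tcpdump -ni br-ex arp
```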
2016-12-15 23:19:07 +0300 commented question v-magine installation - network issues

As alex mentioned, can you still not ping 10.0.0.1 from your VMs and from the controller? Is v-magine-data a private network in Hyper-V? Is your VRACK connected to the external network?

2016-12-15 22:58:24 +0300 commented question v-magine installation - network issues

Can you also ping the controller node mgmt-int IP (10.236.245.3 in your case) from Hyper-V? If you can't, the issue is there.

2016-12-15 22:42:21 +0300 commented question v-magine installation - network issues

Can you log in to the controller node and issue these commands: 1) source ~/keystonerc_admin or ~/keystonerc_demo 2) neutron subnet-list 3) neutron subnet-show xxx, where xxx is the name of the network with the issue 4) ifconfig 5) route -n. Maybe we can see it there.
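The checks listed above can be sketched as a short session on the controller node (illustrative only; the subnet name is a placeholder):

```shell
# On the controller node, load the admin (or demo) credentials first
source ~/keystonerc_admin        # or: source ~/keystonerc_demo

# List all subnets, then show the one with the issue
neutron subnet-list
neutron subnet-show <subnet-name-or-id>   # replace with the problem subnet

# Host-side view of interfaces and routes on the controller
ifconfig
route -n
```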

2016-12-15 05:25:21 +0300 answered a question Hyper-v Open vSwitch agent not working

Hi aserdean,

It seems to be working right now. What I did was remove the Cloudbase-related software by uninstalling

HyperVNovaCompute_Newton 14.0.1 and openvswitch hyperv 2.5.1 certified

then stop/remove the running service (neutron-ovs-agent), stop/disable neutron-hyperv-agent, and reboot the Hyper-V host (as this is Microsoft :-) ).

After boot-up, I installed HyperVNovaCompute and openvswitch back, but this time using 2.6.1, followed the steps in the article, added of_interface & ovsdb_interface per your solution, and changed the VM Switch extension (note the "Cloudbase" name):

Get-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"
Enable-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"

As per the logs, on the first run br-int is created but then the neutron-ovs-agent service suddenly stops. The solution is to rerun it, or in my case reboot, after which those bridges/ports (br-tun, patch-int, vxlan-* et al.) were created automatically.

My controller is now happy and able to see the Hyper-V Open vSwitch agent and the Hyper-V nova-compute.

Thanks a heap, MBC

2016-12-14 22:43:20 +0300 received badge  Popular Question (source)
2016-12-14 11:05:27 +0300 commented answer Hyper-v Open vSwitch agent not working

I tried to execute the command, and it is still not shown in neutron.

ovs-vsctl show
42842d57-2c30-4f59-a258-7ab149768d66
    Bridge "br-port1"
        Port "ens5"
            Interface "ens5"
        Port "br-port1"
            Interface "br-port1"
                type: internal
    Bridge br-int

2016-12-14 02:49:33 +0300 commented answer Hyper-v Open vSwitch agent not working

Tried a reboot and got these: http://paste.openstack.org/show/592285/ (please remove the "</p>" that got appended to the URL).

2016-12-14 02:27:21 +0300 commented answer Hyper-v Open vSwitch agent not working

Hello Alin, adding those 2 lines doesn't seem to work. I don't know if this is related to the nova.conf configuration; comparing with a compute node using KVM, there is a slight difference under the [neutron] section: auth_plugin=v3password (in Hyper-V) vs auth_type=v3password (in KVM).

2016-12-14 01:17:04 +0300 commented question V-Magine Setup Error

Are you behind a proxy server for internet access? I assume so from your logs, since you are not able to continue because of the errors you got.

2016-12-14 01:09:35 +0300 commented question v-magine installation - network issues

As per your last config table, you don't have dns_nameservers; please add xxx.xxx.xxx.62 (as this is your gateway), which is what I did in my configs (or use your internet DNS, of course). I assume you created a public network in the same network range as your internet NIC, xxx.xxx.xxx.??/yy.

2016-12-14 00:59:17 +0300 commented question v-magine installation - network issues

First, enable ICMP Echo Request for ICMPv4 and ICMPv6 on the Hyper-V server, as it is disabled by default. Second, check your networks in the dashboard: say you created your private network, then under subnet details the DNS Name Servers should be the IPs of your public network (NIC).

2016-12-14 00:41:43 +0300 received badge  Editor (source)
2016-12-14 00:39:35 +0300 asked a question Hyper-v Open vSwitch agent not working

I followed the article Open vSwitch 2.5 on Hyper-V (OpenStack), but Open vSwitch does not seem to be working: I cannot see it in neutron from the controller node (via the neutron agent-list command), although the Hyper-V compute node itself shows up fine (via the nova service-list command) on the controller node.

By the way, the software I use: OpenStack Newton 1.1 (on the controller node, a separate server). On the Hyper-V server: HyperVNovaCompute_Newton 14.0.1 and openvswitch hyperv 2.5.1 certified (I even tried openvswitch hyperv 2.6.1 certified).

I am stuck at the part of the article that says: "Note: creating a service manually for the OVS agent won't be necessary anymore starting with the next Nova Hyper-V MSI installer version. Here is the content of the neutron_ovs_agent.conf file:". It says the manual service is not necessary anymore, but the file does not seem to be created. I even tried to copy the same configuration as stated; this is my configuration (C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf):

[DEFAULT]
verbose=true
debug=false
control_exchange=neutron
policy_file=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=10.20.30.203
rabbit_port=5672
rabbit_userid=guest
rabbit_password=guest
logdir=C:\OpenStack\Log\
logfile=neutron-ovs-agent.log
[agent]
tunnel_types = vxlan
enable_metrics_collection=false

[SECURITYGROUP]
enable_security_group=false
[ovs]
local_ip = 10.20.30.254
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vxlan
enable_tunneling = true

But it doesn't show the bridges/ports such as br-int, patch-tun, br-tun, patch-int et al. It only shows me these, where ens5 is my network adapter name:

PS C:\Users\Administrator> ovs-vsctl show
42842d57-2c30-4f59-a258-7ab149768d66
    Manager "ptcp:6640:127.0.0.1"
    Bridge "br-port1"
        Port "ens5"
            Interface "ens5"
        Port "br-port1"
            Interface "br-port1"
                type: internal

And in the logs I only got these, in c:\OpenStack\Log\neutron-ovs-agent.log:

2016-12-13 17:40:56.226 2688 INFO neutron.common.config [-] Logging enabled!
2016-12-13 17:40:56.226 2688 INFO neutron.common.config [-] C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\neutron-openvswitch-agent-script.py version 9.0.1.dev14
2016-12-13 17:40:56.242 2688 WARNING oslo_config.cfg [-] Option "rabbit_host" from group "DEFAULT" is deprecated. Use option "rabbit_host" from group "oslo_messaging_rabbit".
2016-12-13 17:40:56.242 2688 WARNING oslo_config.cfg [-] Option "rabbit_host" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-12-13 17:40:56.242 2688 WARNING oslo_config.cfg [-] Option "rabbit_port" from group "DEFAULT" is deprecated. Use option "rabbit_port" from group "oslo_messaging_rabbit".
2016-12-13 17:40:56.242 2688 WARNING oslo_config.cfg [-] Option "rabbit_port" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-12-13 17:40:56.242 2688 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "DEFAULT" is deprecated. Use option "rabbit_password" from group "oslo_messaging_rabbit".
2016-12-13 17:40:56.256 2688 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-12-13 17:40:56.256 2688 WARNING oslo_config.cfg ...