2018-08-30 03:47:51 +0200 | received badge | ● Notable Question (source) |
2018-08-07 13:00:00 +0200 | received badge | ● Famous Question (source) |
2017-06-09 13:32:20 +0200 | received badge | ● Famous Question (source) |
2017-02-26 15:44:53 +0200 | received badge | ● Famous Question (source) |
2017-01-31 17:02:13 +0200 | received badge | ● Popular Question (source) |
2017-01-17 01:49:26 +0200 | received badge | ● Teacher (source) |
2017-01-17 01:49:26 +0200 | received badge | ● Self-Learner (source) |
2017-01-12 17:00:10 +0200 | received badge | ● Notable Question (source) |
2016-12-22 20:03:37 +0200 | commented answer | Cannot launch instance on hyper-v compute node Check your Neutron server status and logs for any errors or Python traces. You may have misconfigured something: if you cannot retrieve objects in the dashboard, it seems Neutron is not working properly. |
2016-12-22 18:51:10 +0200 | commented answer | Cannot launch instance on hyper-v compute node After installing the compute driver you need to configure your Neutron server to work with the native Hyper-V switch. Please look through this article: https://cloudbase.it/neutron-hyper-v-plugin/ You need to configure the Neutron ML2 plugin with the hyperv mechanism driver. |
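A minimal sketch of the ML2 configuration the comment above describes, assuming the standard neutron ml2_conf.ini layout on the server; the type drivers and VLAN range are illustrative assumptions, not values from the poster's environment:

```ini
# ml2_conf.ini on the Neutron server (sketch; values are examples)
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vlan,vxlan
# "hyperv" (from the networking-hyperv package) lets ML2 bind
# ports for compute nodes running the Hyper-V neutron agent
mechanism_drivers = openvswitch,hyperv

[ml2_type_vlan]
# example physical network / VLAN range, adjust to your setup
network_vlan_ranges = physnet1:1000:2000
```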
2016-12-22 17:51:59 +0200 | commented answer | Cannot launch instance on hyper-v compute node Well, you have a Neutron server with Linux bridges, that's okay. How did you prepare the compute nodes to connect to Neutron networks? Did you install the Hyper-V neutron agent? Did you configure the hyperv mechanism driver on the Neutron server for Hyper-V networking? |
2016-12-22 15:50:30 +0200 | answered a question | Cannot launch instance on hyper-v compute node Hi, what kind of networking do you use on the Hyper-V compute nodes? The native Hyper-V switch or the OVS extension? |
2016-12-22 11:53:21 +0200 | commented answer | v-magine installation - network issues Those guys ask only simple questions :) |
2016-12-22 09:40:51 +0200 | asked a question | Exception during cluster live migration, <x_wmi: Generic failure> Hi, When I try to live migrate an instance, it goes to another host successfully, but in nova-compute.log I see: Exception during cluster live migration of instance-0000027d to bc12: <x_wmi: Generic failure>. What can it be? I use the Mitaka release and OVS 2.5.1. |
2016-12-15 10:27:44 +0200 | received badge | ● Popular Question (source) |
2016-12-15 10:26:25 +0200 | commented answer | OVS 2.5.1 Tunnel Problem, No Network inside VM Hi, Alin. Thanks, waiting for your update. |
2016-12-12 15:00:21 +0200 | commented answer | OVS 2.5.1 Tunnel Problem, No Network inside VM Hi, Alin. The solution is to add some configuration options to the agent section:
of_interface = ovs-ofctl
ovsdb_interface = vsctl
There is another problem with a lost interface TAG in OVS after live migration: https://bugs.launchpad.net/compute-hyperv/+bug/1644122 |
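In context, a sketch of how those two options might sit in the Windows neutron-ovs-agent configuration file; the surrounding options are taken from the poster's earlier config dump. Note that in upstream neutron documentation both options are listed under the [ovs] section, so which section your agent build reads them from is an assumption worth checking:

```ini
# neutron-ovs-agent.conf (sketch; only the relevant parts shown)
[agent]
polling_interval = 2
tunnel_types = vxlan
l2_population = True
# fall back to the ovs-ofctl / ovs-vsctl CLI tools instead of the
# native OpenFlow / OVSDB interfaces
of_interface = ovs-ofctl
ovsdb_interface = vsctl
```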
2016-12-10 22:44:21 +0200 | answered a question | OVS 2.5.1 Tunnel Problem, No Network inside VM Problem closed. Found solution. |
2016-12-10 21:16:26 +0200 | asked a question | OVS 2.5.1 Tunnel Problem, No Network inside VM Hello, After updating OVS 2.5.0.1 to 2.5.1, vxlan tunnels are no longer created automatically within br-tun. To check, we can remove all VMs from the host so OVS has no tunnels to the other agents. Then we create a new VM and OVS reports "failed to query port : Invalid argument" when it tries to get tunnels. I'm using OVS 2.5 on the neutron server.

2016-12-10T19:07:48.249Z|00106|dpif|WARN|system@ovs-system: failed to query port : Invalid argument
2016-12-10T19:07:48.877Z|00107|bridge|INFO|bridge br-int: added interface 25b5cc64-b59f-4624-8bdf-def533e354f6 on port 3

I'd be more than happy if you can help me with this issue.

neutron-ovs-agent.conf:

[DEFAULT]
control_exchange = neutron
policy_file = C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
verbose = true
log_dir = C:\OpenStack\Log\
log_file = neutron-ovs-agent.log
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = db-server
rabbit_port = 5672
rabbit_userid = nova
rabbit_password = compaq

[ovs]
local_ip = 172.17.4.111
bridge_mappings = isp1:br-ex
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vlan,vxlan
enable_tunneling = true

[agent]
polling_interval = 2
tunnel_types = vxlan
l2_population = True

[SECURITYGROUP]
enable_security_group = False

ovs-vsctl show:

13128f31-ae60-47b0-b614-6af878234f81
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port vxlan
            tag: 4
            Interface vxlan
                type: internal
        Port "port2vlans"
            Interface "port2vlans"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "25b5cc64-b59f-4624-8bdf-def533e354f6"
            tag: 1
            Interface "25b5cc64-b59f-4624-8bdf-def533e354f6"
        Port br-int
            Interface br-int
                type: internal |
2016-12-06 19:33:33 +0200 | received badge | ● Notable Question (source) |
2016-11-24 14:19:20 +0200 | received badge | ● Popular Question (source) |
2016-11-22 21:53:05 +0200 | received badge | ● Enthusiast |
2016-11-21 11:02:02 +0200 | received badge | ● Editor (source) |
2016-11-21 10:55:53 +0200 | asked a question | Hyper-V Live Migration doesn't work Hi Team, I'm using the Mitaka cluster driver with OVS 2.5. When I perform a live migration, the instance successfully moves to another host but loses its network connection. There was a bug in Liberty where the OVS port was not migrated to the target host; now the OVS port does appear on the target host, but it is added with errors.

vswitchd.log (migration time): https://paste.ubuntu.com/23510786/
nova-compute.log (migration time): https://paste.ubuntu.com/23510782/
nova.conf: https://paste.ubuntu.com/23510815/
neutron-ovs-agent.conf: https://paste.ubuntu.com/23510816/

If I manually execute the command logged by nova (ovs-vsctl --timeout=120 -- --if-exists del-port 4b26fd63-e779-49c9-b419-ac2c53ef8c9a ....), network connectivity comes back even without restarting the services. I've also logged OVS debug rpc (another migration) and it contains:

"protocols":"OpenFlow10","datapath_version":"<unknown>"}
2016-11-21T13:10:55.017Z|04642|netdev_windows|DBG|construct device 4b26fd63-e779-49c9-b419-ac2c53ef8c9a, ovs_type: 0.
2016-11-21T13:10:55.017Z|04643|dpif|WARN|system@ovs-system: failed to add 4b26fd63-e779-49c9-b419-ac2c53ef8c9a as port: Invalid argument

Another try:

2016-11-21T12:48:21.789Z|00444|dpif|DBG|system@ovs-system: device br-tun is on port 5
2016-11-21T12:48:21.789Z|00445|netlink_socket|DBG|received NAK error=0 (No such device)
2016-11-21T12:48:21.789Z|00446|netdev_windows|DBG|construct device 4b26fd63-e779-49c9-b419-ac2c53ef8c9a, ovs_type: 0.
2016-11-21T12:48:21.789Z|00447|netlink_socket|DBG|received NAK error=0 (No such device)
2016-11-21T12:48:21.789Z|00448|netlink_socket|DBG|received NAK error=0 (Invalid argument)
2016-11-21T12:48:21.789Z|00449|dpif|WARN|system@ovs-system: failed to add 4b26fd63-e779-49c9-b419-ac2c53ef8c9a as port: Invalid argument

https://paste.ubuntu.com/23511010/

Please help investigate this, as our team is implementing the Cloudbase solution. I'm ready to provide all logs. |
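The manual workaround described above (deleting and re-adding the stale OVS port) could be sketched as below. The port UUID is the one from this question's logs; the bridge name (br-int) and VLAN tag are illustrative assumptions, since the full command in nova's log is truncated here:

```
# Sketch of the del-port/add-port workaround. Bridge and tag are
# assumptions; check your own setup (e.g. with "ovs-vsctl show").
PORT=4b26fd63-e779-49c9-b419-ac2c53ef8c9a

# remove the half-created port left behind by the failed add
ovs-vsctl --timeout=120 -- --if-exists del-port "$PORT"

# re-add it to the integration bridge and restore its VLAN tag
ovs-vsctl --timeout=120 -- add-port br-int "$PORT" -- set Port "$PORT" tag=1
```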
2016-11-17 10:25:25 +0200 | received badge | ● Famous Question (source) |
2016-11-17 10:25:25 +0200 | received badge | ● Notable Question (source) |
2016-11-17 10:25:25 +0200 | received badge | ● Popular Question (source) |
2016-11-09 17:53:40 +0200 | asked a question | FreeRDP WinHttpSendRequest Hi Team, I'm trying to use your console in Mitaka. When I open it through the dashboard, I see a blank screen and the error: "Instance seems to be offline". But I CAN get access if I connect manually to the FreeRDP instance at http://ip:8000, setting vm_id and port 2179. The Event Viewer shows "WinHttpSendRequest" for wsgate. I use the Keystone API v2.0.

[global]
debug = true
redirect = false
port = 8000
bindaddr = 172.17.0.111

[http]
documentroot = C:\Program Files (x86)\Cloudbase Solutions\FreeRDP-WebConnect\WebRoot\

[ssl]
port = 8443
bindaddr = 172.17.0.111
certfile = C:\Program Files (x86)\Cloudbase Solutions\FreeRDP-WebConnect\etc\server.cer

[rdpoverride]
nofullwindowdrag = true

[openstack]
authurl = http://keystone-server:5000/v2.0
tenantname = services
username = neutron
password = password

[hyperv]
hostusername = ...
hostpassword = ... |
2016-11-08 12:16:26 +0200 | received badge | ● Notable Question (source) |
2016-09-09 13:47:51 +0200 | received badge | ● Popular Question (source) |
2016-09-05 12:27:47 +0200 | asked a question | nova.virt.hyperv.cluster.driver.HyperVClusterDriver Is there a nova.virt.hyperv.cluster.driver.HyperVClusterDriver release for OpenStack Liberty? Is it possible to adapt the Mitaka driver for the Liberty release? |