2018-11-20 19:01:54 +0300 | received badge | ● Famous Question (source) |
2018-11-20 19:01:54 +0300 | received badge | ● Notable Question (source) |
2018-11-20 19:01:54 +0300 | received badge | ● Popular Question (source) |
2016-11-12 04:11:18 +0300 | received badge | ● Popular Question (source) |
2016-11-12 04:11:18 +0300 | received badge | ● Famous Question (source) |
2016-11-12 04:11:18 +0300 | received badge | ● Notable Question (source) |
2016-09-05 12:42:04 +0300 | received badge | ● Famous Question (source) |
2016-09-05 12:42:04 +0300 | received badge | ● Notable Question (source) |
2015-08-25 13:00:00 +0300 | received badge | ● Famous Question (source) |
2015-08-03 15:57:56 +0300 | received badge | ● Taxonomist |
2014-12-18 10:06:51 +0300 | received badge | ● Famous Question (source) |
2014-12-10 11:53:54 +0300 | asked a question | ambiguousEndpoints Exception during attach volume in multi region
Hi, it looks like if multiple cinder endpoints are defined in keystone, the nova service on Hyper-V is not able to process the service catalog correctly. I'm getting the following exception:

    2014-12-09 03:01:36.956 4212 ERROR nova.compute.manager [req-80d69161-4b80-4f25-ada5-de6febd308e8 3ebc4595f1ce4fcaa7327b06ee32dae6 dad897c5ed9d4e1892eb9fc1649678c9] [instance: 6730738c-0416-4c0f-b816-3caab846b983] Failed to attach 5685c25a-3129-410b-86e3-0027f416bcd1 at /dev/sdb
    2014-12-09 03:01:36.956 4212 TRACE nova.compute.manager [instance: 6730738c-0416-4c0f-b816-3caab846b983] Traceback (most recent call last):
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\compute\manager.py", line 4214, in _attach_volume
        do_check_attach=False, do_driver_attach=True)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\virt\block_device.py", line 44, in wrapped
        ret_val = method(obj, context, *args, **kwargs)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\virt\block_device.py", line 215, in attach
        volume = volume_api.get(context, self.volume_id)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\volume\cinder.py", line 190, in wrapper
        res = method(self, ctx, volume_id, *args, **kwargs)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\volume\cinder.py", line 223, in get
        item = cinderclient(context).volumes.get(volume_id)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\nova\volume\cinder.py", line 96, in cinderclient
        endpoint_type=endpoint_type)
      File "C:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V Agent\Python27\lib\site-packages\cinderclient\service_catalog.py", line 86, in url_for
        raise cinderclient.exceptions.AmbiguousEndpoints(endpoints=eplist)
    AmbiguousEndpoints: AmbiguousEndpoints: [{u'adminURL': u'http://vmware-region-india3.82.customer.ibm.com:8776/v1/dad897c5ed9d4e1892eb9fc1649678c9', u'region': u'RegionVMware', 'serviceName': None, u'internalURL': u'http://vmware-region-india3.82.customer.ibm.com:8776/v1/dad897c5ed9d4e1892eb9fc1649678c9', u'publicURL': u'http://vmware-region-india3 ... |
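The failure mode in the trace above can be illustrated with a minimal sketch. This is hypothetical code, not the actual nova/cinderclient implementation: a catalog lookup that matches two `volume` endpoints (one per region) and has no region filter has no way to choose, so it raises. Filtering by region restores a unique match. All names here (`url_for`, the catalog layout, the example URLs) are illustrative assumptions.

```python
# Hypothetical sketch of service-catalog endpoint selection, illustrating
# why an unfiltered lookup fails when two regions expose a "volume" service.

class AmbiguousEndpoints(Exception):
    pass

def url_for(catalog, service_type, region=None, endpoint_type="publicURL"):
    """Return exactly one endpoint URL for a service from a keystone catalog."""
    matches = [ep
               for svc in catalog if svc["type"] == service_type
               for ep in svc["endpoints"]
               if region is None or ep["region"] == region]
    if not matches:
        raise LookupError("no endpoint for %s" % service_type)
    if len(matches) > 1:
        # Two cinder regions and no region filter: the trace's failure mode.
        raise AmbiguousEndpoints(matches)
    return matches[0][endpoint_type]

catalog = [{"type": "volume", "endpoints": [
    {"region": "RegionVMware", "publicURL": "http://vmware:8776/v1/tenant"},
    {"region": "RegionHyperV", "publicURL": "http://hyperv:8776/v1/tenant"},
]}]
```

With this sketch, `url_for(catalog, "volume")` raises `AmbiguousEndpoints`, while `url_for(catalog, "volume", region="RegionHyperV")` returns the Hyper-V URL. In a multi-region deployment the fix is accordingly to tell each compute node which region's cinder endpoint it should use, if the installed nova version exposes such an option.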
2014-12-08 14:20:22 +0300 | received badge | ● Popular Question (source) |
2014-12-02 17:24:19 +0300 | asked a question | discovery of existing vms Hi, I have a question regarding the discovery of existing VMs when installing the Compute service on a Hyper-V node. I observed that existing VMs do not show up in OpenStack. Is there a way to make those instances get managed by OpenStack, or am I missing something? |
2014-11-27 15:28:57 +0300 | commented answer | shutdown vm from Hyper-V => state in nova Thanks for your answer. This makes sense to me. I found out that there is a setting in nova called sync_power_state_interval. If you set this to -1, nova does not sync the power states, leaving the state on each side independent of the other. |
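The setting mentioned in the comment above can be sketched as a nova.conf fragment; the `[DEFAULT]` placement is an assumption for this release and should be checked against the installed version's configuration reference:

```ini
# nova.conf on the compute node: disable the periodic power-state sync task
# so nova's recorded state and the hypervisor's actual state stay independent.
[DEFAULT]
# -1 disables the periodic sync (the option referenced in the comment above)
sync_power_state_interval = -1
```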
2014-11-26 17:21:59 +0300 | commented question | shutdown vm from Hyper-V => state in nova Update: I found that the state gets updated in such a way that if the instance is marked as shutoff in nova but running in Hyper-V, nova will initiate a shutdown and the instance is powered off in Hyper-V. Can this behaviour be changed so that it works the other way round, i.e. nova marks the instance as running? |
2014-11-26 12:17:30 +0300 | asked a question | shutdown vm from Hyper-V => state in nova Hi, I've set up Hyper-V compute services and attached them to a controller infrastructure. I can boot images successfully, and I can start, stop and resize VMs. I now shut down an instance from the console in Hyper-V; however, the instance is still marked as active in nova. Am I missing some configuration to make nova pick up state changes triggered from the hypervisor? |
2014-11-24 18:37:00 +0300 | received badge | ● Notable Question (source) |
2014-11-19 00:46:30 +0300 | received badge | ● Notable Question (source) |
2014-11-08 04:06:22 +0300 | received badge | ● Popular Question (source) |
2014-11-07 22:12:47 +0300 | received badge | ● Popular Question (source) |
2014-10-17 14:52:09 +0300 | asked a question | icehouse hyper-v neutron flat network not working
Hi, I'm trying to get a Hyper-V region running on an Icehouse-based OpenStack installation. I successfully installed Hyper-V Server 2012 R2 and installed the Hyper-V compute agent using HyperVNovaCompute_Icehouse_2014_1_2.msi. The hypervisor was immediately visible in Horizon, and I was able to register an image in glance using glance image-create (with parameters --hypervisor_type=hyperv --container-format bare --disk-format vhd).

However, now I'm struggling with networking. My aim is to get flat networking with neutron running first. The documentation (http://www.cloudbase.it/quantum-hyper-v-plugin/) seems to be outdated for Icehouse, but I found some hints in http://ask.cloudbase.it/question/61/vms-are-not-getting-network-with-hyper-v-openstack/

I configured the ml2 plugin to use the hyperv mechanism in /etc/neutron/plugins/ml2/ml2_conf.ini, basically:

    tenant_network_types = flat,vlan,vxlan
    mechanism_drivers = hyperv

I added tenant_network_type = flat to /etc/neutron/plugins/hyperv/hyperv_neutron_plugin.ini.

I have a separate neutron network node and a region server, so the following services are running:

region server:
- neutron-server
- neutron-network-node
- neutron-dhcp-agent
- neutron-l3-agent
- neutron-linuxbridge-agent
- neutron-metadata-agent

Hyper-V node:
- HyperV agent

    [root@cil017129036 ~]# neutron agent-list
    +--------------------------------------+--------------------+--------------+-------+----------------+
    | id                                   | agent_type         | host         | alive | admin_state_up |
    +--------------------------------------+--------------------+--------------+-------+----------------+
    | cd875bc6-2398-4a1e-bdb3-d63db7df93ec | Linux bridge agent | cil017129037 | :-)   | True           |
    | 097c9c8e-d16b-43d4-9e4a-a3f87bb1bbf6 | L3 agent           | cil017129037 | :-)   | True           |
    | 4ee13247-74ff-4100-a3ac-3927bfcabd2d | DHCP agent         | cil017129037 | :-)   | True           |
    | c9520866-6f6e-4676-b02a-fb7c2d2b7f2c | Metadata agent     | cil017129037 | :-)   | True           |
    | 024452ed-bcf5-44a9-b598-642ded942e76 | Linux bridge agent | cil017129038 | :-)   | True           |
    | c0e21a49-86c6-4936-8c55-981d2cbd7c7a | HyperV agent       | cil017129063 | :-)   | True           |
    +--------------------------------------+--------------------+--------------+-------+----------------+

My problem is that the VMs do not get their network set correctly, neither with nor without DHCP. My question is: is the setup correct like this? |
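For comparison, here is a sketch of how the flat-network pieces could line up across the two sides. This is not taken from the setup above: the physical network name `physnet1` and the vswitch name `external` are assumptions, and the exact option names should be verified against the Icehouse ML2 and Hyper-V agent configuration references.

```ini
# Sketch only -- names "physnet1" and "external" are assumptions.

# /etc/neutron/plugins/ml2/ml2_conf.ini (neutron server side):
[ml2]
type_drivers = flat,vlan
tenant_network_types = flat
# one mechanism driver per agent type present in the deployment
mechanism_drivers = linuxbridge,hyperv

[ml2_type_flat]
flat_networks = physnet1

# Hyper-V agent side: map the same physical network name to the vswitch
# the agent plugs VMs into, e.g.:
#   physical_network_vswitch_mappings = physnet1:external
```

The key point such a sketch illustrates is that the same physical network name has to appear consistently in the type driver section, in the network created via neutron, and in each agent's interface/vswitch mapping.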
2014-10-14 15:05:11 +0300 | commented answer | SCVMM support Thanks for the quick and detailed answer. I sent a request to get more detailed information. |
2014-10-13 14:29:27 +0300 | asked a question | SCVMM support Is it possible to use SCVMM in OpenStack instead of having each Hyper-V server directly installed as nova-compute? This would model Hyper-V more in the way that the OpenStack VMware driver does. |