New Question

Resizing Gen2 Guest EFI [closed]

asked 2018-02-26 05:13:44 +0300

Nebukazar gravatar image


We are currently running a Hyper-V cluster with Storage Spaces Direct across 3 nodes. They use the nova-ocata compute driver from Cloudbase. We have OpenStack Ocata deployed as a single node / single VM through RDO. The nodes are configured to use differencing disks when creating instances, and the images used within Glance carry the Gen2 property.

So far, instances are running just fine.

We've been trying to resize one of our CentOS 7 guests (still Gen2): we applied a new flavor with a larger disk (100 GB vs. 50 GB) and initiated the resize process (which took quite a while to complete, but that is fine).

What we stumbled across, though, is that once the resize process completed, the instance was no longer able to boot due to missing EFI.

We tried several things within the guest itself, to no avail. What fixed the issue was manually re-creating a new differencing disk whose parent disk was the same as previously used. We booted once into that new disk and got shim.efi (re)installed. Then we simply re-attached the resized differencing disk and were able to boot the instance properly.

Is there any way to avoid this when resizing a Gen2 instance?

Please let me know if you guys need further information.

Thanks! -Luc


Closed for the following reason: the question is answered, right answer was accepted by Nebukazar
close date 2018-02-27 22:26:10.315384

1 answer


answered 2018-02-26 12:18:04 +0300

Claudiu Belu gravatar image

updated 2018-02-27 11:27:15 +0300

Hello Luc,

Just a small bit of information about how cold resize / migration generally works: pretty much all of the Nova drivers destroy the VM on the source, copy the disks to the destination (FYI, this step takes the most time, especially if the disk is quite big), and recreate the VM at the destination with the copied disks. I think this might be part of the problem you're experiencing, but it shouldn't have been one in the first place. Anyway, we've changed the behaviour of cold migration / resize in Pike: the VMs are now exported / imported instead, keeping all of the VM configuration.

Now, I have some questions about the VM. When you say that the instance wasn't able to boot due to missing EFI, do you mean that the *.efi file is missing from the guest's /boot folder? I would assume not. And on the VM on which you've (re)installed the shim.efi file, can you run this command?

Get-VM vm_name | Get-VMFirmware | fl *

Finally, how did you prepare the CentOS image before uploading it to Glance (especially the EFI part)? For example, for Ubuntu images, you have to rename the EFI file before the image is usable in any new VMs. [1]
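For context, the Ubuntu preparation referenced in [1] generally amounts to copying the distro's loaders into the firmware's fallback directory, EFI/BOOT, and renaming shim to bootx64.efi, the default file name UEFI firmware looks for when no BootOrder entry exists. A minimal sketch against a throwaway mock directory tree (file names are illustrative; on a real guest this would happen under /boot/efi/EFI, and you should back up first):

```shell
#!/bin/sh
set -eu

# Mock EFI system partition layout (illustrative only; a real guest
# would have this under /boot/efi/EFI).
esp=$(mktemp -d)/EFI
mkdir -p "$esp/ubuntu" "$esp/BOOT"
printf 'shim' > "$esp/ubuntu/shimx64.efi"
printf 'grub' > "$esp/ubuntu/grubx64.efi"

# Copy the distro's loaders into the firmware fallback directory,
# then rename shim to the default loader name the firmware probes for.
cp "$esp/ubuntu/"* "$esp/BOOT/"
mv "$esp/BOOT/shimx64.efi" "$esp/BOOT/bootx64.efi"

ls "$esp/BOOT"
```

With this in place, the firmware can boot the disk through the fallback path even when the VM's NVRAM BootOrder entry is lost on recreate.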


Best regards,

Claudiu Belu


So, looking at your paste, it is what I actually thought: your VM's BootOrder contains an EFI file entry (the 1st item in the list). When the VM is recreated on Ocata or older, that entry is lost. As I've mentioned, this behaviour was changed in Pike, so using Pike would solve your issue on cold resize / migration, but it would still affect you when using operations which imply destroying / recreating the VMs (e.g. shelve / unshelve).

The best way to solve this issue is to properly prepare the images. The BootOrder shouldn't have to contain the EFI file in order for the VM to boot; template images should be prepared in such a way that they don't rely on it (see the Ubuntu example). From your comment, it seems that you didn't change / update the EFI in any way.

Doing a quick search [2], CentOS requires a small change when it comes to the EFI file:

The proper way, as I tested it (run at your own risk, take backups first; I am not responsible for this):

In the Linux VM:

> cd /boot/efi/EFI
> cp ./{your_linux_type}/* ./BOOT/ (for example, on CentOS this would be: cp ./centos/* ./BOOT/)
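To illustrate what the two commands above do, here is the same copy run against a throwaway mock of the EFI partition layout (directory and file names mirror a CentOS guest's /boot/efi/EFI but are illustrative only; on a real guest, back up first):

```shell
#!/bin/sh
set -eu

# Mock of /boot/efi/EFI on a CentOS guest (illustrative file names).
efi=$(mktemp -d)
mkdir -p "$efi/centos" "$efi/BOOT"
printf 'shim' > "$efi/centos/shimx64.efi"
printf 'grub' > "$efi/centos/grubx64.efi"

# The fix: copy the distro's loaders into the firmware fallback
# directory so the VM boots even without a distro BootOrder entry.
cd "$efi"
cp centos/* BOOT/

ls BOOT
```

Note the relative paths: the copy happens inside /boot/efi/EFI itself, not from the filesystem root.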

Let us know how it works out for you.




Hi Claudiu, thanks for replying back. When we booted the instance in rescue mode, we were able to see the different EFI files within the instance. We still re-installed the EFI RPMs just in case. As for Get-VM's output, please have a look at

Nebukazar gravatar imageNebukazar ( 2018-02-27 04:55:30 +0300 )edit

We created the image using the following process:
- created a VM within Hyper-V (Gen2, no Secure Boot, 1x VHDX drive);
- installed CentOS 7 + cloud-init, did some tweaks within the VM;
- exported the VM's VHDX through Glance with Hyper-V related properties: [...] --property hw_machine_type=hyperv-gen
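For reference, that upload step typically looks something like the sketch below. The property value `hyperv-gen2` and the exact flags are assumptions based on the Hyper-V driver's image metadata conventions (the comment in the thread is truncated); adjust the disk format, file name, and properties to your deployment before use.

```shell
# Illustrative only: upload a Gen2 VHDX to Glance for the Hyper-V driver.
# hw_machine_type=hyperv-gen2 is the assumed Generation 2 marker; the
# disk-format value depends on your Glance release (vhd vs. vhdx).
openstack image create \
  --disk-format vhd \
  --container-format bare \
  --property hw_machine_type=hyperv-gen2 \
  --file centos7-gen2.vhdx \
  centos7-gen2
```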

Nebukazar gravatar imageNebukazar ( 2018-02-27 04:55:37 +0300 )edit

Deploying instances using the image above works fine. What I find odd, though, is that we created a new instance and then did an upgrade (the same way we did previously), and this time it went through? We will keep an eye on it and see if it is a recurring issue. Thanks! -Luc

Nebukazar gravatar imageNebukazar ( 2018-02-27 04:55:44 +0300 )edit

I've updated my reply to respond to your comments.

Claudiu Belu gravatar imageClaudiu Belu ( 2018-02-27 11:27:35 +0300 )edit

Looks like that was it! Thanks a lot; this is useful information indeed. Cheers,

Nebukazar gravatar imageNebukazar ( 2018-02-27 22:24:39 +0300 )edit
