Hi,

TL;DR: one workaround would be to "manually" mount the SMB share on your Glance host and configure Glance to use the "file" store on top of it.
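
For concreteness, here's a rough sketch of that workaround; the share path (//smb-server/glance), the credentials file, and the mount point are placeholders for whatever your environment uses, and the option names match the classic glance_store layout, so double-check them against your Glance version:

    # Mount the SMB share on the Glance host (mount.cifs handles SMB 3 via vers=3.0)
    sudo mkdir -p /var/lib/glance/images
    sudo mount -t cifs -o vers=3.0,credentials=/etc/smb-credentials \
        //smb-server/glance /var/lib/glance/images

    # glance-api.conf: point the "file" store at the mounted share
    [glance_store]
    stores = file
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images

You'd also want an /etc/fstab entry (or a systemd mount unit) so the mount survives reboots.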

The os-brick Linux RemoteFS connector does support SMB; it's just advertised as CIFS [1] (mount.cifs can be used for SMB 3 shares, as in the example above). Even if this small mismatch gets fixed and the volume becomes accessible, there would be some other issues:

  • the Cinder SMB driver stores the volumes as VHD/VHDX images, while the Glance "Cinder" store driver can only handle raw disk devices (see the qemu-img sketch after this list)

  • I think there would be some concurrency issues, as the Glance "Cinder" store basically mounts the Cinder volumes locally (on the Glance host) whenever an image is accessed. This would become especially problematic when scaling out the Glance service.
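
To make the first bullet concrete: what actually sits on the share are VHD/VHDX files, not raw block contents, which you can verify with qemu-img (the paths and volume name below are made up):

    # The "volume" on the share is a VHDX file, not a raw device
    qemu-img info /mnt/smb-share/volume-example.vhdx

    # A raw copy would be needed before anything expecting a raw disk could read it
    qemu-img convert -f vhdx -O raw \
        /mnt/smb-share/volume-example.vhdx /tmp/volume-example.raw

qemu-img reports the image format in its output, so this is a quick way to confirm the mismatch.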

The Glance "Cinder" store isn't considered production ready, so we haven't invested too much time in it. Since you're interested in a hyper-converged setup, I'd recommend the workaround mentioned above.

Side note: we're mostly using the Windows iSCSI driver for testing purposes.

[1] https://github.com/openstack/os-brick/blob/d9ac24d0d2a831ecb1e87e994da088e56bb0f53f/os_brick/remotefs/remotefs.py#L42

Regards,

Lucian Petrut