[v3,0/4] virtio/block: handle zoned backing devices

Message ID 20190723221940.25585-1-dmitry.fomichev@wdc.com

Message

Dmitry Fomichev July 23, 2019, 10:19 p.m. UTC
Currently, attaching zoned block devices (i.e., storage devices
compliant with the ZAC/ZBC standards) using several virtio methods
doesn't work properly: zoned devices appear as regular block devices
to the guest. This may cause unexpected I/O errors and, potentially,
some data corruption.

To be more precise, attaching a zoned device via virtio-blk-pci,
virtio-scsi-pci/scsi-disk or virtio-scsi-pci/scsi-hd exhibits the
behavior described above. The virtio-scsi-pci/scsi-block method works
with a recent patch. The virtio-scsi-pci/scsi-generic method also
appears to handle zoned devices without problems.

This patch set adds code to check whether the backing device being
opened is a zoned Host Managed device. If it is, the patch set
prohibits attaching such a device in all use cases lacking proper
zoned support.

Host Aware zoned block devices are designed to work as regular block
devices in a guest system that does not support ZBD. Therefore, this
patch set doesn't prohibit attachment of Host Aware devices.

Considering that there are still a couple of working ways to attach
a ZBD, this patch set provides a reasonable short-term solution for
this problem. What about the long term?

It appears to be beneficial to add proper ZBD support to virtio-blk.
In order to support this use case properly, some virtio-blk protocol
changes will be necessary. They are needed to let the host code
propagate the ZBD properties that the virtio guest driver requires to
configure the guest block device as a ZBD, such as the zoned device
model, the zone size and the total number of zones. Further, support
needs to be added for the REPORT ZONES command as well as for the zone
operations OPEN ZONE, CLOSE ZONE, FINISH ZONE and RESET ZONE.
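
Just to give an idea of the scale of the change, the virtio-blk
configuration space could be extended roughly along the lines of the
sketch below. This layout is purely hypothetical - the real one would
be whatever the virtio TC approves, and every field name here is
invented for the example:

    #include <stdint.h>

    /* Hypothetical config extension, not from any spec or from this
     * series; for illustration only. */
    struct virtio_blk_zoned_config {
        uint32_t zone_sectors; /* zone size in 512-byte sectors */
        uint32_t nr_zones;     /* total number of zones */
        uint8_t  model;        /* 0 = none, 1 = host-aware,
                                  2 = host-managed */
        uint8_t  reserved[3];
    };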

These additions to the protocol are relatively straightforward, but
they need to be approved by the virtio TC and the whole process may
take some time.

ZBD support for virtio-scsi-pci/scsi-disk and virtio-scsi-pci/scsi-hd
seems less necessary: users will be expected to attach zoned block
devices via virtio-scsi-pci/scsi-block instead.

This patch set contains some Linux-specific code that is necessary to
obtain the zoned block device model value from Linux sysfs.
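
For reference, the model in question is exposed by the kernel in the
"zoned" queue attribute, /sys/block/<disk>/queue/zoned, which reads
"none", "host-aware" or "host-managed" on kernels with ZBD support.
A minimal userspace sketch of the same lookup (partition-to-disk
resolution and detailed error reporting omitted):

    #include <stdio.h>
    #include <string.h>

    /* Fill "model" with the zoned model string of a whole-disk device
     * such as "sdb". Returns 0 on success, -1 on failure (for instance
     * an old kernel that does not expose the attribute). */
    static int get_zoned_model(const char *disk, char *model, size_t len)
    {
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/zoned", disk);
        f = fopen(path, "r");
        if (!f) {
            return -1;
        }
        if (!fgets(model, (int)len, f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        model[strcspn(model, "\n")] = '\0'; /* strip trailing newline */
        return 0;
    }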

History:

v1 -> v2:
- rework the code to be permission-based
- always allow Host Aware devices to be attached
- add fix for Host Aware attachments aka RCAP output snoop

v2 -> v3:
- drop the patch for RCAP output snoop - merged separately


Dmitry Fomichev (4):
  block: Add zoned device model property
  raw: Recognize zoned backing devices
  block/ide/scsi: Set BLK_PERM_SUPPORT_ZONED
  raw: Don't open ZBDs if backend can't handle them

 block.c                   | 19 +++++++++
 block/file-posix.c        | 88 +++++++++++++++++++++++++++++++++------
 block/raw-format.c        |  8 ++++
 hw/block/block.c          |  8 +++-
 hw/block/fdc.c            |  4 +-
 hw/block/nvme.c           |  2 +-
 hw/block/virtio-blk.c     |  2 +-
 hw/block/xen-block.c      |  2 +-
 hw/ide/qdev.c             |  2 +-
 hw/scsi/scsi-disk.c       | 13 +++---
 hw/scsi/scsi-generic.c    |  2 +-
 hw/usb/dev-storage.c      |  2 +-
 include/block/block.h     | 21 +++++++++-
 include/block/block_int.h |  4 ++
 include/hw/block/block.h  |  3 +-
 15 files changed, 150 insertions(+), 30 deletions(-)

Comments

John Snow July 25, 2019, 5:58 p.m. UTC | #1
On 7/23/19 6:19 PM, Dmitry Fomichev wrote:
> Currently, attaching zoned block devices (i.e., storage devices
> compliant with the ZAC/ZBC standards) using several virtio methods
> doesn't work properly: zoned devices appear as regular block devices
> to the guest. This may cause unexpected I/O errors and, potentially,
> some data corruption.
> 

Hi, I'm quite uninitiated here: what's a zoned block device? What are
the ZAC/ZBC standards?

I've found this:
https://www.snia.org/sites/default/files/SDC/2016/presentations/smr/DamienLeMoal_ZBC-ZAC_Linux.pdf

It looks like ZAC/ZBC are new commands -- what happens if we just don't
use them, exactly?

> To be more precise, attaching a zoned device via virtio-blk-pci,
> virtio-scsi-pci/scsi-disk or virtio-scsi-pci/scsi-hd exhibits the
> behavior described above. The virtio-scsi-pci/scsi-block method works
> with a recent patch. The virtio-scsi-pci/scsi-generic method also
> appears to handle zoned devices without problems.
> 

What exactly fails, out of curiosity?

Naively, it seems strange to me that you'd have something that presents
itself as a block device but can't be used like one. Usually I expect to
see new features / types of devices used inefficiently when we aren't
aware of a special attribute/property they have, but not to create data
corruption.

The only reason I ask is because it seems odd that you need to add a
special flag to e.g. legacy IDE devices that explicitly says they don't
support zoned block devices -- instead of adding flags to virtio devices
that say they explicitly do support that feature set.

--js

Dmitry Fomichev July 26, 2019, 11:42 p.m. UTC | #2
John, please see inline...

Regards,
Dmitry

On Thu, 2019-07-25 at 13:58 -0400, John Snow wrote:
> 
> On 7/23/19 6:19 PM, Dmitry Fomichev wrote:
> > Currently, attaching zoned block devices (i.e., storage devices
> > compliant with the ZAC/ZBC standards) using several virtio methods
> > doesn't work properly: zoned devices appear as regular block devices
> > to the guest. This may cause unexpected I/O errors and, potentially,
> > some data corruption.
> > 
> 
> Hi, I'm quite uninitiated here: what's a zoned block device? What are
> the ZAC/ZBC standards?
Zoned block devices (ZBDs) are HDDs that use SMR (shingled magnetic
recording). This type of recording, if applied to the entire drive, would
only allow the drive to be written sequentially. To make such devices more
practical, the entire LBA range of the drive is divided into zones. All
writes within a particular zone must be sequential, but different zones
can be written concurrently and in any order. This sounds like a lot of
hassle, but in return SMR can achieve up to 20% better areal data density
compared to the most common PMR recording.

The same zoned model is used in the up-and-coming NVMe ZNS standard, even
though the reason for using it in ZNS is different from SMR HDDs: easier
flash erase block management.

ZBC is an INCITS T10 (SCSI) standard and ZAC is the corresponding T13 (ATA)
standard.

The lack of limelight for these standards is explained by the fact that
these devices are mostly used by cloud infrastructure providers for "cold"
data storage, a purely enterprise application. Currently, both WDC and
Seagate produce SMR drives in significant quantities and Toshiba has
announced support for ZBDs in their future products.

> > 
> I've found this:
> https://www.snia.org/sites/default/files/SDC/2016/presentations/smr/DamienLeMoal_ZBC-ZAC_Linux.pdf
> 
AFAIK, the most useful collection of public resources about zoned block
devices is this website -
http://zonedstorage.io
The site is maintained by our group at WDC (shameless plug here :) ).
BTW, here is the page containing the links to T10/T13 standards
(the access might be restricted for non-members of T10/T13 committees) -
http://zonedstorage.io/introduction/smr/#governing-standards

> It looks like ZAC/ZBC are new commands -- what happens if we just don't
> use them, exactly?
The standards define three models of zoned block devices: drive-managed,
host-aware and host-managed.

Drive-managed zoned devices behave just like regular SCSI/ATA devices and
don't require any additional support. There is no point for manufacturers
to market such devices as zoned. Host-managed and host-aware devices can
read data exactly the same way as common SCSI/ATA drives, but there are
I/O pattern limitations in the write path that the host must adhere to.

Host-aware drives will work without I/O errors under a purely random
write workload, but their performance might be significantly degraded
compared to running them under a zone-sequential workload. With
host-managed drives, any non-sequential write within a zone will lead
to an I/O error, most likely an "unaligned write" error.

It is important to mention that almost all zoned devices that are
currently on the market are host-managed.

The ZAC/ZBC standards do add some new commands to the common SCSI/ACS
command sets but, at least for the host-managed model, simply never
issuing these commands would not be sufficient to utilize these devices.
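
For example, the new REPORT ZONES command is what tells the host where
each zone starts and where its write pointer currently sits; on Linux
it is exposed to userspace as the BLKREPORTZONE ioctl. A minimal
sketch that dumps the first few zone descriptors (assumes a kernel
built with zoned block device support; most error handling trimmed):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/blkzoned.h>

    int main(int argc, char **argv)
    {
        struct blk_zone_report *rep;
        unsigned int i;
        int fd;

        if (argc < 2) {
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Room for four zone descriptors after the report header. */
        rep = calloc(1, sizeof(*rep) + 4 * sizeof(struct blk_zone));
        if (!rep) {
            return 1;
        }
        rep->sector = 0;   /* start reporting from the first zone */
        rep->nr_zones = 4; /* in: buffer capacity; out: zones returned */
        if (ioctl(fd, BLKREPORTZONE, rep) < 0) {
            perror("BLKREPORTZONE");
            return 1;
        }
        for (i = 0; i < rep->nr_zones; i++) {
            printf("zone %u: start %llu wp %llu cond 0x%x\n", i,
                   (unsigned long long)rep->zones[i].start,
                   (unsigned long long)rep->zones[i].wp,
                   (unsigned)rep->zones[i].cond);
        }
        free(rep);
        close(fd);
        return 0;
    }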

> 
> > To be more precise, attaching a zoned device via virtio-blk-pci,
> > virtio-scsi-pci/scsi-disk or virtio-scsi-pci/scsi-hd exhibits the
> > behavior described above. The virtio-scsi-pci/scsi-block method works
> > with a recent patch. The virtio-scsi-pci/scsi-generic method also
> > appears to handle zoned devices without problems.
> > 
> 
> What exactly fails, out of curiosity?
The current Linux kernel is able to recognize zoned block devices and
provide some means for the user to see that a particular device is zoned.
For example, lsscsi will show "zbc" instead of "disk" for zoned devices.
Another useful value is the "zoned" sysfs attribute that carries the
zoned model of the drive. Without this patch, the attachment methods
mentioned above present host-managed drives as regular drives to the
guest system. There is no way for the user to figure out that they are
dealing with a ZBD besides starting I/O and getting an "unaligned write"
error.

The folks who designed ZAC/ZBC were very careful to prevent this from
happening, and it doesn't happen on bare metal. Host-managed drives have
a distinctive SCSI device type, 0x14, and old kernels without zoned
device support are simply not able to classify these drives during
the device scan. Kernels with ZBD support are able to recognize
a host-managed drive by its SCSI type and read some additional
protocol-specific info from the drive that is necessary for the kernel
to support it (how? see http://zonedstorage.io/linux/sched/).
In QEMU, this SCSI device type mechanism currently only works for
attachment methods that directly pass SCSI commands to the host OS
during the initial device scan, i.e. scsi-block and scsi-generic.
All other methods should be disabled until a meaningful way of handling
ZBDs is developed for each of them (or disabled permanently for "legacy"
attachment methods).
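
For illustration, the classification step itself is tiny: the SCSI
peripheral device type lives in the low 5 bits of byte 0 of the
standard INQUIRY data, and 0x14 identifies a host-managed ZBC device.
A sketch of just that check (actually issuing the INQUIRY, e.g. via
SG_IO, is omitted):

    #define TYPE_ZBC 0x14 /* host-managed zoned block device */

    /* "inq" points at a standard INQUIRY response buffer. */
    static int is_host_managed(const unsigned char *inq)
    {
        return (inq[0] & 0x1f) == TYPE_ZBC;
    }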

> 
> Naively, it seems strange to me that you'd have something that presents
> itself as a block device but can't be used like one. Usually I expect to
> see new features / types of devices used inefficiently when we aren't
> aware of a special attribute/property they have, but not to create data
> corruption.
Data corruption can theoretically happen, for example, if a regular hard
drive is accidentally swapped for a zoned one in a complex environment
under I/O. Any environment where this can potentially be a problem must
have udev rules defined to prevent this situation. Without this type of
patch, these udev rules will not work.
> 
> The only reason I ask is because it seems odd that you need to add a
> special flag to e.g. legacy IDE devices that explicitly says they don't
> support zoned block devices -- instead of adding flags to virtio devices
> that say they explicitly do support that feature set.
The initial version of the patch set had some bits of code added to the
drivers that are not capable of supporting zoned devices, checking whether
the device is zoned and aborting if it is. Kevin and Paolo suggested the
current approach, and I think it's a lot cleaner than the initial attempt
since it minimizes the necessary changes across the whole set of block
drivers. The flag is a true/false setting that is set individually by each
driver. It is in line with two existing flags in
blkconf_apply_backend_options(), "readonly" and "resizable". There is no
"default" setting for any of these.
John Snow July 29, 2019, 9:23 p.m. UTC | #3
On 7/26/19 7:42 PM, Dmitry Fomichev wrote:
> John, please see inline...
> 
> Regards,
> Dmitry
> 
> On Thu, 2019-07-25 at 13:58 -0400, John Snow wrote:
>>
>> On 7/23/19 6:19 PM, Dmitry Fomichev wrote:
>>> Currently, attaching zoned block devices (i.e., storage devices
>>> compliant with the ZAC/ZBC standards) using several virtio methods
>>> doesn't work properly: zoned devices appear as regular block devices
>>> to the guest. This may cause unexpected I/O errors and, potentially,
>>> some data corruption.
>>>
>>
>> Hi, I'm quite uninitiated here: what's a zoned block device? What are
>> the ZAC/ZBC standards?
> Zoned block devices (ZBDs) are HDDs that use SMR (shingled magnetic
> recording). This type of recording, if applied to the entire drive, would
> only allow the drive to be written sequentially. To make such devices more
> practical, the entire LBA range of the drive is divided into zones. All
> writes within a particular zone must be sequential, but different zones
> can be written concurrently and in any order. This sounds like a lot of
> hassle, but in return SMR can achieve up to 20% better areal data density
> compared to the most common PMR recording.
> 
> The same zoned model is used in the up-and-coming NVMe ZNS standard, even
> though the reason for using it in ZNS is different from SMR HDDs: easier
> flash erase block management.
> 
> ZBC is an INCITS T10 (SCSI) standard and ZAC is the corresponding T13 (ATA)
> standard.
> 
> The lack of limelight for these standards is explained by the fact that
> these devices are mostly used by cloud infrastructure providers for "cold"
> data storage, a purely enterprise application. Currently, both WDC and
> Seagate produce SMR drives in significant quantities and Toshiba has
> announced support for ZBDs in their future products.
> 
>>>
>> I've found this:
>> https://www.snia.org/sites/default/files/SDC/2016/presentations/smr/DamienLeMoal_ZBC-ZAC_Linux.pdf
>>
> AFAIK, the most useful collection of public resources about zoned block
> devices is this website -
> http://zonedstorage.io
> The site is maintained by our group at WDC (shameless plug here :) ).
> BTW, here is the page containing the links to T10/T13 standards
> (the access might be restricted for non-members of T10/T13 committees) -
> http://zonedstorage.io/introduction/smr/#governing-standards
> 
>> It looks like ZAC/ZBC are new commands -- what happens if we just don't
>> use them, exactly?
> The standards define three models of zoned block devices: drive-managed,
> host-aware and host-managed.
> 
> Drive-managed zoned devices behave just like regular SCSI/ATA devices and
> don't require any additional support. There is no point for manufacturers
> to market such devices as zoned. Host-managed and host-aware devices can
> read data exactly the same way as common SCSI/ATA drives, but there are
> I/O pattern limitations in the write path that the host must adhere to.
> 
> Host-aware drives will work without I/O errors under a purely random
> write workload, but their performance might be significantly degraded
> compared to running them under a zone-sequential workload. With
> host-managed drives, any non-sequential write within a zone will lead
> to an I/O error, most likely an "unaligned write" error.
> 
> It is important to mention that almost all zoned devices that are
> currently on the market are host-managed.
> 

OK, understood.

> The ZAC/ZBC standards do add some new commands to the common SCSI/ACS
> command sets but, at least for the host-managed model, simply never
> issuing these commands would not be sufficient to utilize these devices.
> 
>>
>>> To be more precise, attaching a zoned device via virtio-blk-pci,
>>> virtio-scsi-pci/scsi-disk or virtio-scsi-pci/scsi-hd exhibits the
>>> behavior described above. The virtio-scsi-pci/scsi-block method works
>>> with a recent patch. The virtio-scsi-pci/scsi-generic method also
>>> appears to handle zoned devices without problems.
>>>
>>
>> What exactly fails, out of curiosity?
> The current Linux kernel is able to recognize zoned block devices and
> provide some means for the user to see that a particular device is zoned.
> For example, lsscsi will show "zbc" instead of "disk" for zoned devices.
> Another useful value is the "zoned" sysfs attribute that carries the
> zoned model of the drive. Without this patch, the attachment methods
> mentioned above present host-managed drives as regular drives to the
> guest system. There is no way for the user to figure out that they are
> dealing with a ZBD besides starting I/O and getting an "unaligned write"
> error.
> 

Mmhmm...

> The folks who designed ZAC/ZBC were very careful to prevent this from
> happening, and it doesn't happen on bare metal. Host-managed drives have
> a distinctive SCSI device type, 0x14, and old kernels without zoned
> device support are simply not able to classify these drives during
> the device scan. Kernels with ZBD support are able to recognize
> a host-managed drive by its SCSI type and read some additional
> protocol-specific info from the drive that is necessary for the kernel
> to support it (how? see http://zonedstorage.io/linux/sched/).
> In QEMU, this SCSI device type mechanism currently only works for
> attachment methods that directly pass SCSI commands to the host OS
> during the initial device scan, i.e. scsi-block and scsi-generic.
> All other methods should be disabled until a meaningful way of handling
> ZBDs is developed for each of them (or disabled permanently for "legacy"
> attachment methods).
> 
>>
>> Naively, it seems strange to me that you'd have something that presents
>> itself as a block device but can't be used like one. Usually I expect to
>> see new features / types of devices used inefficiently when we aren't
>> aware of a special attribute/property they have, but not to create data
>> corruption.
> Data corruption can theoretically happen, for example, if a regular hard
> drive is accidentally swapped for a zoned one in a complex environment
> under I/O. Any environment where this can potentially be a problem must
> have udev rules defined to prevent this situation. Without this type of
> patch, these udev rules will not work.
>>
>> The only reason I ask is because it seems odd that you need to add a
>> special flag to e.g. legacy IDE devices that explicitly says they don't
>> support zoned block devices -- instead of adding flags to virtio devices
>> that say they explicitly do support that feature set.
> The initial version of the patch set had some bits of code added to the
> drivers that are not capable of supporting zoned devices, checking whether
> the device is zoned and aborting if it is. Kevin and Paolo suggested the
> current approach, and I think it's a lot cleaner than the initial attempt
> since it minimizes the necessary changes across the whole set of block
> drivers. The flag is a true/false setting that is set individually by each
> driver. It is in line with two existing flags in
> blkconf_apply_backend_options(), "readonly" and "resizable". There is no
> "default" setting for any of these.

Thank you for the detailed explanation! This is good information to have
on the ML archive.

I'm still surprised that we need to prohibit IDE specifically from
interacting with drives of this type, as I would have hoped that the
kernel driver beneath our feet would have managed the access for us, but
I guess that's not true?

(If it isn't, I worry what happens if we have a format layer between us
and the bare metal: if we write qcow2 to the block device instead of raw,
even if we advertise to the emulated guest that we're using a zoned
device, we might remap things into or out of zones, and that coordination
would be lost, wouldn't it?)

Not that I really desire people to use IDE emulators with fancy new
disks; it just seemed like an unusual patch.

If Kevin and Paolo are on board with the design, it's not my place to
try to begin managing this; it just caught my eye because it touched
something as old as IDE.

Thanks,
--js