Message ID: 20220919022921.946344-1-dmitry.fomichev@wdc.com
Series: virtio-blk: support zoned block devices
On Sun, Sep 18, 2022 at 10:29:18PM -0400, Dmitry Fomichev wrote:
> In its current form, the virtio protocol for block devices (virtio-blk)
> is not aware of zoned block devices (ZBDs) but it allows the driver to
> successfully scan a host-managed drive provided by the virtio block
> device. As the result, the host-managed drive is recognized by the
> virtio driver as a regular, non-zoned drive that will operate
> erroneously under the most common write workloads. Host-aware ZBDs are
> currently usable, but their performance may not be optimal because the
> driver can only see them as non-zoned block devices.

What is the advantage in extending virtio-blk vs just using virtio-scsi
or nvme with shadow doorbells that just work?
On Tue, 20 Sept 2022 at 03:43, Christoph Hellwig <hch@infradead.org> wrote:
>
> On Sun, Sep 18, 2022 at 10:29:18PM -0400, Dmitry Fomichev wrote:
> > In its current form, the virtio protocol for block devices (virtio-blk)
> > is not aware of zoned block devices (ZBDs) but it allows the driver to
> > successfully scan a host-managed drive provided by the virtio block
> > device. As the result, the host-managed drive is recognized by the
> > virtio driver as a regular, non-zoned drive that will operate
> > erroneously under the most common write workloads. Host-aware ZBDs are
> > currently usable, but their performance may not be optimal because the
> > driver can only see them as non-zoned block devices.
>
> What is the advantage in extending virtio-blk vs just using virtio-scsi
> or nvme with shadow doorbells that just work?

virtio-blk is widely used and new request types are added as needed.

QEMU's NVMe emulation may support passing through zoned storage
devices in the future but it doesn't today. Support was implemented in
virtio-blk first because NVMe emulation isn't widely used in
production QEMU VMs.

Stefan
On Tue, 2022-09-20 at 06:41 -0400, Stefan Hajnoczi wrote:
> On Tue, 20 Sept 2022 at 03:43, Christoph Hellwig <hch@infradead.org> wrote:
> >
> > On Sun, Sep 18, 2022 at 10:29:18PM -0400, Dmitry Fomichev wrote:
> > > In its current form, the virtio protocol for block devices (virtio-blk)
> > > is not aware of zoned block devices (ZBDs) but it allows the driver to
> > > successfully scan a host-managed drive provided by the virtio block
> > > device. As the result, the host-managed drive is recognized by the
> > > virtio driver as a regular, non-zoned drive that will operate
> > > erroneously under the most common write workloads. Host-aware ZBDs are
> > > currently usable, but their performance may not be optimal because the
> > > driver can only see them as non-zoned block devices.
> >
> > What is the advantage in extending virtio-blk vs just using virtio-scsi
> > or nvme with shadow doorbells that just work?
>
> virtio-blk is widely used and new request types are added as needed.
>
> QEMU's NVMe emulation may support passing through zoned storage
> devices in the future but it doesn't today. Support was implemented in
> virtio-blk first because NVMe emulation isn't widely used in
> production QEMU VMs.
>
> Stefan

A large share of hyperscaler guest VM images only supports virtio for
storage and doesn't define CONFIG_SCSI, CONFIG_ATA, etc. at all in the
kernel config. This is especially common in hyperscale environments that
are dedicated to serverless computing. In such environments, there is
currently no way to present a zoned device to the guest user because the
virtio-blk driver is not ZBD-aware. An attempt to virtualize a
host-managed drive in this setup causes the drive to show up at the
guest as a regular block device - certainly not an ideal situation.

Dmitry
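[Editor's note: for readers who want to observe the misidentification described above, a quick check is to read the zone model the kernel reports in sysfs. The sketch below simply iterates over all block devices; without ZBD support in the virtio-blk driver, a virtio disk backed by a host-managed drive reports "none" in the guest rather than "host-managed".]

```shell
# Print the zone model the kernel reports for each block device.
# "none" means the device is treated as a regular, non-zoned drive;
# "host-aware" or "host-managed" means the kernel sees it as a ZBD.
for zoned in /sys/block/*/queue/zoned; do
    [ -f "$zoned" ] || continue
    dev=${zoned#/sys/block/}   # strip leading path
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$zoned")"
done
```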