
[v4,8/9] docs: Add section for NVMe VFIO driver

Message ID 20180110091846.10699-9-famz@redhat.com (mailing list archive)
State New, archived

Commit Message

Fam Zheng Jan. 10, 2018, 9:18 a.m. UTC
Signed-off-by: Fam Zheng <famz@redhat.com>
---
 docs/qemu-block-drivers.texi | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

Comments

Stefan Hajnoczi Jan. 10, 2018, 7:05 p.m. UTC | #1
On Wed, Jan 10, 2018 at 05:18:45PM +0800, Fam Zheng wrote:
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  docs/qemu-block-drivers.texi | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
> 
> diff --git a/docs/qemu-block-drivers.texi b/docs/qemu-block-drivers.texi
> index 503c1847aa..66b27cc4f7 100644
> --- a/docs/qemu-block-drivers.texi
> +++ b/docs/qemu-block-drivers.texi
> @@ -785,6 +785,38 @@ warning: ssh server @code{ssh.example.com:22} does not support fsync
>  With sufficiently new versions of libssh2 and OpenSSH, @code{fsync} is
>  supported.
>  
> +@node disk_images_nvme
> +@subsection NVMe disk images
> +
> +You can access disk images on an NVMe controller with the built-in VFIO-based
> +NVMe driver. Before starting QEMU, bind the host NVMe controller to vfio-pci.

The text dives straight into vfio-pci without any explanation of this
feature.  Please include something like:

NVM Express (NVMe) storage controllers can be accessed directly by a
userspace driver in QEMU.  This bypasses the host kernel file system and
block layers while retaining QEMU block layer functionality, such as
block jobs, I/O throttling, etc.  Disk I/O performance is typically
higher than with -drive file=/dev/sda.

> +For example:
> +
> +@example
> +# modprobe vfio-pci
> +# lspci -n -s 0000:06:0d.0
> +06:0d.0 0401: 1102:0002 (rev 08)
> +# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
> +# echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id
> +
> +# qemu-system-x86_64 -drive file=nvme://@var{host}:@var{bus}:@var{slot}.@var{func}/@var{namespace}
> +@end example
> +
> +Alternative syntax using properties:
> +
> +@example
> +qemu-system-x86_64 -drive file.driver=nvme,file.device=@var{host}:@var{bus}:@var{slot}.@var{func},file.namespace=@var{namespace}
> +@end example
> +
> +@var{host}:@var{bus}:@var{slot}.@var{func} is the NVMe controller's PCI device
> +address on the host.
> +
> +@var{namespace} is the NVMe namespace number, starting from 1.
> +
> +The controller will be used exclusively by the QEMU process once started. To
> +share storage between multiple VMs and other applications on the host, please
> +use file-based protocols.

I suggest moving this up to the beginning in the hopes that people will
read it before asking questions on IRC or qemu-devel :).
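The bind sequence in the patch can be sketched as a small dry-run helper that only prints the sysfs commands rather than executing them (hypothetical, for illustration — the `bind_cmds` and `drive_uri` names are not part of the patch, real binding requires root and an IOMMU-capable host, and the vendor/device pair for `new_id` comes from the `lspci -n` output shown above):

```shell
#!/bin/sh
# Sketch of the vfio-pci bind sequence from the patch, emitted as text
# rather than executed, so it can be reviewed before running as root.
bind_cmds() {
    addr=$1 vendor=$2 device=$3
    printf 'echo %s > /sys/bus/pci/devices/%s/driver/unbind\n' "$addr" "$addr"
    printf 'echo %s %s > /sys/bus/pci/drivers/vfio-pci/new_id\n' "$vendor" "$device"
}

# Build the nvme:// -drive URI from a PCI address and a namespace number
# (namespaces start from 1, per the documentation text).
drive_uri() {
    printf 'nvme://%s/%s\n' "$1" "$2"
}

bind_cmds 0000:06:0d.0 1102 0002
drive_uri 0000:06:0d.0 1
```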

Patch

diff --git a/docs/qemu-block-drivers.texi b/docs/qemu-block-drivers.texi
index 503c1847aa..66b27cc4f7 100644
--- a/docs/qemu-block-drivers.texi
+++ b/docs/qemu-block-drivers.texi
@@ -785,6 +785,38 @@ warning: ssh server @code{ssh.example.com:22} does not support fsync
 With sufficiently new versions of libssh2 and OpenSSH, @code{fsync} is
 supported.
 
+@node disk_images_nvme
+@subsection NVMe disk images
+
+You can access disk images on an NVMe controller with the built-in VFIO-based
+NVMe driver. Before starting QEMU, bind the host NVMe controller to vfio-pci.
+For example:
+
+@example
+# modprobe vfio-pci
+# lspci -n -s 0000:06:0d.0
+06:0d.0 0401: 1102:0002 (rev 08)
+# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
+# echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id
+
+# qemu-system-x86_64 -drive file=nvme://@var{host}:@var{bus}:@var{slot}.@var{func}/@var{namespace}
+@end example
+
+Alternative syntax using properties:
+
+@example
+qemu-system-x86_64 -drive file.driver=nvme,file.device=@var{host}:@var{bus}:@var{slot}.@var{func},file.namespace=@var{namespace}
+@end example
+
+@var{host}:@var{bus}:@var{slot}.@var{func} is the NVMe controller's PCI device
+address on the host.
+
+@var{namespace} is the NVMe namespace number, starting from 1.
+
+The controller will be used exclusively by the QEMU process once started. To
+share storage between multiple VMs and other applications on the host, please
+use file-based protocols.
+
 @node disk_image_locking
 @subsection Disk image file locking
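The two invocation styles in the patch also map onto QEMU's @option{-blockdev} syntax; a rough sketch, assuming the nvme block driver exposes @code{device} and @code{namespace} options matching the @option{-drive} properties above (the node name @code{nvme0} and the virtio-blk frontend are illustrative choices, not part of the patch):

```shell
# Hypothetical -blockdev equivalent of the -drive examples above:
# the backend names the controller's PCI address and namespace,
# and a guest device is attached to it explicitly.
qemu-system-x86_64 \
    -blockdev driver=nvme,device=0000:06:0d.0,namespace=1,node-name=nvme0 \
    -device virtio-blk-pci,drive=nvme0
```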