From patchwork Wed Sep 28 01:38:34 2016
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9353043
From: Stefano Stabellini <sstabellini@kernel.org>
To: peter.maydell@linaro.org
Date: Tue, 27 Sep 2016 18:38:34 -0700
Message-Id: <1475026714-10440-1-git-send-email-sstabellini@kernel.org>
X-Mailer: git-send-email 1.9.1
Cc: anthony.perard@citrix.com, xen-devel@lists.xenproject.org,
    Paulina Szubarczyk, qemu-devel@nongnu.org
Subject: [Xen-devel] [PULL 1/1] qdisk - hw/block/xen_disk: grant copy implementation

From: Paulina Szubarczyk

Copy the data a request operates on between local buffers and the guest's
grant references, instead of mapping the grants.

Before a grant copy operation the local buffers must be allocated; this is
done by calling ioreq_init_copy_buffers. For the 'read' operation the qemu
device first reads into the local buffers; on completion the grant copy is
performed and the buffers are freed. For the 'write' operation the grant
copy is performed before the qemu device issues the write.

A new field, 'feature_grant_copy', is added to record whether the grant
copy operation is supported.

Signed-off-by: Paulina Szubarczyk
Reviewed-by: Stefano Stabellini
Acked-by: Anthony PERARD
Acked-by: Roger Pau Monné
---
 configure                   |  55 ++++++++++++++++
 hw/block/xen_disk.c         | 153 ++++++++++++++++++++++++++++++++++++++++++--
 include/hw/xen/xen_common.h |  14 ++++
 3 files changed, 217 insertions(+), 5 deletions(-)
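To illustrate the flow described above, the sketch below (not part of the
patch) shows how one page of locally read data could be pushed into a
guest-provided grant reference with xengnttab_grant_copy(). It assumes a
Xen 4.8+ toolstack providing <xengnttab.h>; the push_page_to_guest()
helper, its arguments and the single-segment request are illustrative
only. The real per-request loop is ioreq_grant_copy() in the diff below.

    /*
     * Illustrative sketch (not part of the patch): copy one page of
     * locally read data into a guest grant reference.  Assumes Xen >= 4.8
     * so that <xengnttab.h> provides the grant copy API; the helper name
     * and the single-segment request are made up for this example.
     */
    #include <stdint.h>
    #include <string.h>
    #include <xengnttab.h>

    static int push_page_to_guest(xengnttab_handle *xgt, uint16_t domid,
                                  uint32_t gref, void *local, uint16_t len)
    {
        xengnttab_grant_copy_segment_t seg;

        memset(&seg, 0, sizeof(seg));
        seg.flags = GNTCOPY_dest_gref;      /* destination is a grant ref */
        seg.source.virt = local;            /* local qemu-side buffer */
        seg.dest.foreign.ref = gref;        /* grant reference from the guest */
        seg.dest.foreign.domid = domid;
        seg.dest.foreign.offset = 0;
        seg.len = len;

        if (xengnttab_grant_copy(xgt, 1, &seg) < 0 ||
            seg.status != GNTST_okay) {
            return -1;                      /* call or per-segment failure */
        }
        return 0;
    }

In the patch itself, ioreq_grant_copy() fills one such segment per BLKIF
segment, selecting GNTCOPY_dest_gref for reads and GNTCOPY_source_gref for
writes and flushes, and counts any non-GNTST_okay status as an aio error.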
diff --git a/configure b/configure
index 8fa62ad..1fb343d 100755
--- a/configure
+++ b/configure
@@ -1955,6 +1955,61 @@ EOF
 /*
  * If we have stable libs the we don't want the libxc compat
  * layers, regardless of what CFLAGS we may have been given.
+ *
+ * Also, check if xengnttab_grant_copy_segment_t is defined and
+ * grant copy operation is implemented.
+ */
+#undef XC_WANT_COMPAT_EVTCHN_API
+#undef XC_WANT_COMPAT_GNTTAB_API
+#undef XC_WANT_COMPAT_MAP_FOREIGN_API
+#include <xenctrl.h>
+#include <xenstore.h>
+#include <xenevtchn.h>
+#include <xengnttab.h>
+#include <xenforeignmemory.h>
+#include <xen/hvm/hvm_info_table.h>
+#include <xen/io/xenbus.h>
+#if !defined(HVM_MAX_VCPUS)
+# error HVM_MAX_VCPUS not defined
+#endif
+int main(void) {
+  xc_interface *xc = NULL;
+  xenforeignmemory_handle *xfmem;
+  xenevtchn_handle *xe;
+  xengnttab_handle *xg;
+  xen_domain_handle_t handle;
+  xengnttab_grant_copy_segment_t* seg = NULL;
+
+  xs_daemon_open();
+
+  xc = xc_interface_open(0, 0, 0);
+  xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
+  xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
+  xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
+  xc_hvm_create_ioreq_server(xc, 0, HVM_IOREQSRV_BUFIOREQ_ATOMIC, NULL);
+  xc_domain_create(xc, 0, handle, 0, NULL, NULL);
+
+  xfmem = xenforeignmemory_open(0, 0);
+  xenforeignmemory_map(xfmem, 0, 0, 0, 0, 0);
+
+  xe = xenevtchn_open(0, 0);
+  xenevtchn_fd(xe);
+
+  xg = xengnttab_open(0, 0);
+  xengnttab_grant_copy(xg, 0, seg);
+
+  return 0;
+}
+EOF
+  compile_prog "" "$xen_libs $xen_stable_libs"
+then
+  xen_ctrl_version=480
+  xen=yes
+elif
+  cat > $TMPC <

[...]

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c

[...]

+#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 480
+
+static void ioreq_free_copy_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = NULL;
+    }
+
+    qemu_vfree(ioreq->pages);
+}
+
+static int ioreq_init_copy_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
+        ioreq->v.iov[i].iov_base = ioreq->page[i];
+    }
+
+    return 0;
+}
+
+static int ioreq_grant_copy(struct ioreq *ioreq)
+{
+    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
+    xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    int i, count, rc;
+    int64_t file_blk = ioreq->blkdev->file_blk;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    count = ioreq->v.niov;
+
+    for (i = 0; i < count; i++) {
+        if (ioreq->req.operation == BLKIF_OP_READ) {
+            segs[i].flags = GNTCOPY_dest_gref;
+            segs[i].dest.foreign.ref = ioreq->refs[i];
+            segs[i].dest.foreign.domid = ioreq->domids[i];
+            segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].source.virt = ioreq->v.iov[i].iov_base;
+        } else {
+            segs[i].flags = GNTCOPY_source_gref;
+            segs[i].source.foreign.ref = ioreq->refs[i];
+            segs[i].source.foreign.domid = ioreq->domids[i];
+            segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
+        }
+        segs[i].len = (ioreq->req.seg[i].last_sect
+                       - ioreq->req.seg[i].first_sect + 1) * file_blk;
+    }
+
+    rc = xengnttab_grant_copy(gnt, count, segs);
+
+    if (rc) {
+        xen_be_printf(&ioreq->blkdev->xendev, 0,
+                      "failed to copy data %d\n", rc);
+        ioreq->aio_errors++;
+        return -1;
+    }
+
+    for (i = 0; i < count; i++) {
+        if (segs[i].status != GNTST_okay) {
+            xen_be_printf(&ioreq->blkdev->xendev, 3,
+                          "failed to copy data %d for gref %d, domid %d\n",
+                          segs[i].status, ioreq->refs[i], ioreq->domids[i]);
+            ioreq->aio_errors++;
+            rc = -1;
+        }
+    }
+
+    return rc;
+}
+#else
+static void ioreq_free_copy_buffers(struct ioreq *ioreq)
+{
+    abort();
+}
+
+static int ioreq_init_copy_buffers(struct ioreq *ioreq)
+{
+    abort();
+}
+
+static int ioreq_grant_copy(struct ioreq *ioreq)
+{
+    abort();
+}
+#endif
+
 static int ioreq_runio_qemu_aio(struct ioreq *ioreq);
 
 static void qemu_aio_complete(void *opaque, int ret)
@@ -511,8 +614,31 @@ static void qemu_aio_complete(void *opaque, int ret)
         return;
     }
 
+    if (ioreq->blkdev->feature_grant_copy) {
+        switch (ioreq->req.operation) {
+        case BLKIF_OP_READ:
+            /* in case of failure ioreq->aio_errors is increased */
+            if (ret == 0) {
+                ioreq_grant_copy(ioreq);
+            }
+            ioreq_free_copy_buffers(ioreq);
+            break;
+        case BLKIF_OP_WRITE:
+        case BLKIF_OP_FLUSH_DISKCACHE:
+            if (!ioreq->req.nr_segments) {
+                break;
+            }
+            ioreq_free_copy_buffers(ioreq);
+            break;
+        default:
+            break;
+        }
+    }
+
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
-    ioreq_unmap(ioreq);
+    if (!ioreq->blkdev->feature_grant_copy) {
+        ioreq_unmap(ioreq);
+    }
     ioreq_finish(ioreq);
     switch (ioreq->req.operation) {
     case BLKIF_OP_WRITE:
@@ -538,8 +664,18 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    if (ioreq->req.nr_segments && ioreq_map(ioreq) == -1) {
-        goto err_no_map;
+    if (ioreq->blkdev->feature_grant_copy) {
+        ioreq_init_copy_buffers(ioreq);
+        if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
+            ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
+            ioreq_grant_copy(ioreq)) {
+                ioreq_free_copy_buffers(ioreq);
+                goto err;
+        }
+    } else {
+        if (ioreq->req.nr_segments && ioreq_map(ioreq)) {
+            goto err;
+        }
     }
 
     ioreq->aio_inflight++;
@@ -582,6 +718,9 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
+        if (!ioreq->blkdev->feature_grant_copy) {
+            ioreq_unmap(ioreq);
+        }
         goto err;
     }
 
@@ -590,8 +729,6 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     return 0;
 
 err:
-    ioreq_unmap(ioreq);
-err_no_map:
     ioreq_finish(ioreq);
     ioreq->status = BLKIF_RSP_ERROR;
     return -1;
@@ -1034,6 +1171,12 @@ static int blk_connect(struct XenDevice *xendev)
 
     xen_be_bind_evtchn(&blkdev->xendev);
 
+    blkdev->feature_grant_copy =
+                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
+
+    xen_be_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
+                  blkdev->feature_grant_copy ? "enabled" : "disabled");
+
     xen_be_printf(&blkdev->xendev, 1, "ok: proto %s, ring-ref %d, "
                   "remote port %d, local port %d\n",
                   blkdev->xendev.protocol, blkdev->ring_ref,
diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index bd39287..8e1580d 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -424,4 +424,18 @@ static inline int xen_domain_create(xc_interface *xc, uint32_t ssidref,
 #endif
 #endif
 
+/* Xen before 4.8 */
+
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 480
+
+
+typedef void *xengnttab_grant_copy_segment_t;
+
+static inline int xengnttab_grant_copy(xengnttab_handle *xgt, uint32_t count,
+                                       xengnttab_grant_copy_segment_t *segs)
+{
+    return -ENOSYS;
+}
+#endif
+
 #endif /* QEMU_HW_XEN_COMMON_H */
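For completeness, here is a minimal sketch (not from the patch) of the
runtime probe that blk_connect() now performs: a zero-segment
xengnttab_grant_copy() call succeeds only when the grant copy operation is
implemented, otherwise the xen_common.h fallback above returns -ENOSYS.
The standalone main() and its error handling are assumptions for
illustration only.

    /*
     * Illustrative sketch (not from the patch): probe for grant copy
     * support the same way blk_connect() does, with a zero-segment copy.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <xengnttab.h>

    static bool grant_copy_supported(xengnttab_handle *xgt)
    {
        /* Succeeds (returns 0) only if the operation is implemented. */
        return xengnttab_grant_copy(xgt, 0, NULL) == 0;
    }

    int main(void)
    {
        xengnttab_handle *xgt = xengnttab_open(NULL, 0);

        if (!xgt) {
            perror("xengnttab_open");
            return 1;
        }
        printf("grant copy operation %s\n",
               grant_copy_supported(xgt) ? "enabled" : "disabled");
        xengnttab_close(xgt);
        return 0;
    }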