From patchwork Fri Jan 29 10:45:13 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 8162211
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, keir@xen.org, ian.campbell@citrix.com,
 stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
 ian.jackson@eu.citrix.com, Paul.Durrant@citrix.com, zhiyuan.lv@intel.com,
 jbeulich@suse.com, wei.liu2@citrix.com
Date: Fri, 29 Jan 2016 18:45:13 +0800
Message-Id: <1454064314-7799-3-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1454064314-7799-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1454064314-7799-1-git-send-email-yu.c.zhang@linux.intel.com>
Subject: [Xen-devel] [PATCH v12 2/3] Differentiate IO/mem resources tracked
 by ioreq server

Currently in ioreq server, guest write-protected ram pages are tracked in
the same rangeset with device mmio resources. Yet unlike device mmio, which
can be in big chunks, the guest write-protected pages may be discrete
ranges with 4K bytes each. This patch uses a separate rangeset for the
guest ram pages.

To differentiate the ioreq type between the write-protected memory ranges
and the mmio ranges when selecting an ioreq server, the p2m type is
retrieved by calling get_page_from_gfn(). And we do not need to worry
about the p2m type change during the ioreq selection process.

Note: Previously, a new hypercall or subop was suggested to map
write-protected pages into the ioreq server. However, it turned out the
handler of this new hypercall would be almost the same as the existing
pair - HVMOP_[un]map_io_range_to_ioreq_server - and that hypercall already
takes a type parameter. So no new hypercall is defined; only a new type is
introduced.

Acked-by: Wei Liu
Acked-by: Ian Campbell
Reviewed-by: Kevin Tian
Reviewed-by: Paul Durrant
Signed-off-by: Shuai Ruan
Signed-off-by: Yu Zhang
---
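For context (not part of the patch): below is a rough sketch of how a device
model might drive the new libxc wrappers. The domain id, ioreq server id and
gfn values are made up for illustration, and it assumes the usual pairing
with xc_hvm_set_mem_type()/HVMMEM_mmio_write_dm so that guest writes to the
range actually fault and get forwarded to the ioreq server.

/* Illustrative sketch only -- names and values are hypothetical.
 * Assumes an ioreq server 'srv_id' has already been created for 'domid',
 * and that xc_hvm_set_mem_type()/HVMMEM_mmio_write_dm are available.
 */
#include <stdio.h>
#include <xenctrl.h>

static int track_wp_ram(xc_interface *xch, domid_t domid, ioservid_t srv_id)
{
    xen_pfn_t start = 0x12340;        /* made-up gfn range ...          */
    xen_pfn_t end   = start + 15;     /* ... of 16 pages, end inclusive */
    int rc;

    /* Write-protect the pages so guest writes trap to a device model. */
    rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_mmio_write_dm,
                             start, end - start + 1);
    if ( rc < 0 )
        return rc;

    /* Route those write faults to our ioreq server via the new wp-mem type. */
    rc = xc_hvm_map_wp_mem_range_to_ioreq_server(xch, domid, srv_id,
                                                 start, end);
    if ( rc < 0 )
        fprintf(stderr, "mapping wp-mem range failed: %d\n", rc);

    return rc;
}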
 tools/libxc/include/xenctrl.h    | 31 ++++++++++++++++++++++
 tools/libxc/xc_domain.c          | 55 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c           | 26 ++++++++++++++++---
 xen/include/asm-x86/hvm/domain.h |  2 +-
 xen/include/public/hvm/hvm_op.h  |  1 +
 5 files changed, 110 insertions(+), 5 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 1d656ac..1a5f4ec 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1714,6 +1714,37 @@ int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
                                             int is_mmio,
                                             uint64_t start,
                                             uint64_t end);
+/**
+ * This function registers a range of write-protected memory for emulation.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id to be serviced
+ * @parm id the IOREQ Server id.
+ * @parm start start of range
+ * @parm end end of range (inclusive).
+ * @return 0 on success, -1 on failure.
+ */
+int xc_hvm_map_wp_mem_range_to_ioreq_server(xc_interface *xch,
+                                            domid_t domid,
+                                            ioservid_t id,
+                                            xen_pfn_t start,
+                                            xen_pfn_t end);
+
+/**
+ * This function deregisters a range of write-protected memory for emulation.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id to be serviced
+ * @parm id the IOREQ Server id.
+ * @parm start start of range
+ * @parm end end of range (inclusive).
+ * @return 0 on success, -1 on failure.
+ */
+int xc_hvm_unmap_wp_mem_range_from_ioreq_server(xc_interface *xch,
+                                                domid_t domid,
+                                                ioservid_t id,
+                                                xen_pfn_t start,
+                                                xen_pfn_t end);
 
 /**
  * This function registers a PCI device for config space emulation.
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 921113d..e21b602 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1523,6 +1523,61 @@ int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
     return rc;
 }
 
+int xc_hvm_map_wp_mem_range_to_ioreq_server(xc_interface *xch,
+                                            domid_t domid,
+                                            ioservid_t id,
+                                            xen_pfn_t start,
+                                            xen_pfn_t end)
+{
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    arg->domid = domid;
+    arg->id = id;
+    arg->type = HVMOP_IO_RANGE_WP_MEM;
+    arg->start = start;
+    arg->end = end;
+
+    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
+                  HVMOP_map_io_range_to_ioreq_server,
+                  HYPERCALL_BUFFER_AS_ARG(arg));
+
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_unmap_wp_mem_range_from_ioreq_server(xc_interface *xch,
+                                                domid_t domid,
+                                                ioservid_t id,
+                                                xen_pfn_t start,
+                                                xen_pfn_t end)
+{
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    arg->domid = domid;
+    arg->id = id;
+    arg->type = HVMOP_IO_RANGE_WP_MEM;
+    arg->start = start;
+    arg->end = end;
+
+    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
+                  HVMOP_unmap_io_range_from_ioreq_server,
+                  HYPERCALL_BUFFER_AS_ARG(arg));
+
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+
+}
+
 int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
                                       ioservid_t id, uint16_t segment,
                                       uint8_t bus, uint8_t device,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 674feea..e0d998f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -932,6 +932,9 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
         rangeset_destroy(s->range[i]);
 }
 
+const char *io_range_name[NR_IO_RANGE_TYPES] =
+                                {"port", "mmio", "pci", "wp-mem"};
+
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
                                             bool_t is_default)
 {
@@ -946,10 +949,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         char *name;
 
         rc = asprintf(&name, "ioreq_server %d %s", s->id,
-                      (i == HVMOP_IO_RANGE_PORT) ? "port" :
-                      (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == HVMOP_IO_RANGE_PCI) ? "pci" :
-                      "");
+                      (i < NR_IO_RANGE_TYPES) ? io_range_name[i] : "");
         if ( rc )
             goto fail;
 
@@ -1267,6 +1267,7 @@ static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         case HVMOP_IO_RANGE_PORT:
         case HVMOP_IO_RANGE_MEMORY:
         case HVMOP_IO_RANGE_PCI:
+        case HVMOP_IO_RANGE_WP_MEM:
             r = s->range[type];
             break;
 
@@ -1318,6 +1319,7 @@ static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         case HVMOP_IO_RANGE_PORT:
         case HVMOP_IO_RANGE_MEMORY:
         case HVMOP_IO_RANGE_PCI:
+        case HVMOP_IO_RANGE_WP_MEM:
            r = s->range[type];
             break;
 
@@ -2558,6 +2560,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint32_t cf8;
     uint8_t type;
     uint64_t addr;
+    p2m_type_t p2mt;
+    struct page_info *ram_page;
 
     if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
         return NULL;
@@ -2601,6 +2605,15 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         type = (p->type == IOREQ_TYPE_PIO) ?
                 HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
         addr = p->addr;
+        if ( type == HVMOP_IO_RANGE_MEMORY )
+        {
+            ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT, &p2mt, 0);
+            if ( p2mt == p2m_mmio_write_dm )
+                type = HVMOP_IO_RANGE_WP_MEM;
+
+            if ( ram_page )
+                put_page(ram_page);
+        }
     }
 
     list_for_each_entry ( s,
@@ -2642,6 +2655,11 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                 }
 
                 break;
+            case HVMOP_IO_RANGE_WP_MEM:
+                if ( rangeset_contains_singleton(r, PFN_DOWN(addr)) )
+                    return s;
+
+                break;
             }
         }
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index a8cc2ad..1e13973 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -48,7 +48,7 @@ struct hvm_ioreq_vcpu {
     bool_t pending;
 };
 
-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
+#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
 #define MAX_NR_IO_RANGES  256
 
 struct hvm_ioreq_server {
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..c0b1e30 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -333,6 +333,7 @@ struct xen_hvm_io_range {
 # define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
 # define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
 # define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
+# define HVMOP_IO_RANGE_WP_MEM 3 /* Write-protected ram range */
     uint64_aligned_t start, end; /* IN - inclusive start and end of range */
 };
 typedef struct xen_hvm_io_range xen_hvm_io_range_t;
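
(Not part of the patch: the teardown counterpart of the earlier sketch could
look like the following, again with hypothetical names; resetting the pages
to HVMMEM_ram_rw assumes they should revert to ordinary guest ram.)

/* Illustrative sketch only -- undo the tracking set up in track_wp_ram(). */
static int untrack_wp_ram(xc_interface *xch, domid_t domid, ioservid_t srv_id,
                          xen_pfn_t start, xen_pfn_t end)
{
    int rc;

    /* Stop routing write faults for this range to the ioreq server. */
    rc = xc_hvm_unmap_wp_mem_range_from_ioreq_server(xch, domid, srv_id,
                                                     start, end);
    if ( rc < 0 )
        return rc;

    /* Restore normal read/write ram so guest writes no longer trap. */
    return xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw,
                               start, end - start + 1);
}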