From patchwork Mon Apr 25 10:35:38 2016
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 8925401
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 25 Apr 2016 18:35:38 +0800
Message-Id: <1461580540-9314-2-git-send-email-yu.c.zhang@linux.intel.com>
In-Reply-To: <1461580540-9314-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1461580540-9314-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: Kevin Tian, Keir Fraser, Jun Nakajima, George Dunlap, Andrew Cooper,
    Tim Deegan, Paul Durrant, zhiyuan.lv@intel.com, Jan Beulich,
    wei.liu2@citrix.com
Subject: [Xen-devel] [PATCH v3 1/3] x86/ioreq server (patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.

Previously, the p2m type p2m_mmio_write_dm was introduced for
write-protected memory pages whose write operations are supposed to be
forwarded to and emulated by an ioreq server. Yet the rangeset used to
track such pages limits the number of guest pages that can be
write-protected.

This patch renames the p2m type p2m_mmio_write_dm to p2m_ioreq_server,
meaning that pages of this p2m type can be claimed by one ioreq server,
instead of being tracked inside the ioreq server's rangeset. Follow-up
patches will add the related HVMOP handling code to map/unmap the
p2m_ioreq_server type to/from an ioreq server. (A hypothetical usage
sketch is appended after the patch.)

Changes in v3:
- According to Jan's & George's comments, keep HVMMEM_mmio_write_dm for
  old Xen interface versions, and replace it with HVMMEM_unused for Xen
  interfaces newer than 4.7.0. For p2m_ioreq_server, a new enum value,
  HVMMEM_ioreq_server, is introduced for the get/set mem type
  interfaces.
- Add George's Reviewed-by, and Acked-by from Tim & Andrew.

Changes in v2:
- According to George Dunlap's comments, only rename the p2m type, with
  no behavior changes.

Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Acked-by: Andrew Cooper
Reviewed-by: George Dunlap
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Tim Deegan
Reviewed-by: Jan Beulich
---
 xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
 xen/arch/x86/mm/p2m-ept.c       |  2 +-
 xen/arch/x86/mm/p2m-pt.c        |  2 +-
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/include/asm-x86/p2m.h       |  4 ++--
 xen/include/public/hvm/hvm_op.h |  8 +++++++-
 6 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f24126d..874cb0f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) ||
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
         __put_gfn(p2m, gfn);
         if ( ap2m_active )
@@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         get_gfn_query_unlocked(d, a.pfn, &t);
         if ( p2m_is_mmio(t) )
             a.mem_type = HVMMEM_mmio_dm;
-        else if ( t == p2m_mmio_write_dm )
-            a.mem_type = HVMMEM_mmio_write_dm;
+        else if ( t == p2m_ioreq_server )
+            a.mem_type = HVMMEM_ioreq_server;
         else if ( p2m_is_readonly(t) )
             a.mem_type = HVMMEM_ram_ro;
         else if ( p2m_is_ram(t) )
@@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             [HVMMEM_ram_rw]  = p2m_ram_rw,
             [HVMMEM_ram_ro]  = p2m_ram_ro,
             [HVMMEM_mmio_dm] = p2m_mmio_dm,
-            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
+            [HVMMEM_unused] = p2m_invalid,
+            [HVMMEM_ioreq_server] = p2m_ioreq_server
         };
 
         if ( copy_from_guest(&a, arg, 1) )
@@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
              ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
             goto setmemtype_fail;
 
-        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
+        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
+             unlikely(a.hvmmem_type == HVMMEM_unused) )
             goto setmemtype_fail;
 
         while ( a.nr > start_iter )
@@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             if ( !p2m_is_ram(t) &&
                  (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
-                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
+                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
             {
                 put_gfn(d, pfn);
                 goto setmemtype_fail;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 3cb6868..380ec25 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_mmio_write_dm:
+        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..eabd2e3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_mmio_write_dm:
+    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
     case p2m_ram_ro:
     case p2m_ram_logdirty:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e5c8499..c81302a 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
 
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
+         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5392eb0..ee2ea9c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -71,7 +71,7 @@ typedef enum {
     p2m_ram_shared = 12,          /* Shared or sharable memory */
     p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
     p2m_map_foreign  = 14,        /* ram pages from foreign domain */
-    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
+    p2m_ioreq_server = 15,
 } p2m_type_t;
 
 /* Modifiers to the query */
@@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
                        | p2m_to_mask(p2m_ram_ro)         \
                        | p2m_to_mask(p2m_grant_map_ro)   \
                        | p2m_to_mask(p2m_ram_shared)     \
-                       | p2m_to_mask(p2m_mmio_write_dm))
+                       | p2m_to_mask(p2m_ioreq_server))
 
 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro) \
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..b3e45cf 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -83,7 +83,13 @@ typedef enum {
     HVMMEM_ram_rw,             /* Normal read/write guest RAM */
     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
-    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
+#if __XEN_INTERFACE_VERSION__ < 0x00040700
+    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
+#else
+    HVMMEM_unused,             /* Placeholder; setting memory to this type
+                                  will fail for code after 4.7.0 */
+#endif
+    HVMMEM_ioreq_server
 } hvmmem_type_t;
 
 /* Following tools-only interfaces may change in future. */
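
For illustration only, not part of the patch: a minimal sketch of how a
device model or toolstack built against a >= 4.7.0 interface might
exercise the renamed type through libxenctrl's existing
xc_hvm_set_mem_type()/xc_hvm_get_mem_type() wrappers. The function name
mark_page_for_ioreq_server and the domid/gfn values are made-up
placeholders for the example.

/* Hypothetical usage sketch -- not part of this patch. */
#include <stdio.h>
#include <inttypes.h>
#include <xenctrl.h>

/* Claim one guest page as p2m_ioreq_server via HVMOP_set_mem_type. */
static int mark_page_for_ioreq_server(domid_t domid, uint64_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    hvmmem_type_t t;
    int rc;

    if ( !xch )
        return -1;

    /* Writes to this page will now fault into the hypervisor and be
     * forwarded for emulation; the follow-up patches add the HVMOP to
     * attach the type to one particular ioreq server, so the page need
     * not be tracked in that server's rangeset. */
    rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, gfn, 1);

    /* Read the type back. HVMMEM_unused, by contrast, would be
     * rejected by the new HVMMEM_unused check in do_hvm_op(). */
    if ( rc == 0 && xc_hvm_get_mem_type(xch, domid, gfn, &t) == 0 )
        printf("gfn %#" PRIx64 " now has mem type %u\n", gfn, (unsigned)t);

    xc_interface_close(xch);
    return rc;
}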