From patchwork Thu Mar 31 10:53:37 2016
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 8711011
From: Yu Zhang
To: xen-devel@lists.xen.org
Date: Thu, 31 Mar 2016 18:53:37 +0800
Message-Id: <1459421618-5991-3-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459421618-5991-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1459421618-5991-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap, Andrew Cooper,
 Tim Deegan, Paul Durrant, zhiyuan.lv@intel.com, Jun Nakajima
Subject: [Xen-devel] [PATCH v2 2/3] x86/ioreq server: Rename p2m_mmio_write_dm
 to p2m_ioreq_server
List-Id: Xen developer discussion
Previously, the p2m type p2m_mmio_write_dm was introduced for
write-protected memory pages whose write operations are supposed to be
forwarded to and emulated by an ioreq server. Yet the rangeset used to
track these pages limits the number of guest pages that can be
write-protected.

This patch renames p2m_mmio_write_dm to p2m_ioreq_server, to reflect
that this p2m type can be claimed by one ioreq server as a whole,
instead of being tracked inside the rangeset of an ioreq server.
Follow-up patches will add the related hvmop handling code which
maps/unmaps the p2m_ioreq_server type to/from an ioreq server.

Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Tim Deegan
Reviewed-by: George Dunlap
Acked-by: Andrew Cooper
---
 xen/arch/x86/hvm/hvm.c          | 10 +++++-----
 xen/arch/x86/mm/p2m-ept.c       |  2 +-
 xen/arch/x86/mm/p2m-pt.c        |  2 +-
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/include/asm-x86/p2m.h       |  4 ++--
 xen/include/public/hvm/hvm_op.h |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f700923..bec6a8a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3166,7 +3166,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) ||
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
         __put_gfn(p2m, gfn);
         if ( ap2m_active )
@@ -6578,8 +6578,8 @@ static int hvmop_get_mem_type(
     get_gfn_query_unlocked(d, a.pfn, &t);
     if ( p2m_is_mmio(t) )
         a.mem_type = HVMMEM_mmio_dm;
-    else if ( t == p2m_mmio_write_dm )
-        a.mem_type = HVMMEM_mmio_write_dm;
+    else if ( t == p2m_ioreq_server )
+        a.mem_type = HVMMEM_ioreq_server;
     else if ( p2m_is_readonly(t) )
         a.mem_type = HVMMEM_ram_ro;
     else if ( p2m_is_ram(t) )
@@ -6614,7 +6614,7 @@ static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
 {
     if ( p2m_is_ram(old) ||
         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-        (old == p2m_mmio_write_dm && new == p2m_ram_rw) )
+        (old == p2m_ioreq_server && new == p2m_ram_rw) )
         return 1;

     return 0;
@@ -6634,7 +6634,7 @@ static int hvmop_set_mem_type(
         [HVMMEM_ram_rw]  = p2m_ram_rw,
         [HVMMEM_ram_ro]  = p2m_ram_ro,
         [HVMMEM_mmio_dm] = p2m_mmio_dm,
-        [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
     };

     if ( copy_from_guest(&a, arg, 1) )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 3cb6868..380ec25 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_mmio_write_dm:
+        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..eabd2e3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_mmio_write_dm:
+    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
     case p2m_ram_ro:
     case p2m_ram_logdirty:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e5c8499..c81302a 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
+         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5392eb0..ee2ea9c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -71,7 +71,7 @@ typedef enum {
     p2m_ram_shared = 12,          /* Shared or sharable memory */
     p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
     p2m_map_foreign  = 14,        /* ram pages from foreign domain */
-    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
+    p2m_ioreq_server = 15,
 } p2m_type_t;

 /* Modifiers to the query */
@@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
                        | p2m_to_mask(p2m_ram_ro)         \
                        | p2m_to_mask(p2m_grant_map_ro)   \
                        | p2m_to_mask(p2m_ram_shared)     \
-                       | p2m_to_mask(p2m_mmio_write_dm))
+                       | p2m_to_mask(p2m_ioreq_server))

 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..a1eae52 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -83,7 +83,7 @@ typedef enum {
     HVMMEM_ram_rw,             /* Normal read/write guest RAM */
     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
-    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
+    HVMMEM_ioreq_server,
 } hvmmem_type_t;

 /* Following tools-only interfaces may change in future. */