From patchwork Thu May 19 09:05:09 2016
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 9125611
From: Yu Zhang
To: xen-devel@lists.xen.org
Date: Thu, 19 May 2016 17:05:09 +0800
Message-Id: <1463648711-26595-2-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1463648711-26595-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1463648711-26595-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: Kevin Tian, Jan Beulich, George Dunlap, Andrew Cooper, Tim Deegan,
    Paul Durrant,
    zhiyuan.lv@intel.com, Jun Nakajima
Subject: [Xen-devel] [PATCH v4 1/3] x86/ioreq server: Rename p2m_mmio_write_dm to p2m_ioreq_server.

Previously, the p2m type p2m_mmio_write_dm was introduced for
write-protected memory pages whose write operations are supposed to be
forwarded to and emulated by an ioreq server. Yet the rangeset used to
track such pages restricts how many guest pages can be write-protected.

This patch renames p2m_mmio_write_dm to p2m_ioreq_server, to reflect
that this p2m type can be claimed by one ioreq server as a whole,
instead of each page being tracked inside the ioreq server's rangeset.
A new memory type, HVMMEM_ioreq_server, is now used in the
HVMOP_set/get_mem_type interface to set/get this p2m type. Follow-up
patches will add the HVMOP handling code that maps/unmaps the
p2m_ioreq_server type to/from an ioreq server.

Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Acked-by: Andrew Cooper
Acked-by: George Dunlap
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Tim Deegan

changes in v4:
- According to George's comments, move the HVMMEM_unused part into a
  separate patch (which has already been accepted);
- Removed George's Reviewed-by because of changes after v3;
- According to Wei Liu's comments, change the format of the commit
  message.

changes in v3:
- According to Jan & George's comments, keep HVMMEM_mmio_write_dm for
  old Xen interface versions, and replace it with HVMMEM_unused for
  Xen interfaces newer than 4.7.0. For p2m_ioreq_server, a new enum
  value - HVMMEM_ioreq_server - is introduced for the get/set mem
  type interfaces;
- Add George's Reviewed-by and the Acked-by from Tim & Andrew.

changes in v2:
- According to George Dunlap's comments, only rename the p2m type,
  with no behavior changes.
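For reference (not part of the patch itself): a device model claims a
write-protected page through the existing HVMOP_set/get_mem_type path.
Below is a minimal sketch, assuming the conventional libxc wrappers
xc_hvm_set_mem_type()/xc_hvm_get_mem_type(); their exact signatures are
an assumption here, nothing in this patch adds or changes them.

    /* Sketch only; libxc wrapper signatures assumed, not added here. */
    #include <xenctrl.h>

    static int claim_page(xc_interface *xch, domid_t domid, uint64_t pfn)
    {
        hvmmem_type_t t;

        /* Mark the pfn as p2m_ioreq_server: reads succeed, writes are
         * forwarded to the ioreq server for emulation. */
        if ( xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, pfn, 1) )
            return -1;

        /* Read the type back via HVMOP_get_mem_type to confirm. */
        if ( xc_hvm_get_mem_type(xch, domid, pfn, &t) )
            return -1;

        return t == HVMMEM_ioreq_server ? 0 : -1;
    }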
---
 xen/arch/x86/hvm/hvm.c          | 9 ++++++---
 xen/arch/x86/mm/p2m-ept.c       | 2 +-
 xen/arch/x86/mm/p2m-pt.c        | 2 +-
 xen/arch/x86/mm/shadow/multi.c  | 2 +-
 xen/include/asm-x86/p2m.h       | 4 ++--
 xen/include/public/hvm/hvm_op.h | 5 +++--
 6 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5040a5c..21bc45c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) ||
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
         __put_gfn(p2m, gfn);
         if ( ap2m_active )
@@ -5507,6 +5507,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         get_gfn_query_unlocked(d, a.pfn, &t);
         if ( p2m_is_mmio(t) )
             a.mem_type = HVMMEM_mmio_dm;
+        else if ( t == p2m_ioreq_server )
+            a.mem_type = HVMMEM_ioreq_server;
         else if ( p2m_is_readonly(t) )
             a.mem_type = HVMMEM_ram_ro;
         else if ( p2m_is_ram(t) )
@@ -5537,7 +5539,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             [HVMMEM_ram_rw]  = p2m_ram_rw,
             [HVMMEM_ram_ro]  = p2m_ram_ro,
             [HVMMEM_mmio_dm] = p2m_mmio_dm,
-            [HVMMEM_unused] = p2m_invalid
+            [HVMMEM_unused] = p2m_invalid,
+            [HVMMEM_ioreq_server] = p2m_ioreq_server
         };
 
         if ( copy_from_guest(&a, arg, 1) )
@@ -5586,7 +5589,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             if ( !p2m_is_ram(t) &&
                  (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
-                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
+                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
             {
                 put_gfn(d, pfn);
                 goto setmemtype_fail;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..a45a573 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_mmio_write_dm:
+        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..eabd2e3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_mmio_write_dm:
+    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
     case p2m_ram_ro:
     case p2m_ram_logdirty:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 428be37..b322293 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3226,7 +3226,7 @@ static int sh_page_fault(struct vcpu *v,
 
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
+         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 65675a2..f3e87d6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -71,7 +71,7 @@ typedef enum {
     p2m_ram_shared = 12,      /* Shared or sharable memory */
     p2m_ram_broken = 13,      /* Broken page, access cause domain crash */
     p2m_map_foreign = 14,     /* ram pages from foreign domain */
-    p2m_mmio_write_dm = 15,   /* Read-only; writes go to the device model */
+    p2m_ioreq_server = 15,
 } p2m_type_t;
 
 /* Modifiers to the query */
@@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
                        | p2m_to_mask(p2m_ram_ro)        \
                        | p2m_to_mask(p2m_grant_map_ro)  \
                        | p2m_to_mask(p2m_ram_shared)    \
-                       | p2m_to_mask(p2m_mmio_write_dm))
+                       | p2m_to_mask(p2m_ioreq_server))
 
 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index ebb907a..b3e45cf 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -84,11 +84,12 @@ typedef enum {
     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
 #if __XEN_INTERFACE_VERSION__ < 0x00040700
-    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
+    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
 #else
-    HVMMEM_unused              /* Placeholder; setting memory to this type
+    HVMMEM_unused,             /* Placeholder; setting memory to this type
                                   will fail for code after 4.7.0 */
 #endif
+    HVMMEM_ioreq_server
 } hvmmem_type_t;
 
 /* Following tools-only interfaces may change in future. */
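Note that the do_hvm_op() change above deliberately keeps one
transition open: a page whose p2m type is p2m_ioreq_server may still be
set back to HVMMEM_ram_rw. A hedged sketch of that release path, using
the same assumed libxc wrapper as in the earlier example:

    /* Sketch only: return a previously claimed page to ordinary RAM.
     * Permitted by the (t != p2m_ioreq_server ||
     * a.hvmmem_type != HVMMEM_ram_rw) check in do_hvm_op(). */
    static int release_page(xc_interface *xch, domid_t domid, uint64_t pfn)
    {
        return xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw, pfn, 1);
    }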