From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, keir@xen.org, jbeulich@suse.com,
    george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, tim@xen.org,
    Paul.Durrant@citrix.com, zhiyuan.lv@intel.com, jun.nakajima@intel.com
Date: Wed, 16 Mar 2016 20:21:56 +0800
Message-Id: <1458130916-30068-1-git-send-email-yu.c.zhang@linux.intel.com>
Subject: [Xen-devel] [PATCH 2/3] Rename p2m_mmio_write_dm to p2m_ioreq_server

Previously, the p2m type p2m_mmio_write_dm was introduced for
write-protected memory pages whose write operations are supposed to be
forwarded to and emulated by an ioreq server. However, the use of a
rangeset limits the number of guest pages that can be write-protected.
This patch replaces the p2m type p2m_mmio_write_dm with a new name,
p2m_ioreq_server, meaning that pages of this p2m type can be claimed
by one ioreq server instead of being tracked inside that server's
rangeset. Follow-up patches will add the related hvmop handling code
which maps the p2m_ioreq_server type to an ioreq server.

Signed-off-by: Paul Durrant <Paul.Durrant@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
---
 xen/arch/x86/hvm/hvm.c          | 10 +++++-----
 xen/arch/x86/mm/p2m-ept.c       |  2 +-
 xen/arch/x86/mm/p2m-pt.c        |  2 +-
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/include/asm-x86/p2m.h       |  7 ++++---
 xen/include/public/hvm/hvm_op.h |  2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3ccd33f..07eee4a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3176,7 +3176,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) ||
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
         __put_gfn(p2m, gfn);
         if ( ap2m_active )
@@ -6587,8 +6587,8 @@ static int hvmop_get_mem_type(
         get_gfn_query_unlocked(d, a.pfn, &t);
         if ( p2m_is_mmio(t) )
             a.mem_type = HVMMEM_mmio_dm;
-        else if ( t == p2m_mmio_write_dm )
-            a.mem_type = HVMMEM_mmio_write_dm;
+        else if ( t == p2m_ioreq_server )
+            a.mem_type = HVMMEM_ioreq_server;
         else if ( p2m_is_readonly(t) )
             a.mem_type = HVMMEM_ram_ro;
         else if ( p2m_is_ram(t) )
@@ -6620,7 +6620,7 @@ static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
 {
     if ( p2m_is_ram(old) ||
          (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-         (old == p2m_mmio_write_dm && new == p2m_ram_rw) )
+         (old == p2m_ioreq_server && new == p2m_ram_rw) )
         return 1;
 
     return 0;
@@ -6640,7 +6640,7 @@ static int hvmop_set_mem_type(
         [HVMMEM_ram_rw]  = p2m_ram_rw,
         [HVMMEM_ram_ro]  = p2m_ram_ro,
         [HVMMEM_mmio_dm] = p2m_mmio_dm,
-        [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
     };
 
     if ( copy_from_guest(&a, arg, 1) )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 3cb6868..380ec25 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_mmio_write_dm:
+        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..eabd2e3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_mmio_write_dm:
+    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
     case p2m_ram_ro:
     case p2m_ram_logdirty:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e5c8499..c81302a 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
 
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
+         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5392eb0..084a1f2 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -71,7 +71,7 @@ typedef enum {
     p2m_ram_shared = 12,          /* Shared or sharable memory */
     p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
     p2m_map_foreign  = 14,        /* ram pages from foreign domain */
-    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
+    p2m_ioreq_server = 15,
 } p2m_type_t;
 
 /* Modifiers to the query */
@@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
                        | p2m_to_mask(p2m_ram_ro)         \
                        | p2m_to_mask(p2m_grant_map_ro)   \
                        | p2m_to_mask(p2m_ram_shared)     \
-                       | p2m_to_mask(p2m_mmio_write_dm))
+                       | p2m_to_mask(p2m_ioreq_server))
 
 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro) \
@@ -174,7 +174,8 @@ typedef unsigned int p2m_query_t;
 
 #define p2m_is_any_ram(_t)  (p2m_to_mask(_t) &                   \
                              (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
-                              p2m_to_mask(p2m_map_foreign)))
+                              p2m_to_mask(p2m_map_foreign) |     \
+                              p2m_to_mask(p2m_ioreq_server)))
 
 #define p2m_allows_invalid_mfn(t) (p2m_to_mask(t) & P2M_INVALID_MFN_TYPES)
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..a1eae52 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -83,7 +83,7 @@ typedef enum {
     HVMMEM_ram_rw,             /* Normal read/write guest RAM */
     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
-    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
+    HVMMEM_ioreq_server,
 } hvmmem_type_t;
 
 /* Following tools-only interfaces may change in future. */
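
For reference, here is a minimal, hypothetical sketch (not part of this
patch) of how a device model or toolstack could claim a guest page with
the renamed type, assuming the follow-up hvmop handling mentioned above
is in place and that libxc's xc_hvm_set_mem_type() simply forwards the
new HVMMEM_ioreq_server value to HVMOP_set_mem_type:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <xenctrl.h>

static int claim_page_for_ioreq_server(domid_t domid, uint64_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int rc;

    if ( !xch )
        return -1;

    /* Mark one guest frame as HVMMEM_ioreq_server, so that writes to it
     * are forwarded to the ioreq server claiming this p2m type. */
    rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, gfn, 1);
    if ( rc )
        fprintf(stderr, "set_mem_type failed: %d\n", rc);

    xc_interface_close(xch);
    return rc;
}

This patch itself only renames the type; the write-forwarding behaviour
of such pages is unchanged until the follow-up patches land.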