From patchwork Wed Mar 8 15:33:52 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 9611479
From: Yu Zhang
To: xen-devel@lists.xen.org
Date: Wed, 8 Mar 2017 23:33:52 +0800
Message-Id: <1488987232-12349-6-git-send-email-yu.c.zhang@linux.intel.com>
In-Reply-To: <1488987232-12349-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1488987232-12349-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: George Dunlap, Andrew Cooper, Paul Durrant, zhiyuan.lv@intel.com,
 Jan Beulich
Subject: [Xen-devel] [PATCH v7 5/5] x86/ioreq server: Synchronously reset
 outstanding p2m_ioreq_server entries when an ioreq server unmaps.
List-Id: Xen developer discussion

After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
synchronously by iterating the p2m table.

The reset has to be synchronous because we must guarantee the p2m
table is clean before another ioreq server can be mapped. And since
sweeping the p2m table can be time consuming, it is done with
hypercall continuation.

Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap

Changes in v1:
  - This patch is split out from patch 4 of the previous version.
  - According to comments from Jan: update gfn_start when using
    hypercall continuation to reset the p2m type.
  - According to comments from Jan: use min() to compare gfn_end and
    the max mapped pfn in p2m_finish_type_change().
---
 xen/arch/x86/hvm/dm.c          | 43 +++++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/mm/p2m.c          | 29 ++++++++++++++++++++++++++++
 xen/include/asm-x86/p2m.h      |  5 +++++
 xen/include/public/hvm/dm_op.h |  3 +--
 4 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index f97478b..a92d5d7 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -288,6 +288,7 @@ static int inject_event(struct domain *d,
     return 0;
 }
 
+#define DMOP_op_mask 0xff
 static int dm_op(domid_t domid,
                  unsigned int nr_bufs,
                  xen_dm_op_buf_t bufs[])
@@ -315,10 +316,8 @@ static int dm_op(domid_t domid,
     }
 
     rc = -EINVAL;
-    if ( op.pad )
-        goto out;
 
-    switch ( op.op )
+    switch ( op.op & DMOP_op_mask )
     {
     case XEN_DMOP_create_ioreq_server:
     {
@@ -387,6 +386,10 @@ static int dm_op(domid_t domid,
     {
         const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
             &op.u.map_mem_type_to_ioreq_server;
+        unsigned long gfn_start = op.op & ~DMOP_op_mask;
+        unsigned long gfn_end;
+
+        const_op = false;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -396,8 +399,38 @@ static int dm_op(domid_t domid,
         if ( !hap_enabled(d) )
             break;
 
-        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                              data->type, data->flags);
+        if ( gfn_start == 0 )
+            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
+                                                  data->type, data->flags);
+        /*
+         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
+         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
+         */
+        if ( (gfn_start > 0) || (data->flags == 0 && rc == 0) )
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            while ( read_atomic(&p2m->ioreq.entry_count) &&
+                    gfn_start <= p2m->max_mapped_pfn )
+            {
+                gfn_end = gfn_start + DMOP_op_mask;
+
+                p2m_finish_type_change(d, gfn_start, gfn_end,
+                                       p2m_ioreq_server, p2m_ram_rw);
+
+                gfn_start = gfn_end + 1;
+
+                /* Check for continuation if it's not the last iteration. */
+                if ( gfn_start <= p2m->max_mapped_pfn &&
+                     hypercall_preempt_check() )
+                {
+                    rc = -ERESTART;
+                    op.op |= gfn_start;
+                    break;
+                }
+            }
+        }
+
         break;
     }
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 94d7141..9a81f00 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1038,6 +1038,35 @@ void p2m_change_type_range(struct domain *d,
     p2m_unlock(p2m);
 }
 
+/* Synchronously modify the p2m type of a range of gfns from ot to nt. */
+void p2m_finish_type_change(struct domain *d,
+                            unsigned long start, unsigned long end,
+                            p2m_type_t ot, p2m_type_t nt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_type_t t;
+    unsigned long gfn = start;
+
+    ASSERT(start <= end);
+    ASSERT(ot != nt);
+    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+
+    p2m_lock(p2m);
+
+    end = min(end, p2m->max_mapped_pfn);
+    while ( gfn <= end )
+    {
+        get_gfn_query_unlocked(d, gfn, &t);
+
+        if ( t == ot )
+            p2m_change_type_one(d, gfn, t, nt);
+
+        gfn++;
+    }
+
+    p2m_unlock(p2m);
+}
+
 /*
  * Returns:
  *    0 for success
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 395f125..3eadd89 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -611,6 +611,11 @@ void p2m_change_type_range(struct domain *d,
 int p2m_change_type_one(struct domain *d, unsigned long gfn,
                         p2m_type_t ot, p2m_type_t nt);
 
+/* Synchronously change types across a range of p2m entries (start ... end) */
+void p2m_finish_type_change(struct domain *d,
+                            unsigned long start, unsigned long end,
+                            p2m_type_t ot, p2m_type_t nt);
+
 /* Report a change affecting memory types. */
 void p2m_memory_type_changed(struct domain *d);
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index c643b67..23b364b 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -343,8 +343,7 @@ struct xen_dm_op_map_mem_type_to_ioreq_server {
 };
 
 struct xen_dm_op {
-    uint32_t op;
-    uint32_t pad;
+    uint64_t op;
     union {
         struct xen_dm_op_create_ioreq_server create_ioreq_server;
         struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
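
For readers unfamiliar with how the continuation works, below is a minimal
standalone sketch (plain C, compilable on its own; it is not Xen code and not
part of the patch, and all demo_* / DEMO_* names are invented for the
example). It mirrors the encoding used above: the low byte of the 64-bit op
field keeps the DMOP sub-op, the upper bits carry the gfn at which the next
invocation resumes the sweep, and the caller simply re-issues the op until it
stops getting -ERESTART back (modelled here as -1).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_op_mask 0xffUL    /* low byte: sub-op; upper bits: resume gfn */

static unsigned long max_mapped_pfn = 1000;    /* pretend highest mapped gfn */

/* Stand-in for p2m_finish_type_change(): pretend to reset one chunk of gfns. */
static void demo_finish_type_change(unsigned long start, unsigned long end)
{
    if ( end > max_mapped_pfn )
        end = max_mapped_pfn;
    printf("reset gfns %#lx..%#lx\n", start, end);
}

/* Stand-in for hypercall_preempt_check(): pretend we must yield after every chunk. */
static bool demo_preempt_check(void)
{
    return true;
}

/*
 * One simulated dm_op invocation.  Returns 0 once the sweep is complete, or
 * -1 (standing in for -ERESTART) after encoding the resume gfn into the
 * upper bits of *op so the next invocation picks up where this one stopped.
 */
static int demo_dm_op(uint64_t *op)
{
    unsigned long gfn_start = *op & ~DEMO_op_mask;
    unsigned long gfn_end;

    while ( gfn_start <= max_mapped_pfn )
    {
        gfn_end = gfn_start + DEMO_op_mask;

        demo_finish_type_change(gfn_start, gfn_end);

        gfn_start = gfn_end + 1;

        /* More gfns left and we should yield: stash the resume point. */
        if ( gfn_start <= max_mapped_pfn && demo_preempt_check() )
        {
            *op = (*op & DEMO_op_mask) | gfn_start;
            return -1;
        }
    }

    return 0;
}

int main(void)
{
    uint64_t op = 0x15;    /* arbitrary sub-op value kept in the low byte */
    int rc;

    /* The caller simply re-issues the op until the sweep no longer restarts. */
    while ( (rc = demo_dm_op(&op)) == -1 )
        printf("continuation: will resume from gfn %#lx\n",
               (unsigned long)(op & ~DEMO_op_mask));

    printf("sweep complete, rc = %d\n", rc);
    return 0;
}

Because each pass covers DMOP_op_mask + 1 gfns, the resume gfn is always a
multiple of 0x100, so its low byte never collides with the sub-op value kept
in the low byte of op.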