From patchwork Fri Sep 2 10:47:20 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhang <yu.c.zhang@linux.intel.com>
X-Patchwork-Id: 9310709
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 2 Sep 2016 18:47:20 +0800
Message-Id: <1472813240-11011-5-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1472813240-11011-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1472813240-11011-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper, Paul Durrant,
 zhiyuan.lv@intel.com, Jan Beulich
Subject: [Xen-devel] [PATCH v6 4/4] x86/ioreq server: Reset outstanding
 p2m_ioreq_server entries when an ioreq server unmaps.
List-Id: Xen developer discussion

This patch resets p2m_ioreq_server entries back to p2m_ram_rw after an
ioreq server has unmapped. The resync is done both asynchronously, with
the current p2m_change_entry_type_global() interface, and synchronously,
by iterating over the p2m table.

The synchronous reset is necessary because we need to guarantee that the
p2m table is clean before another ioreq server is mapped. And since
sweeping the p2m table can be time consuming, it is done with hypercall
continuation. The asynchronous approach is taken as well, so that
p2m_ioreq_server entries can also be reset while the HVM domain is
scheduled between hypercall continuations.

This patch also disallows live migration while there are still
outstanding p2m_ioreq_server entries. The core reason is that the current
implementation of p2m_change_entry_type_global() cannot tell the state of
p2m_ioreq_server entries (i.e. cannot decide whether an entry is still to
be emulated or is to be resynced).

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian

changes in v2:
  - Move the calculation of the ioreq server page entry_count into
    p2m_change_type_one() so that we do not need a separate lock.
    Note: entry_count is also updated in resolve_misconfig()/do_recalc();
    fortunately, callers of both routines already hold the p2m lock.
  - Simplify the logic in hvmop_set_mem_type().
  - Introduce routine p2m_finish_type_change() to walk the p2m table
    and do the p2m reset.
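
For readers less familiar with the mechanism: the synchronous sweep relies
on Xen's usual hypercall-continuation pattern: process a bounded batch of
gfns, then check whether preemption is pending and, if so, return -ERESTART
with the current position saved, so the hypercall is re-issued from where it
stopped. Below is a minimal, self-contained C sketch of that pattern (not
Xen code; BATCH, preempt_pending() and reset_one_gfn() are illustrative
stand-ins for HVMOP_op_mask, hypercall_preempt_check() and the per-gfn type
reset):

    /*
     * Stand-alone model of the hypercall-continuation pattern used by the
     * synchronous sweep.  BATCH, preempt_pending() and reset_one_gfn() are
     * illustrative stand-ins, not the Xen symbols used in the patch.
     */
    #include <stdio.h>
    #include <stdbool.h>

    #define RC_ERESTART  (-1)    /* stands in for Xen's -ERESTART */
    #define BATCH        256     /* stands in for HVMOP_op_mask + 1 */
    #define MAX_GFN      1000    /* stands in for p2m->max_mapped_pfn */

    static unsigned long outstanding = 700; /* entries still p2m_ioreq_server */

    static void reset_one_gfn(unsigned long gfn)
    {
        (void)gfn;
        if ( outstanding )
            outstanding--;       /* one entry reset back to p2m_ram_rw */
    }

    static bool preempt_pending(unsigned long gfn)
    {
        return (gfn % (2 * BATCH)) == 0; /* pretend preemption fires every other batch */
    }

    /* One "hypercall" invocation: sweeps from *iter, may request a continuation. */
    static int sweep(unsigned long *iter)
    {
        unsigned long gfn = *iter;

        while ( outstanding && gfn <= MAX_GFN )
        {
            unsigned long end = gfn + BATCH - 1;

            for ( ; gfn <= end && gfn <= MAX_GFN; gfn++ )
                reset_one_gfn(gfn);

            if ( gfn <= MAX_GFN && preempt_pending(gfn) )
            {
                *iter = gfn;     /* record progress for the re-issued call */
                return RC_ERESTART;
            }
        }

        return 0;
    }

    int main(void)
    {
        unsigned long iter = 0;
        int rc;

        /* The caller keeps re-issuing the call until no continuation is requested. */
        while ( (rc = sweep(&iter)) == RC_ERESTART )
            printf("continuation at gfn %lu, %lu entries left\n", iter, outstanding);

        printf("done, rc=%d, outstanding=%lu\n", rc, outstanding);
        return 0;
    }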
---
 xen/arch/x86/hvm/hvm.c    | 38 +++++++++++++++++++++++++++++++++---
 xen/arch/x86/mm/hap/hap.c |  9 +++++++++
 xen/arch/x86/mm/p2m-ept.c |  6 +++++-
 xen/arch/x86/mm/p2m-pt.c  | 10 ++++++++--
 xen/arch/x86/mm/p2m.c     | 49 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/p2m.h |  9 ++++++++-
 6 files changed, 114 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9b419d2..200c661 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5522,11 +5522,13 @@ static int hvmop_set_mem_type(
 }
 
 static int hvmop_map_mem_type_to_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop,
+    unsigned long *iter)
 {
     xen_hvm_map_mem_type_to_ioreq_server_t op;
     struct domain *d;
     int rc;
+    unsigned long gfn = *iter;
 
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
@@ -5551,7 +5553,35 @@ static int hvmop_map_mem_type_to_ioreq_server(
     if ( rc != 0 )
         goto out;
 
-    rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
+    if ( gfn == 0 )
+        rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
+
+    /*
+     * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
+     * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
+     */
+    if ( op.flags == 0 && rc == 0 )
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        while ( read_atomic(&p2m->ioreq.entry_count) &&
+                gfn <= p2m->max_mapped_pfn )
+        {
+            unsigned long gfn_end = gfn + HVMOP_op_mask;
+
+            p2m_finish_type_change(d, gfn, gfn_end,
+                                   p2m_ioreq_server, p2m_ram_rw);
+
+            /* Check for continuation if it's not the last iteration. */
+            if ( ++gfn_end <= p2m->max_mapped_pfn &&
+                 hypercall_preempt_check() )
+            {
+                rc = -ERESTART;
+                *iter = gfn_end;
+                goto out;
+            }
+        }
+    }
 
  out:
     rcu_unlock_domain(d);
@@ -5570,6 +5600,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
         case HVMOP_modified_memory:
         case HVMOP_set_mem_type:
+        case HVMOP_map_mem_type_to_ioreq_server:
             mask = HVMOP_op_mask;
             break;
         }
@@ -5599,7 +5630,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case HVMOP_map_mem_type_to_ioreq_server:
         rc = hvmop_map_mem_type_to_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_map_mem_type_to_ioreq_server_t));
+            guest_handle_cast(arg, xen_hvm_map_mem_type_to_ioreq_server_t),
+            &start_iter);
         break;
 
     case HVMOP_set_ioreq_server_state:
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3218fa2..5df2d62 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -190,6 +190,15 @@ out:
  */
 static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
 {
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    /*
+     * Refuse to turn on global log-dirty mode if
+     * there's outstanding p2m_ioreq_server pages.
+     */
+    if ( log_global && read_atomic(&p2m->ioreq.entry_count) )
+        return -EBUSY;
+
     /* turn on PG_log_dirty bit in paging mode */
     paging_lock(d);
     d->arch.paging.mode |= PG_log_dirty;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 700420a..6679ae7 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -545,6 +545,9 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 e.ipat = ipat;
                 if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
                 {
+                     if ( e.sa_p2mt == p2m_ioreq_server )
+                         p2m->ioreq.entry_count--;
+
                      e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i) ?
                                  p2m_ram_logdirty : p2m_ram_rw;
                      ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
@@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
     if ( is_epte_valid(ept_entry) )
     {
         if ( (recalc || ept_entry->recalc) &&
-             p2m_is_changeable(ept_entry->sa_p2mt) )
+             p2m_is_changeable(ept_entry->sa_p2mt) &&
+             (ept_entry->sa_p2mt != p2m_ioreq_server) )
             *t = p2m_is_logdirty_range(p2m, gfn, gfn) ?
                  p2m_ram_logdirty : p2m_ram_rw;
         else
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 46a56fa..7f31c0e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -439,11 +439,13 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
          needs_recalc(l1, *pent) )
     {
         l1_pgentry_t e = *pent;
+        p2m_type_t p2mt_old;
 
         if ( !valid_recalc(l1, e) )
             P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
                       p2m->domain->domain_id, gfn, level);
-        if ( p2m_is_changeable(p2m_flags_to_type(l1e_get_flags(e))) )
+        p2mt_old = p2m_flags_to_type(l1e_get_flags(e));
+        if ( p2m_is_changeable(p2mt_old) )
         {
             unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
             p2m_type_t p2mt = p2m_is_logdirty_range(p2m, gfn & mask, gfn | ~mask)
@@ -463,6 +465,10 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                 mfn &= ~(_PAGE_PSE_PAT >> PAGE_SHIFT);
                 flags |= _PAGE_PSE;
             }
+
+            if ( p2mt_old == p2m_ioreq_server )
+                p2m->ioreq.entry_count--;
+
             e = l1e_from_pfn(mfn, flags);
             p2m_add_iommu_flags(&e, level,
                                 (p2mt == p2m_ram_rw)
@@ -729,7 +735,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 static inline p2m_type_t recalc_type(bool_t recalc, p2m_type_t t,
                                      struct p2m_domain *p2m, unsigned long gfn)
 {
-    if ( !recalc || !p2m_is_changeable(t) )
+    if ( !recalc || !p2m_is_changeable(t) || (t == p2m_ioreq_server) )
         return t;
     return p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
                                                 : p2m_ram_rw;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6e4cb1f..6581a70 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -313,6 +313,9 @@ int p2m_set_ioreq_server(struct domain *d,
 
         p2m->ioreq.server = NULL;
         p2m->ioreq.flags = 0;
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
     }
     else
     {
@@ -957,6 +960,23 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
                          p2m->default_access)
          : -EBUSY;
 
+    if ( !rc )
+    {
+        switch ( nt )
+        {
+        case p2m_ram_rw:
+            if ( ot == p2m_ioreq_server )
+                p2m->ioreq.entry_count--;
+            break;
+        case p2m_ioreq_server:
+            if ( ot == p2m_ram_rw )
+                p2m->ioreq.entry_count++;
+            break;
+        default:
+            break;
+        }
+    }
+
     gfn_unlock(p2m, gfn, 0);
 
     return rc;
@@ -1021,6 +1041,35 @@ void p2m_change_type_range(struct domain *d,
     p2m_unlock(p2m);
 }
 
+/* Synchronously modify the p2m type of a range of gfns from ot to nt. */
+void p2m_finish_type_change(struct domain *d,
+                            unsigned long start, unsigned long end,
+                            p2m_type_t ot, p2m_type_t nt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_type_t t;
+    unsigned long gfn = start;
+
+    ASSERT(start <= end);
+    ASSERT(ot != nt);
+    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+
+    p2m_lock(p2m);
+
+    end = (end > p2m->max_mapped_pfn) ? p2m->max_mapped_pfn : end;
+    while ( gfn <= end )
+    {
+        get_gfn_query_unlocked(d, gfn, &t);
+
+        if ( t == ot )
+            p2m_change_type_one(d, gfn, t, nt);
+
+        gfn++;
+    }
+
+    p2m_unlock(p2m);
+}
+
 /*
  * Returns:
  *     0 for success
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4924c4b..bf9adca 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -120,7 +120,8 @@ typedef unsigned int p2m_query_t;
 
 /* Types that can be subject to bulk transitions. */
 #define P2M_CHANGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
-                              | p2m_to_mask(p2m_ram_logdirty) )
+                              | p2m_to_mask(p2m_ram_logdirty) \
+                              | p2m_to_mask(p2m_ioreq_server))
 
 #define P2M_POD_TYPES (p2m_to_mask(p2m_populate_on_demand))
 
@@ -349,6 +350,7 @@ struct p2m_domain {
          * are to be emulated by an ioreq server.
          */
         unsigned int flags;
+        unsigned int entry_count;
     } ioreq;
 };
 
@@ -604,6 +606,11 @@ void p2m_change_type_range(struct domain *d,
 int p2m_change_type_one(struct domain *d, unsigned long gfn,
                         p2m_type_t ot, p2m_type_t nt);
 
+/* Synchronously change types across a range of p2m entries (start ... end) */
+void p2m_finish_type_change(struct domain *d,
+                            unsigned long start, unsigned long end,
+                            p2m_type_t ot, p2m_type_t nt);
+
 /* Report a change affecting memory types. */
 void p2m_memory_type_changed(struct domain *d);
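
As a side note on the bookkeeping: the hunks in p2m_change_type_one(),
resolve_misconfig()/do_recalc() and hap_enable_log_dirty() together maintain
one invariant: entry_count counts how many p2m entries currently hold the
p2m_ioreq_server type, and global log-dirty mode (hence live migration) is
refused with -EBUSY while that count is non-zero. A compact stand-alone C
model of that invariant (illustrative names only, not the patched Xen code):

    /*
     * Stand-alone model of the entry_count bookkeeping.  The enum, the
     * helpers and their names are illustrative; they are not the Xen code
     * from the patch.
     */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { RAM_RW, RAM_LOGDIRTY, IOREQ_SERVER } p2m_type;

    static unsigned int entry_count;  /* outstanding p2m_ioreq_server entries */

    /* Mirror of the switch added to p2m_change_type_one(). */
    static void account_type_change(p2m_type ot, p2m_type nt)
    {
        if ( ot == RAM_RW && nt == IOREQ_SERVER )
            entry_count++;
        else if ( ot == IOREQ_SERVER && nt == RAM_RW )
            entry_count--;
    }

    /* Mirror of the check added to hap_enable_log_dirty(). */
    static bool enable_log_dirty(bool log_global)
    {
        if ( log_global && entry_count )
            return false;             /* -EBUSY: outstanding ioreq_server pages */
        return true;
    }

    int main(void)
    {
        account_type_change(RAM_RW, IOREQ_SERVER);  /* page handed to emulation */
        printf("log-dirty allowed? %d\n", enable_log_dirty(true));  /* 0: refused */

        account_type_change(IOREQ_SERVER, RAM_RW);  /* entry reset on unmap/sweep */
        printf("log-dirty allowed? %d\n", enable_log_dirty(true));  /* 1: allowed */
        return 0;
    }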