From patchwork Mon Mar 9 10:23:03 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11426643
Date: Mon, 9 Mar 2020 10:23:03 +0000
Message-ID: <20200309102304.1251-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200309102304.1251-1-paul@xen.org>
References: <20200309102304.1251-1-paul@xen.org>
Subject: [Xen-devel] [PATCH v5 5/6] mm: add 'is_special_page' inline function...
List-Id: Xen developer discussion
From: Paul Durrant

... to cover xenheap and PGC_extra pages.

PGC_extra pages are intended to hold data structures that are associated
with a domain and may be mapped by that domain. They should not be treated
as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
where code currently tests is_xen_heap_page() it should also check for the
PGC_extra bit in 'count_info'.

This patch therefore defines is_special_page() to cover both cases and
converts tests of is_xen_heap_page() to is_special_page() where
appropriate.

Signed-off-by: Paul Durrant
Acked-by: Tamas K Lengyel
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan

v4:
 - Use inline function instead of macro
 - Add missing conversions from is_xen_heap_page()

v3:
 - Delete obsolete comment.

v2:
 - New in v2
---
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/mm.c               |  9 ++++-----
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/arch/x86/tboot.c            |  4 ++--
 xen/include/xen/mm.h            |  5 +++++
 8 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..add70126b9 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -394,7 +394,7 @@ long arch_do_domctl(
             page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
 
             if ( unlikely(!page) ||
-                 unlikely(is_xen_heap_page(page)) )
+                 unlikely(is_special_page(page)) )
             {
                 if ( unlikely(p2m_is_broken(t)) )
                     type = XEN_DOMCTL_PFINFO_BROKEN;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ba7563ed3c..353bde5c2c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1014,7 +1014,7 @@ get_page_from_l1e(
         unsigned long cacheattr = pte_flags_to_cacheattr(l1f);
         int err;
 
-        if ( is_xen_heap_page(page) )
+        if ( is_special_page(page) )
         {
             if ( write )
                 put_page_type(page);
@@ -2447,7 +2447,7 @@ static int cleanup_page_mappings(struct page_info *page)
     {
         page->count_info &= ~PGC_cacheattr_mask;
 
-        BUG_ON(is_xen_heap_page(page));
+        BUG_ON(is_special_page(page));
 
         rc = update_xen_mappings(mfn, 0);
     }
@@ -2477,7 +2477,7 @@ static int cleanup_page_mappings(struct page_info *page)
             rc = rc2;
     }
 
-    if ( likely(!is_xen_heap_page(page)) )
+    if ( likely(!is_special_page(page)) )
     {
         ASSERT((page->u.inuse.type_info &
                 (PGT_type_mask | PGT_count_mask)) == PGT_writable_page);
@@ -4216,8 +4216,7 @@ int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;
 
-    if ( owner != d || is_xen_heap_page(page) ||
-         (page->count_info & PGC_extra) )
+    if ( owner != d || is_special_page(page) )
         goto fail_put;
 
     /*
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 50768f2547..c091b03ea3 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -77,7 +77,7 @@ int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn)
      * pageable() predicate for this, due to it having the same properties
      * that we want.
      */
-    if ( !p2m_is_pageable(p2mt) || is_xen_heap_page(pg) )
+    if ( !p2m_is_pageable(p2mt) || is_special_page(pg) )
     {
         rc = -EINVAL;
         goto err;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..f49f27a3ef 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -840,9 +840,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
-    /* Skip xen heap pages */
     page = mfn_to_page(mfn);
-    if ( !page || is_xen_heap_page(page) )
+    if ( !page || is_special_page(page) )
         goto out;
 
     /* Check if there are mem_access/remapped altp2m entries for this page */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index cba3ab1eba..e835940d86 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2087,19 +2087,22 @@ static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
          * The qemu helper process has an untyped mapping of this dom's RAM
          * and the HVM restore program takes another.
          * Also allow one typed refcount for
-         * - Xen heap pages, to match share_xen_page_with_guest(),
-         * - ioreq server pages, to match prepare_ring_for_helper().
+         * - special pages, which are explicitly referenced and mapped by
+         *   Xen.
+         * - ioreq server pages, which may be special pages or normal
+         *   guest pages with an extra reference taken by
+         *   prepare_ring_for_helper().
          */
         if ( !(shadow_mode_external(d)
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
-                   == (is_xen_heap_page(page) ||
+                   == (is_special_page(page) ||
                        (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
             printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
-                   " (gfn %"PRI_gfn"): c=%lx t=%lx x=%d i=%d\n",
+                   " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
                    mfn_x(gmfn), gfn_x(gfn),
                    page->count_info, page->u.inuse.type_info,
-                   !!is_xen_heap_page(page),
+                   !!is_special_page(page),
                    (is_hvm_domain(d) && is_ioreq_server_page(d, page)));
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..ac19d203d7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         !is_xen_heap_mfn(target_mfn) )
+         !is_special_page(mfn_to_page(target_mfn)) )
     {
         int type;
 
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 6cc020cb71..2fd7ce5305 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -189,7 +189,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
-        if ( is_page_in_use(page) && !is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && !is_special_page(page) )
         {
             if ( page->count_info & PGC_page_table )
             {
@@ -294,7 +294,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
                            + 3 * PAGE_SIZE)) )
             continue; /* skip tboot and its page tables */
 
-        if ( is_page_in_use(page) && is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && is_special_page(page) )
         {
             void *pg;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d0d095d9c7..373de59969 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -285,6 +285,11 @@ extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+static inline bool is_special_page(struct page_info *page)
+{
+    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
+}
+
 #ifndef page_list_entry
 struct page_list_head
 {
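
A minimal standalone sketch of the predicate this patch adds, for readers
without the Xen tree at hand: the struct layout and PGC_* bit positions
below are illustrative stand-ins rather than Xen's real definitions, but
the is_special_page() body matches the xen/include/xen/mm.h hunk above,
and the caller mirrors the conversion pattern applied throughout the patch
(e.g. in steal_page(), which previously open-coded both checks).

/* Illustrative stand-ins for Xen's page_info and PGC_* flags. */
#include <stdbool.h>
#include <stdio.h>

#define PGC_xen_heap (1UL << 0)  /* hypothetical bit positions */
#define PGC_extra    (1UL << 1)

struct page_info {
    unsigned long count_info;
};

static inline bool is_xen_heap_page(struct page_info *page)
{
    return page->count_info & PGC_xen_heap;
}

/* Same body as the hunk above: covers xenheap and PGC_extra pages. */
static inline bool is_special_page(struct page_info *page)
{
    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
}

int main(void)
{
    struct page_info extra_pg = { .count_info = PGC_extra };
    struct page_info ram_pg   = { .count_info = 0 };

    /* A caller that used to test both conditions now needs one predicate. */
    printf("PGC_extra page special? %d\n", is_special_page(&extra_pg)); /* 1 */
    printf("plain RAM page special? %d\n", is_special_page(&ram_pg));   /* 0 */
    return 0;
}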