From patchwork Mon Mar  9 09:35:08 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11426507
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 9 Mar 2020 09:35:08 +0000
Message-ID: <20200309093511.1727-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200309093511.1727-1-paul@xen.org>
References: <20200309093511.1727-1-paul@xen.org>
Subject: [Xen-devel] [PATCH v4 3/6] x86 / pv: do not treat PGC_extra pages as RAM
List-Id: Xen developer discussion
Cc: Wei Liu, Paul Durrant, Andrew Cooper, Jan Beulich, Roger Pau Monné

From: Paul Durrant

This patch modifies several places
walking the domain's page_list to make them ignore PGC_extra pages:

- dump_pageframe_info() should ignore PGC_extra pages in its dump, as it
  determines whether to dump based on domain_tot_pages(), which also
  ignores PGC_extra pages.
- arch_set_info_guest() is looking for an L4 page table, which will
  definitely not be in a PGC_extra page.
- audit_p2m() should ignore PGC_extra pages, as it is perfectly legitimate
  for them not to be present in the P2M.
- dump_numa() should ignore PGC_extra pages, as they are essentially
  uninteresting in that context.
- dom0_construct_pv() should ignore PGC_extra pages when setting up the
  physmap, as they are only created for special purposes and, if they need
  to be mapped, will be mapped explicitly for whatever purpose is relevant.
- tboot_gen_domain_integrity() should ignore PGC_extra pages, as they
  should not form part of the measurement.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - Expand to cover more than just dom0_construct_pv()

v2:
 - New in v2
---
 xen/arch/x86/domain.c        | 6 +++++-
 xen/arch/x86/mm/p2m.c        | 3 +++
 xen/arch/x86/numa.c          | 3 +++
 xen/arch/x86/pv/dom0_build.c | 4 ++++
 xen/arch/x86/tboot.c         | 7 ++++++-
 5 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bdcc0d972a..f6ed25e8ee 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -231,6 +231,9 @@ void dump_pageframe_info(struct domain *d)
             unsigned int index = MASK_EXTR(page->u.inuse.type_info,
                                            PGT_type_mask);
 
+            if ( page->count_info & PGC_extra )
+                continue;
+
             if ( ++total[index] > 16 )
             {
                 switch ( page->u.inuse.type_info & PGT_type_mask )
@@ -1044,7 +1047,8 @@ int arch_set_info_guest(
         {
             struct page_info *page = page_list_remove_head(&d->page_list);
 
-            if ( page_lock(page) )
+            if ( !(page->count_info & PGC_extra) &&
+                 page_lock(page) )
             {
                 if ( (page->u.inuse.type_info & PGT_type_mask) ==
                      PGT_l4_page_table )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f51370327..71d2fb9bbc 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2843,6 +2843,9 @@ void audit_p2m(struct domain *d,
     spin_lock(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
+        if ( page->count_info & PGC_extra )
+            continue;
+
         mfn = mfn_x(page_to_mfn(page));
 
         P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..7e5aa8dc95 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -428,6 +428,9 @@ static void dump_numa(unsigned char key)
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
+            if ( page->count_info & PGC_extra )
+                continue;
+
             i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index dc16ef2e79..f8f1bbe2f4 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -792,6 +792,10 @@ int __init dom0_construct_pv(struct domain *d,
     {
         mfn = mfn_x(page_to_mfn(page));
         BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
+
+        if ( page->count_info & PGC_extra )
+            continue;
+
         if ( get_gpfn_from_mfn(mfn) >= count )
         {
             BUG_ON(is_pv_32bit_domain(d));
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..6cc020cb71 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -220,7 +220,12 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
-            void *pg = __map_domain_page(page);
+            void *pg;
+
+            if ( page->count_info & PGC_extra )
+                continue;
+
+            pg = __map_domain_page(page);
             vmac_update(pg, PAGE_SIZE, &ctx);
             unmap_domain_page(pg);
         }
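
For readers unfamiliar with the pattern the hunks above repeat, here is a
minimal, self-contained sketch of "walk the page list, but skip anything
carrying PGC_extra". It is not part of the patch: struct page_info, the
PGC_EXTRA value and the list linkage below are simplified stand-ins invented
for illustration, not Xen's real definitions.

#include <stdint.h>
#include <stdio.h>

#define PGC_EXTRA (1u << 31)          /* stand-in for Xen's PGC_extra flag */

struct page_info {
    uint32_t count_info;              /* reference count + PGC_* flags */
    struct page_info *next;           /* simplified list linkage */
};

/* Count only the RAM-like pages, ignoring any flagged PGC_EXTRA. */
static unsigned int count_ram_pages(const struct page_info *head)
{
    unsigned int total = 0;
    const struct page_info *pg;

    for ( pg = head; pg; pg = pg->next )
    {
        if ( pg->count_info & PGC_EXTRA )
            continue;                 /* extra pages are not guest RAM */
        total++;
    }

    return total;
}

int main(void)
{
    struct page_info extra = { .count_info = PGC_EXTRA, .next = NULL };
    struct page_info ram2  = { .count_info = 0, .next = &extra };
    struct page_info ram1  = { .count_info = 0, .next = &ram2 };

    printf("RAM-like pages: %u\n", count_ram_pages(&ram1)); /* prints 2 */
    return 0;
}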