From patchwork Fri Dec 13 17:37:40 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: George Dunlap
X-Patchwork-Id: 11291125
From: George Dunlap <george.dunlap@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 13 Dec 2019 17:37:40 +0000
Message-ID: <20191213173742.1960441-2-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191213173742.1960441-1-george.dunlap@citrix.com>
References: <20191213173742.1960441-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH 1/3] x86/mm: Use a more descriptive name for pagetable mfns
Cc: Andrew Cooper, George Dunlap, Jan Beulich

In many places, a PTE being modified is accompanied by the pagetable mfn
which contains the PTE (primarily in order to be able to maintain linear
mapping counts).  In many cases, this mfn is stored in the non-descript
variable (or argument) "pfn".

Replace these names with lNmfn, to indicate 1) that this is a pagetable
mfn, and 2) that it is at the same level as the PTE in question.  This
should be enough to remind readers that it is the mfn of the pagetable
containing the PTE.

No functional change.
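(Illustration only, not part of the patch: a minimal, self-contained C
sketch of the naming convention, using simplified stand-in types rather
than Xen's real l2_pgentry_t/mfn_t definitions.  The mfn passed alongside
an lN entry is named lNmfn -- the frame of the pagetable that contains the
entry, not the frame the entry points at.)

#include <stdio.h>

/* Simplified stand-ins for Xen's types -- illustration only. */
typedef unsigned long mfn_ul;                  /* machine frame number */
typedef struct { unsigned long bits; } l2e_t;  /* level-2 pagetable entry */

#define PAGE_SHIFT 12

/*
 * "l2mfn" is the mfn of the L2 pagetable containing "l2e"; under the old
 * naming this argument was just called "pfn".
 */
static int put_page_from_l2e_sketch(l2e_t l2e, mfn_ul l2mfn)
{
    /* An entry pointing back at its own table is a linear (self) mapping. */
    if ( (l2e.bits >> PAGE_SHIFT) == l2mfn )
        return 1;                       /* nothing to drop for linear maps */

    printf("dropping reference held by L2 table at mfn %#lx\n", l2mfn);
    return 0;
}

int main(void)
{
    l2e_t e = { .bits = 0x1234UL << PAGE_SHIFT };

    /* Linear case: the entry points at its own table, so nothing is dropped. */
    printf("result: %d\n", put_page_from_l2e_sketch(e, 0x1234UL));
    return 0;
}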
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Jan Beulich
---
v2:
- Also rename arguments for put_page_from_lNe

CC: Andrew Cooper
CC: Jan Beulich
---
 xen/arch/x86/mm.c | 68 +++++++++++++++++++++++------------------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9556e8f780..ceb656ca75 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1141,7 +1141,7 @@ static int get_page_and_type_from_mfn(
 define_get_linear_pagetable(l2);
 static int
 get_page_from_l2e(
-    l2_pgentry_t l2e, unsigned long pfn, struct domain *d, unsigned int flags)
+    l2_pgentry_t l2e, unsigned long l2mfn, struct domain *d, unsigned int flags)
 {
     unsigned long mfn = l2e_get_pfn(l2e);
     int rc;
@@ -1156,7 +1156,7 @@ get_page_from_l2e(
     ASSERT(!(flags & PTF_preemptible));
 
     rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, flags);
-    if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, pfn, d) )
+    if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, l2mfn, d) )
         rc = 0;
 
     return rc;
@@ -1165,7 +1165,7 @@ get_page_from_l2e(
 define_get_linear_pagetable(l3);
 static int
 get_page_from_l3e(
-    l3_pgentry_t l3e, unsigned long pfn, struct domain *d, unsigned int flags)
+    l3_pgentry_t l3e, unsigned long l3mfn, struct domain *d, unsigned int flags)
 {
     int rc;
 
@@ -1180,7 +1180,7 @@ get_page_from_l3e(
         l3e_get_mfn(l3e), PGT_l2_page_table, d, flags | PTF_preemptible);
     if ( unlikely(rc == -EINVAL) &&
          !is_pv_32bit_domain(d) &&
-         get_l3_linear_pagetable(l3e, pfn, d) )
+         get_l3_linear_pagetable(l3e, l3mfn, d) )
         rc = 0;
 
     return rc;
@@ -1189,7 +1189,7 @@ get_page_from_l3e(
 define_get_linear_pagetable(l4);
 static int
 get_page_from_l4e(
-    l4_pgentry_t l4e, unsigned long pfn, struct domain *d, unsigned int flags)
+    l4_pgentry_t l4e, unsigned long l4mfn, struct domain *d, unsigned int flags)
 {
     int rc;
 
@@ -1202,7 +1202,7 @@ get_page_from_l4e(
 
     rc = get_page_and_type_from_mfn(
         l4e_get_mfn(l4e), PGT_l3_page_table, d, flags | PTF_preemptible);
-    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, pfn, d) )
+    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, l4mfn, d) )
         rc = 0;
 
     return rc;
@@ -1329,10 +1329,10 @@ static int put_data_pages(struct page_info *page, bool writeable, int pt_shift)
  * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
  * Note also that this automatically deals correctly with linear p.t.'s.
  */
-static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
+static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long l2mfn,
                              unsigned int flags)
 {
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == l2mfn) )
         return 1;
 
     if ( l2e_get_flags(l2e) & _PAGE_PSE )
@@ -1340,13 +1340,13 @@ static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
                               l2e_get_flags(l2e) & _PAGE_RW,
                               L2_PAGETABLE_SHIFT);
 
-    return put_pt_page(l2e_get_page(l2e), mfn_to_page(_mfn(pfn)), flags);
+    return put_pt_page(l2e_get_page(l2e), mfn_to_page(_mfn(l2mfn)), flags);
 }
 
-static int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
+static int put_page_from_l3e(l3_pgentry_t l3e, unsigned long l3mfn,
                              unsigned int flags)
 {
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == pfn) )
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == l3mfn) )
         return 1;
 
     if ( unlikely(l3e_get_flags(l3e) & _PAGE_PSE) )
@@ -1354,16 +1354,16 @@ static int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
                               l3e_get_flags(l3e) & _PAGE_RW,
                               L3_PAGETABLE_SHIFT);
 
-    return put_pt_page(l3e_get_page(l3e), mfn_to_page(_mfn(pfn)), flags);
+    return put_pt_page(l3e_get_page(l3e), mfn_to_page(_mfn(l3mfn)), flags);
 }
 
-static int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn,
+static int put_page_from_l4e(l4_pgentry_t l4e, unsigned long l4mfn,
                              unsigned int flags)
 {
-    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || (l4e_get_pfn(l4e) == pfn) )
+    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || (l4e_get_pfn(l4e) == l4mfn) )
         return 1;
 
-    return put_pt_page(l4e_get_page(l4e), mfn_to_page(_mfn(pfn)), flags);
+    return put_pt_page(l4e_get_page(l4e), mfn_to_page(_mfn(l4mfn)), flags);
 }
 
 static int alloc_l1_table(struct page_info *page)
@@ -1460,13 +1460,13 @@ static int create_pae_xen_mappings(struct domain *d, l3_pgentry_t *pl3e)
 static int alloc_l2_table(struct page_info *page, unsigned long type)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
+    unsigned long l2mfn = mfn_x(page_to_mfn(page));
     l2_pgentry_t *pl2e;
     unsigned int i;
     int rc = 0;
     unsigned int partial_flags = page->partial_flags;
 
-    pl2e = map_domain_page(_mfn(pfn));
+    pl2e = map_domain_page(_mfn(l2mfn));
 
     /*
      * NB that alloc_l2_table will never set partial_pte on an l2; but
@@ -1492,7 +1492,7 @@ static int alloc_l2_table(struct page_info *page, unsigned long type)
             rc = -EINTR;
         }
         else
-            rc = get_page_from_l2e(l2e, pfn, d, partial_flags);
+            rc = get_page_from_l2e(l2e, l2mfn, d, partial_flags);
 
         /*
         * It shouldn't be possible for get_page_from_l2e to return
@@ -1559,14 +1559,14 @@ static int alloc_l2_table(struct page_info *page, unsigned long type)
 static int alloc_l3_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
+    unsigned long l3mfn = mfn_x(page_to_mfn(page));
     l3_pgentry_t *pl3e;
     unsigned int i;
     int rc = 0;
     unsigned int partial_flags = page->partial_flags;
     l3_pgentry_t l3e = l3e_empty();
 
-    pl3e = map_domain_page(_mfn(pfn));
+    pl3e = map_domain_page(_mfn(l3mfn));
 
     /*
      * PAE guests allocate full pages, but aren't required to initialize
@@ -1603,7 +1603,7 @@ static int alloc_l3_table(struct page_info *page)
             rc = -EINTR;
         }
         else
-            rc = get_page_from_l3e(l3e, pfn, d,
+            rc = get_page_from_l3e(l3e, l3mfn, d,
                                    partial_flags | PTF_retain_ref_on_restart);
 
         if ( rc == -ERESTART )
@@ -1786,8 +1786,8 @@ void zap_ro_mpt(mfn_t mfn)
 static int alloc_l4_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
-    l4_pgentry_t *pl4e = map_domain_page(_mfn(pfn));
+    unsigned long l4mfn = mfn_x(page_to_mfn(page));
+    l4_pgentry_t *pl4e = map_domain_page(_mfn(l4mfn));
     unsigned int i;
     int rc = 0;
     unsigned int partial_flags = page->partial_flags;
@@ -1809,7 +1809,7 @@ static int alloc_l4_table(struct page_info *page)
             rc = -EINTR;
         }
         else
-            rc = get_page_from_l4e(l4e, pfn, d,
+            rc = get_page_from_l4e(l4e, l4mfn, d,
                                    partial_flags | PTF_retain_ref_on_restart);
 
         if ( rc == -ERESTART )
@@ -1869,7 +1869,7 @@ static int alloc_l4_table(struct page_info *page)
 
     if ( !rc )
     {
-        init_xen_l4_slots(pl4e, _mfn(pfn),
+        init_xen_l4_slots(pl4e, _mfn(l4mfn),
                           d, INVALID_MFN, VM_ASSIST(d, m2p_strict));
         atomic_inc(&d->arch.pv.nr_l4_pages);
     }
@@ -1896,18 +1896,18 @@ static void free_l1_table(struct page_info *page)
 static int free_l2_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
+    unsigned long l2mfn = mfn_x(page_to_mfn(page));
     l2_pgentry_t *pl2e;
     int rc = 0;
     unsigned int partial_flags = page->partial_flags,
         i = page->nr_validated_ptes - !(partial_flags & PTF_partial_set);
 
-    pl2e = map_domain_page(_mfn(pfn));
+    pl2e = map_domain_page(_mfn(l2mfn));
 
     for ( ; ; )
     {
         if ( is_guest_l2_slot(d, page->u.inuse.type_info, i) )
-            rc = put_page_from_l2e(pl2e[i], pfn, partial_flags);
+            rc = put_page_from_l2e(pl2e[i], l2mfn, partial_flags);
 
         if ( rc < 0 )
             break;
@@ -1948,17 +1948,17 @@ static int free_l2_table(struct page_info *page)
 static int free_l3_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
+    unsigned long l3mfn = mfn_x(page_to_mfn(page));
     l3_pgentry_t *pl3e;
     int rc = 0;
     unsigned int partial_flags = page->partial_flags,
         i = page->nr_validated_ptes - !(partial_flags & PTF_partial_set);
 
-    pl3e = map_domain_page(_mfn(pfn));
+    pl3e = map_domain_page(_mfn(l3mfn));
 
     for ( ; ; )
     {
-        rc = put_page_from_l3e(pl3e[i], pfn, partial_flags);
+        rc = put_page_from_l3e(pl3e[i], l3mfn, partial_flags);
 
         if ( rc < 0 )
             break;
@@ -1995,15 +1995,15 @@ static int free_l3_table(struct page_info *page)
 static int free_l4_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
-    unsigned long pfn = mfn_x(page_to_mfn(page));
-    l4_pgentry_t *pl4e = map_domain_page(_mfn(pfn));
+    unsigned long l4mfn = mfn_x(page_to_mfn(page));
+    l4_pgentry_t *pl4e = map_domain_page(_mfn(l4mfn));
     int rc = 0;
     unsigned partial_flags = page->partial_flags,
         i = page->nr_validated_ptes - !(partial_flags & PTF_partial_set);
 
     do {
         if ( is_guest_l4_slot(d, i) )
-            rc = put_page_from_l4e(pl4e[i], pfn, partial_flags);
+            rc = put_page_from_l4e(pl4e[i], l4mfn, partial_flags);
         if ( rc < 0 )
             break;
         partial_flags = 0;