From patchwork Mon Mar 18 20:04:01 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13595768
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, x86@kernel.org, Muchun Song, Mike Rapoport,
	Matthew Wilcox, sparclinux@vger.kernel.org, Jason Gunthorpe,
	linuxppc-dev@lists.ozlabs.org, Christophe Leroy,
	linux-arm-kernel@lists.infradead.org, peterx@redhat.com
Subject: [PATCH v2 11/14] mm/treewide: Replace pXd_huge() with pXd_leaf()
Date: Mon, 18 Mar 2024 16:04:01 -0400
Message-ID: <20240318200404.448346-12-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240318200404.448346-1-peterx@redhat.com>
References: <20240318200404.448346-1-peterx@redhat.com>
MIME-Version: 1.0

From: Peter Xu

Now that we're sure all pXd_huge() definitions are the same as
pXd_leaf(), reuse the latter.  Luckily, pXd_huge() isn't widely used.
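For context: a "leaf" entry is a page table entry that maps memory directly
(a block/huge mapping) instead of pointing to a next-level page table, which
is the same property the old pXd_huge() helpers reported.  The sketch below
is illustrative only -- the demo_* names and the bit layout are invented for
this example and are not any architecture's real page table encoding -- but
it shows the distinction a leaf check expresses:

/*
 * Toy model of a "leaf" check.  The bit layout and demo_* helpers are
 * made up for illustration; they are not kernel code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PRESENT	(1ULL << 0)	/* entry is valid */
#define DEMO_TABLE	(1ULL << 1)	/* entry points to a next-level table */

typedef struct { uint64_t val; } demo_pmd_t;

static bool demo_pmd_present(demo_pmd_t pmd)
{
	return pmd.val & DEMO_PRESENT;
}

/* A leaf is a valid entry that does not point to a lower-level table. */
static bool demo_pmd_leaf(demo_pmd_t pmd)
{
	return demo_pmd_present(pmd) && !(pmd.val & DEMO_TABLE);
}

int main(void)
{
	demo_pmd_t table_entry = { .val = DEMO_PRESENT | DEMO_TABLE };
	demo_pmd_t huge_map    = { .val = DEMO_PRESENT };

	printf("table entry: leaf=%d\n", demo_pmd_leaf(table_entry)); /* 0 */
	printf("huge map:    leaf=%d\n", demo_pmd_leaf(huge_map));    /* 1 */
	return 0;
}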
Signed-off-by: Peter Xu
---
 arch/arm/include/asm/pgtable-3level.h | 2 +-
 arch/arm64/include/asm/pgtable.h      | 2 +-
 arch/arm64/mm/hugetlbpage.c           | 4 ++--
 arch/loongarch/mm/hugetlbpage.c       | 2 +-
 arch/mips/mm/tlb-r4k.c                | 2 +-
 arch/powerpc/mm/pgtable_64.c          | 6 +++---
 arch/x86/mm/pgtable.c                 | 4 ++--
 mm/gup.c                              | 4 ++--
 mm/hmm.c                              | 2 +-
 mm/memory.c                           | 2 +-
 10 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index e7aecbef75c9..9e3c44f0aea2 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -190,7 +190,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
 #define pmd_dirty(pmd)		(pmd_isset((pmd), L_PMD_SECT_DIRTY))
 
 #define pmd_hugewillfault(pmd)	(!pmd_young(pmd) || !pmd_write(pmd))
-#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+#define pmd_thp_or_huge(pmd)	(pmd_leaf(pmd) || pmd_trans_huge(pmd))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define pmd_trans_huge(pmd)	(pmd_val(pmd) && !pmd_table(pmd))
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 14d24c357c7a..c4efa47fed5f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -512,7 +512,7 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 	return pmd;
 }
 
-#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+#define pmd_thp_or_huge(pmd)	(pmd_leaf(pmd) || pmd_trans_huge(pmd))
 
 #define pmd_write(pmd)		pte_write(pmd_pte(pmd))
 
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 1234bbaef5bf..f494fc31201f 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -321,7 +321,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	if (sz != PUD_SIZE && pud_none(pud))
 		return NULL;
 	/* hugepage or swap? */
-	if (pud_huge(pud) || !pud_present(pud))
+	if (pud_leaf(pud) || !pud_present(pud))
 		return (pte_t *)pudp;
 	/* table; check the next level */
 
@@ -333,7 +333,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	if (!(sz == PMD_SIZE || sz == CONT_PMD_SIZE) &&
 	    pmd_none(pmd))
 		return NULL;
-	if (pmd_huge(pmd) || !pmd_present(pmd))
+	if (pmd_leaf(pmd) || !pmd_present(pmd))
 		return (pte_t *)pmdp;
 
 	if (sz == CONT_PTE_SIZE)
diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
index 1e76fcb83093..a4e78e74aa21 100644
--- a/arch/loongarch/mm/hugetlbpage.c
+++ b/arch/loongarch/mm/hugetlbpage.c
@@ -64,7 +64,7 @@ uint64_t pmd_to_entrylo(unsigned long pmd_val)
 {
 	uint64_t val;
 	/* PMD as PTE. Must be huge page */
-	if (!pmd_huge(__pmd(pmd_val)))
+	if (!pmd_leaf(__pmd(pmd_val)))
 		panic("%s", __func__);
 
 	val = pmd_val ^ _PAGE_HUGE;
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 4106084e57d7..76f3b9c0a9f0 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -326,7 +326,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	idx = read_c0_index();
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 	/* this could be a huge page  */
-	if (pmd_huge(*pmdp)) {
+	if (pmd_leaf(*pmdp)) {
 		unsigned long lo;
 		write_c0_pagemask(PM_HUGE_MASK);
 		ptep = (pte_t *)pmdp;
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 9b99113cb51a..6621cfc3baf8 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -102,7 +102,7 @@ struct page *p4d_page(p4d_t p4d)
 {
 	if (p4d_leaf(p4d)) {
 		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
-			VM_WARN_ON(!p4d_huge(p4d));
+			VM_WARN_ON(!p4d_leaf(p4d));
 		return pte_page(p4d_pte(p4d));
 	}
 	return virt_to_page(p4d_pgtable(p4d));
@@ -113,7 +113,7 @@ struct page *pud_page(pud_t pud)
 {
 	if (pud_leaf(pud)) {
 		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
-			VM_WARN_ON(!pud_huge(pud));
+			VM_WARN_ON(!pud_leaf(pud));
 		return pte_page(pud_pte(pud));
 	}
 	return virt_to_page(pud_pgtable(pud));
@@ -132,7 +132,7 @@ struct page *pmd_page(pmd_t pmd)
 		 * enabled so these checks can't be used.
 		 */
 		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
-			VM_WARN_ON(!(pmd_leaf(pmd) || pmd_huge(pmd)));
+			VM_WARN_ON(!pmd_leaf(pmd));
 		return pte_page(pmd_pte(pmd));
 	}
 	return virt_to_page(pmd_page_vaddr(pmd));
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ff690ddc2334..d74f0814e086 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -731,7 +731,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 		return 0;
 
 	/* Bail out if we are we on a populated non-leaf entry: */
-	if (pud_present(*pud) && !pud_huge(*pud))
+	if (pud_present(*pud) && !pud_leaf(*pud))
 		return 0;
 
 	set_pte((pte_t *)pud, pfn_pte(
@@ -760,7 +760,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 	}
 
 	/* Bail out if we are we on a populated non-leaf entry: */
-	if (pmd_present(*pmd) && !pmd_huge(*pmd))
+	if (pmd_present(*pmd) && !pmd_leaf(*pmd))
 		return 0;
 
 	set_pte((pte_t *)pmd, pfn_pte(
diff --git a/mm/gup.c b/mm/gup.c
index e2415e9789bc..8e04a04ef138 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -778,7 +778,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4d = READ_ONCE(*p4dp);
 	if (!p4d_present(p4d))
 		return no_page_table(vma, flags);
-	BUILD_BUG_ON(p4d_huge(p4d));
+	BUILD_BUG_ON(p4d_leaf(p4d));
 	if (unlikely(p4d_bad(p4d)))
 		return no_page_table(vma, flags);
 
@@ -3070,7 +3070,7 @@ static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned lo
 		next = p4d_addr_end(addr, end);
 		if (!p4d_present(p4d))
 			return 0;
-		BUILD_BUG_ON(p4d_huge(p4d));
+		BUILD_BUG_ON(p4d_leaf(p4d));
 		if (unlikely(is_hugepd(__hugepd(p4d_val(p4d))))) {
 			if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
 					 P4D_SHIFT, next, flags, pages, nr))
diff --git a/mm/hmm.c b/mm/hmm.c
index c95b9ec5d95f..93aebd9cc130 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -429,7 +429,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 	}
 
-	if (pud_huge(pud) && pud_devmap(pud)) {
+	if (pud_leaf(pud) && pud_devmap(pud)) {
 		unsigned long i, npages, pfn;
 		unsigned int required_fault;
 		unsigned long *hmm_pfns;
diff --git a/mm/memory.c b/mm/memory.c
index 904f70b99498..baee777dcd2d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2765,7 +2765,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	unsigned long next;
 	int err = 0;
 
-	BUG_ON(pud_huge(*pud));
+	BUG_ON(pud_leaf(*pud));
 
 	if (create) {
 		pmd = pmd_alloc_track(mm, pud, addr, mask);