From patchwork Mon Feb 17 14:04:14 2025
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13977895
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao, Paul Walmsley, Palmer Dabbelt, Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Gerald Schaefer, "David S. Miller", Andreas Larsson, Arnd Bergmann, Muchun Song, Andrew Morton, Uladzislau Rezki, Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual, Dev Jain, Kevin Brodsky, Alexandre Ghiti
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH v2 1/4] mm: hugetlb: Add huge page size param to huge_ptep_get_and_clear()
Date: Mon, 17 Feb 2025 14:04:14 +0000
Message-ID: <20250217140419.1702389-2-ryan.roberts@arm.com>
In-Reply-To: <20250217140419.1702389-1-ryan.roberts@arm.com>
References: <20250217140419.1702389-1-ryan.roberts@arm.com>
In order to fix a bug, arm64 needs to be told the size of the huge page for
which the huge_pte is being set in huge_ptep_get_and_clear(). Provide for this
by adding an `unsigned long sz` parameter to the function. This follows the
same pattern as huge_pte_clear() and set_huge_pte_at().

This commit makes the required interface modifications to the core mm as well
as all arches that implement this function (arm64, loongarch, mips, parisc,
powerpc, riscv, s390, sparc). The actual arm64 bug will be fixed in a separate
commit.

Cc: stable@vger.kernel.org
Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Ryan Roberts
Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
Acked-by: David Hildenbrand
---
 arch/arm64/include/asm/hugetlb.h     |  4 ++--
 arch/arm64/mm/hugetlbpage.c          |  8 +++++---
 arch/loongarch/include/asm/hugetlb.h |  6 ++++--
 arch/mips/include/asm/hugetlb.h      |  6 ++++--
 arch/parisc/include/asm/hugetlb.h    |  2 +-
 arch/parisc/mm/hugetlbpage.c         |  2 +-
 arch/powerpc/include/asm/hugetlb.h   |  6 ++++--
 arch/riscv/include/asm/hugetlb.h     |  3 ++-
 arch/riscv/mm/hugetlbpage.c          |  2 +-
 arch/s390/include/asm/hugetlb.h      | 12 ++++++++----
 arch/s390/mm/hugetlbpage.c           | 10 ++++++++--
 arch/sparc/include/asm/hugetlb.h     |  2 +-
 arch/sparc/mm/hugetlbpage.c          |  2 +-
 include/asm-generic/hugetlb.h        |  2 +-
 include/linux/hugetlb.h              |  4 +++-
 mm/hugetlb.c                         |  4 ++--
 16 files changed, 48 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index c6dff3e69539..03db9cb21ace 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -42,8 +42,8 @@ extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 				      unsigned long addr, pte_t *ptep,
 				      pte_t pte, int dirty);
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
-extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-				     unsigned long addr, pte_t *ptep);
+extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+				     pte_t *ptep, unsigned long sz);
 #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
 extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
 				    unsigned long addr, pte_t *ptep);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 98a2a0e64e25..06db4649af91 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -396,8 +396,8 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 		__pte_clear(mm, addr, ptep);
 }
 
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep)
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, unsigned long sz)
 {
 	int ncontig;
 	size_t pgsize;
@@ -549,6 +549,8 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 
 pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
+	unsigned long psize = huge_page_size(hstate_vma(vma));
+
 	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
@@ -558,7 +560,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
 		if (pte_user_exec(__ptep_get(ptep)))
 			return huge_ptep_clear_flush(vma, addr, ptep);
 	}
-	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
 }
 
 void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
index c8e4057734d0..4dc4b3e04225 100644
--- a/arch/loongarch/include/asm/hugetlb.h
+++ b/arch/loongarch/include/asm/hugetlb.h
@@ -36,7 +36,8 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	pte_t clear;
 	pte_t pte = ptep_get(ptep);
@@ -51,8 +52,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));
 
-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_tlb_page(vma, addr);
 	return pte;
 }
diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index d0a86ce83de9..fbc71ddcf0f6 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -27,7 +27,8 @@ static inline int prepare_hugepage_range(struct file *file,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	pte_t clear;
 	pte_t pte = *ptep;
@@ -42,13 +43,14 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));
 
 	/*
 	 * clear the huge pte entry firstly, so that the other smp threads will
 	 * not get old pte entry after finishing flush_tlb_page and before
 	 * setting new huge pte entry
 	 */
-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_tlb_page(vma, addr);
 	return pte;
 }
diff --git a/arch/parisc/include/asm/hugetlb.h b/arch/parisc/include/asm/hugetlb.h
index 5b3a5429f71b..21e9ace17739 100644
--- a/arch/parisc/include/asm/hugetlb.h
+++ b/arch/parisc/include/asm/hugetlb.h
@@ -10,7 +10,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep);
+			      pte_t *ptep, unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
index e9d18cf25b79..a94fe546d434 100644
--- a/arch/parisc/mm/hugetlbpage.c
+++ b/arch/parisc/mm/hugetlbpage.c
@@ -126,7 +126,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	pte_t entry;
 
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index dad2e7980f24..86326587e58d 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -45,7 +45,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
 }
@@ -55,8 +56,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));
 
-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_hugetlb_page(vma, addr);
 	return pte;
 }
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index faf3624d8057..446126497768 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -28,7 +28,8 @@ void set_huge_pte_at(struct mm_struct *mm,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep);
+			      unsigned long addr, pte_t *ptep,
+			      unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 42314f093922..b4a78a4b35cf 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -293,7 +293,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	pte_t orig_pte = ptep_get(ptep);
 	int pte_num;
diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index 7c52acaf9f82..420c74306779 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -26,7 +26,11 @@ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+			      unsigned long addr, pte_t *ptep,
+			      unsigned long sz);
+pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep);
 
 static inline void arch_clear_hugetlb_flags(struct folio *folio)
 {
@@ -48,7 +52,7 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long address, pte_t *ptep)
 {
-	return huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
+	return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
@@ -59,7 +63,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	int changed = !pte_same(huge_ptep_get(vma->vm_mm, addr, ptep), pte);
 	if (changed) {
-		huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+		__huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
 		__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
 	}
 	return changed;
@@ -69,7 +73,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 					   unsigned long addr, pte_t *ptep)
 {
-	pte_t pte = huge_ptep_get_and_clear(mm, addr, ptep);
+	pte_t pte = __huge_ptep_get_and_clear(mm, addr, ptep);
 	__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte));
 }
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index d9ce199953de..52ee8e854195 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -188,8 +188,8 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 	return __rste_to_pte(pte_val(*ptep));
 }
 
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep)
+pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep)
 {
 	pte_t pte = huge_ptep_get(mm, addr, ptep);
 	pmd_t *pmdp = (pmd_t *) ptep;
@@ -202,6 +202,12 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+			      unsigned long addr, pte_t *ptep, unsigned long sz)
+{
+	return __huge_ptep_get_and_clear(mm, addr, ptep);
+}
+
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
index c714ca6a05aa..e7a9cdd498dc 100644
--- a/arch/sparc/include/asm/hugetlb.h
+++ b/arch/sparc/include/asm/hugetlb.h
@@ -20,7 +20,7 @@ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep);
+			      pte_t *ptep, unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index eee601a0d2cf..80504148d8a5 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -260,7 +260,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 }
 
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	unsigned int i, nptes, orig_shift, shift;
 	unsigned long size;
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index f42133dae68e..2afc95bf1655 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -90,7 +90,7 @@ static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 
 #ifndef __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-		unsigned long addr, pte_t *ptep)
+		unsigned long addr, pte_t *ptep, unsigned long sz)
 {
 	return ptep_get_and_clear(mm, addr, ptep);
 }
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ec8c0ccc8f95..bf5f7256bd28 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1004,7 +1004,9 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
 static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
 						unsigned long addr, pte_t *ptep)
 {
-	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	unsigned long psize = huge_page_size(hstate_vma(vma));
+
+	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
 }
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 65068671e460..de9d49e521c1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5447,7 +5447,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
 	if (src_ptl != dst_ptl)
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 
-	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
+	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte, sz);
 
 	if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
 		huge_pte_clear(mm, new_addr, dst_pte, sz);
@@ -5622,7 +5622,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
 		}
 
-		pte = huge_ptep_get_and_clear(mm, address, ptep);
+		pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
 		tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
 		if (huge_pte_dirty(pte))
 			set_page_dirty(page);
From patchwork Mon Feb 17 14:04:15 2025
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13977896
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: [PATCH v2 2/4] arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
Date: Mon, 17 Feb 2025 14:04:15 +0000
Message-ID: <20250217140419.1702389-3-ryan.roberts@arm.com>
In-Reply-To: <20250217140419.1702389-1-ryan.roberts@arm.com>
References: <20250217140419.1702389-1-ryan.roberts@arm.com>

arm64 supports multiple huge_pte sizes. Some of the sizes are covered by a
single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some are
covered by multiple ptes at a particular level (CONT_PTE_SIZE, CONT_PMD_SIZE).
So the function has to figure out the size from the huge_pte pointer. This was
previously done by walking the pgtable to determine the level and by using the
PTE_CONT bit to determine the number of ptes at the level.

But the PTE_CONT bit is only valid when the pte is present. For non-present
pte values (e.g. markers, migration entries), the previous implementation was
therefore erroneously determining the size. There is at least one known caller
in core-mm, move_huge_pte(), which may call huge_ptep_get_and_clear() for a
non-present pte. So we must be robust to this case. Additionally the "regular"
ptep_get_and_clear() is robust to being called for non-present ptes, so it
makes sense to follow the same behaviour.

Fix this by using the new sz parameter which is now provided to the function.
Additionally, when clearing each pte in a contig range, don't gather the
access and dirty bits if the pte is not present.

An alternative approach that would not require API changes would be to store
the PTE_CONT bit in a spare bit in the swap entry pte for the non-present
case. But it felt cleaner to follow other APIs' lead and just pass in the
size.

As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap entry
offset field (layout of non-present pte). Since hugetlb is never swapped to
disk, this field will only be populated for markers, which always set this bit
to 0, and hwpoison swap entries, which set the offset field to a PFN; so it
would only ever be 1 for a 52-bit PVA system where memory in that high half
was poisoned (I think!). So in practice, this bit would almost always be zero
for non-present ptes and we would only clear the first entry if it was
actually a contiguous block. That's probably a less severe symptom than if it
was always interpreted as 1 and cleared out potentially-present neighboring
PTEs.
Cc: stable@vger.kernel.org
Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Ryan Roberts
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/hugetlbpage.c | 40 ++++++++++++++++----------------------
 1 file changed, 17 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 06db4649af91..614b2feddba2 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -163,24 +163,23 @@ static pte_t get_clear_contig(struct mm_struct *mm,
 			      unsigned long pgsize,
 			      unsigned long ncontig)
 {
-	pte_t orig_pte = __ptep_get(ptep);
-	unsigned long i;
-
-	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-		pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
-
-		/*
-		 * If HW_AFDBM is enabled, then the HW could turn on
-		 * the dirty or accessed bit for any page in the set,
-		 * so check them all.
-		 */
-		if (pte_dirty(pte))
-			orig_pte = pte_mkdirty(orig_pte);
-
-		if (pte_young(pte))
-			orig_pte = pte_mkyoung(orig_pte);
+	pte_t pte, tmp_pte;
+	bool present;
+
+	pte = __ptep_get_and_clear(mm, addr, ptep);
+	present = pte_present(pte);
+	while (--ncontig) {
+		ptep++;
+		addr += pgsize;
+		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
+		if (present) {
+			if (pte_dirty(tmp_pte))
+				pte = pte_mkdirty(pte);
+			if (pte_young(tmp_pte))
+				pte = pte_mkyoung(pte);
+		}
 	}
-	return orig_pte;
+	return pte;
 }
 
 static pte_t get_clear_contig_flush(struct mm_struct *mm,
@@ -401,13 +400,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 {
 	int ncontig;
 	size_t pgsize;
-	pte_t orig_pte = __ptep_get(ptep);
-
-	if (!pte_cont(orig_pte))
-		return __ptep_get_and_clear(mm, addr, ptep);
-
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 
+	ncontig = num_contig_ptes(sz, &pgsize);
 	return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 }
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13977897
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: [PATCH v2 3/4] arm64: hugetlb: Fix flush_hugetlb_tlb_range() invalidation level
Date: Mon, 17 Feb 2025 14:04:16 +0000
Message-ID: <20250217140419.1702389-4-ryan.roberts@arm.com>
In-Reply-To: <20250217140419.1702389-1-ryan.roberts@arm.com>
References: <20250217140419.1702389-1-ryan.roberts@arm.com>
mX1k4hkSedaed5XQyful+QGfdqhXxTuymWfBfNijEjzD3fAMwlBUwIi/D1GPHkrMy1Hrl/Wx+0oAo+X2oiYtSDN1Jnt6Sp37JifMsaCkTyXlI33XVJVgd50THIIjXeulx8S+MvBk60YNCOWniPGstdebXt8UtwH4pSebun8Z/GQk2SMhU27NAXfSxXQZAltYdarOa+1WPenE2SK2bbSYWZZ6FMYUppf2KSuZK X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in flush_hugetlb_tlb_range() was not updated. So as it stands, when trying to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will spuriously try to invalidate at level 0 on LPA2-enabled systems. Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are at it, explicitly use the correct stride and level for CONT_PMD_SIZE and CONT_PTE_SIZE, which should provide a minor optimization. 
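Not part of the patch itself, but the stride-to-level mapping the commit message describes can be illustrated with a minimal, self-contained sketch. The names and size constants below are hypothetical stand-ins (hard-coded for a 4K-granule arm64 configuration: 16 contiguous PTEs per cont-PTE, 16 contiguous PMDs per cont-PMD); the real kernel code uses PUD_SIZE, CONT_PMD_SIZE, PMD_SIZE, CONT_PTE_SIZE and TLBI_TTL_UNKNOWN.

```c
#include <assert.h>

/* Illustrative size constants, assuming a 4K-granule arm64 kernel. */
#define SZ_CONT_PTE	(1UL << 16)	/* 16 contiguous 4K PTEs = 64K   */
#define SZ_PMD		(1UL << 21)	/* PMD_SIZE = 2M                 */
#define SZ_CONT_PMD	(1UL << 25)	/* 16 contiguous PMDs = 32M      */
#define SZ_PUD		(1UL << 30)	/* PUD_SIZE = 1G                 */
#define TTL_UNKNOWN	2147483647	/* TLBI_TTL_UNKNOWN == INT_MAX   */

/*
 * Map a hugetlb stride to the translation-table level hint passed to the
 * TLB invalidation: cont-PMD and PMD mappings both live in level-2 entries,
 * cont-PTE mappings in level-3 entries, and anything unrecognised must use
 * the "unknown" sentinel rather than 0 (the pre-fix bug).
 */
static int tlbi_level_for_stride(unsigned long stride)
{
	switch (stride) {
	case SZ_PUD:
		return 1;		/* block mapping at level 1 */
	case SZ_CONT_PMD:
	case SZ_PMD:
		return 2;		/* (cont-)PMD entries are level 2 */
	case SZ_CONT_PTE:
		return 3;		/* cont-PTE entries are level 3 */
	default:
		return TTL_UNKNOWN;	/* level hint unavailable, not 0 */
	}
}
```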
Cc: stable@vger.kernel.org
Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
Signed-off-by: Ryan Roberts
Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 03db9cb21ace..07fbf5bf85a7 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -76,12 +76,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 {
 	unsigned long stride = huge_page_size(hstate_vma(vma));
 
-	if (stride == PMD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 2);
-	else if (stride == PUD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 1);
-	else
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+	switch (stride) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		break;
+#endif
+	case CONT_PMD_SIZE:
+	case PMD_SIZE:
+		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		break;
+	case CONT_PTE_SIZE:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		break;
+	default:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+	}
 }
 
 #endif /* __ASM_HUGETLB_H */

From patchwork Mon Feb 17 14:04:17 2025
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13977898
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
    Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
    Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, Naveen N Rao, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Christian Borntraeger, Sven Schnelle, Gerald Schaefer,
    "David S. Miller", Andreas Larsson, Arnd Bergmann, Muchun Song,
    Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
    David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland,
    Anshuman Khandual, Dev Jain, Kevin Brodsky, Alexandre Ghiti
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    stable@vger.kernel.org
Subject: [PATCH v2 4/4] mm: Don't skip arch_sync_kernel_mappings() in error paths
Date: Mon, 17 Feb 2025 14:04:17 +0000
Message-ID: <20250217140419.1702389-5-ryan.roberts@arm.com>
In-Reply-To: <20250217140419.1702389-1-ryan.roberts@arm.com>
References: <20250217140419.1702389-1-ryan.roberts@arm.com>
Fix callers that previously skipped calling arch_sync_kernel_mappings() if
an error occurred during a pgtable update. The call is still required to
sync any pgtable updates that may have occurred prior to hitting the error
condition. These are theoretical bugs discovered during code review.
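The pattern this patch enforces can be shown in a minimal, self-contained sketch (all names below, apply_range(), update_entry() and synced_upto, are hypothetical stand-ins for __apply_to_page_range(), its callbacks, and arch_sync_kernel_mappings()): on failure, break out of the loop rather than returning directly, so the sync step still covers whatever was modified before the error.

```c
#include <assert.h>

static int synced_upto;	/* stands in for arch_sync_kernel_mappings() */

/* Fake pgtable update: fail on the 4th entry with -EINVAL (-22). */
static int update_entry(int i)
{
	return (i == 3) ? -22 : 0;
}

static int apply_range(int n)
{
	int err = 0, modified = 0, i;

	for (i = 0; i < n; i++) {
		err = update_entry(i);
		if (err)
			break;	/* NOT "return err": sync must still run */
		modified++;
	}

	/* Sync anything modified before the error, then report the error. */
	if (modified)
		synced_upto = modified;

	return err;
}
```

A direct `return err` inside the loop would be the buggy pre-fix shape: earlier modifications would be left unsynced.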
Cc: stable@vger.kernel.org
Fixes: 2ba3e6947aed ("mm/vmalloc: track which page-table levels were modified")
Fixes: 0c95cba49255 ("mm: apply_to_pte_range warn and fail if a large pte is encountered")
Reviewed-by: Anshuman Khandual
Signed-off-by: Ryan Roberts
Reviewed-by: Catalin Marinas
---
 mm/memory.c  | 6 ++++--
 mm/vmalloc.c | 4 ++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 539c0f7c6d54..a15f7dd500ea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3040,8 +3040,10 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(*pgd) && !create)
 			continue;
-		if (WARN_ON_ONCE(pgd_leaf(*pgd)))
-			return -EINVAL;
+		if (WARN_ON_ONCE(pgd_leaf(*pgd))) {
+			err = -EINVAL;
+			break;
+		}
 		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
 			if (!create)
 				continue;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a6e7acebe9ad..61981ee1c9d2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -586,13 +586,13 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
 		mask |= PGTBL_PGD_MODIFIED;
 		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
-			return err;
+			break;
 	} while (pgd++, addr = next, addr != end);
 
 	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
 		arch_sync_kernel_mappings(start, end);
 
-	return 0;
+	return err;
 }
 
 /*