From patchwork Mon Mar 25 14:55:54 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602367
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 1/8] mm: Provide pagesize to pmd_populate()
Date: Mon, 25 Mar 2024 15:55:54 +0100
Message-ID: <54d78f1b7e7f1c671e40b7c0c637380bcb834326.1711377230.git.christophe.leroy@csgroup.eu>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Unlike many architectures, the powerpc 8xx hardware tablewalk requires a
two-level process for all page sizes, although the second level only has
one entry when the page size is 8M.

To fit the Linux page table topology without requiring a special page
directory layout like hugepd, the page entry will be replicated 1024
times in the standard page table. However, for large pages it is
necessary to set bits in the level-1 (PMD) entry. Currently, for 512k
pages the flag is kept in the PTE and inserted into the PMD entry at TLB
miss exception time. That is necessary because pages of different sizes
can coexist in the same page table. However, the 12 PTE bits are fully
used and there is no room for an additional page size bit.

For 8M pages there will be only one page per PMD entry, so it is
possible to flag the page size in the PMD entry itself, with the
advantage that the information will already be at the right place for
the hardware.

To do so, add a new helper called pmd_populate_size() which takes the
page size as an additional argument, and modify __pte_alloc() to also
take that argument. pte_alloc() is left unmodified in order to reduce
churn on callers, and a pte_alloc_size() is added for use by
pte_alloc_huge(). When an architecture doesn't provide
pmd_populate_size(), pmd_populate() is used as a fallback.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 include/linux/mm.h | 12 +++++++-----
 mm/filemap.c       |  2 +-
 mm/internal.h      |  2 +-
 mm/memory.c        | 19 ++++++++++++-------
 mm/pgalloc-track.h |  2 +-
 mm/userfaultfd.c   |  4 ++--
 6 files changed, 24 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2c0910bc3e4a..6c5c15955d4e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2801,8 +2801,8 @@ static inline void mm_inc_nr_ptes(struct mm_struct *mm) {}
 static inline void mm_dec_nr_ptes(struct mm_struct *mm) {}
 #endif
 
-int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
-int __pte_alloc_kernel(pmd_t *pmd);
+int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long sz);
+int __pte_alloc_kernel(pmd_t *pmd, unsigned long sz);
 
 #if defined(CONFIG_MMU)
@@ -2987,7 +2987,8 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(pte);					\
 } while (0)
 
-#define pte_alloc(mm, pmd) (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))
+#define pte_alloc_size(mm, pmd, sz) (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd, sz))
+#define pte_alloc(mm, pmd) pte_alloc_size(mm, pmd, PAGE_SIZE)
 
 #define pte_alloc_map(mm, pmd, address)			\
 	(pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))
@@ -2996,9 +2997,10 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 	(pte_alloc(mm, pmd) ?			\
 		 NULL : pte_offset_map_lock(mm, pmd, address, ptlp))
 
-#define pte_alloc_kernel(pmd, address)			\
-	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
+#define pte_alloc_kernel_size(pmd, address, sz)		\
+	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd, sz))? \
 		NULL: pte_offset_kernel(pmd, address))
+#define pte_alloc_kernel(pmd, address)	pte_alloc_kernel_size(pmd, address, PAGE_SIZE)
 
 #if USE_SPLIT_PMD_PTLOCKS
diff --git a/mm/filemap.c b/mm/filemap.c
index 7437b2bd75c1..b013000ea84f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3428,7 +3428,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
 	}
 
 	if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
-		pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);
+		pmd_install(mm, vmf->pmd, &vmf->prealloc_pte, PAGE_SIZE);
 
 	return false;
 }
diff --git a/mm/internal.h b/mm/internal.h
index 7e486f2c502c..b81c3ca59f45 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -206,7 +206,7 @@ void folio_activate(struct folio *folio);
 void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
 		   struct vm_area_struct *start_vma, unsigned long floor,
 		   unsigned long ceiling, bool mm_wr_locked);
-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte, unsigned long sz);
 
 struct zap_details;
 void unmap_page_range(struct mmu_gather *tlb,
diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..c846bb75746b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -409,7 +409,12 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
 	} while (vma);
 }
 
-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
+#ifndef pmd_populate_size
+#define pmd_populate_size(mm, pmdp, pte, sz) pmd_populate(mm, pmdp, pte)
+#define pmd_populate_kernel_size(mm, pmdp, pte, sz) pmd_populate_kernel(mm, pmdp, pte)
+#endif
+
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte, unsigned long sz)
 {
 	spinlock_t *ptl = pmd_lock(mm, pmd);
 
@@ -429,25 +434,25 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 		 * smp_rmb() barriers in page table walking code.
 		 */
 		smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
-		pmd_populate(mm, pmd, *pte);
+		pmd_populate_size(mm, pmd, *pte, sz);
 		*pte = NULL;
 	}
 	spin_unlock(ptl);
 }
 
-int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
+int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long sz)
 {
 	pgtable_t new = pte_alloc_one(mm);
 	if (!new)
 		return -ENOMEM;
 
-	pmd_install(mm, pmd, &new);
+	pmd_install(mm, pmd, &new, sz);
 	if (new)
 		pte_free(mm, new);
 	return 0;
 }
 
-int __pte_alloc_kernel(pmd_t *pmd)
+int __pte_alloc_kernel(pmd_t *pmd, unsigned long sz)
 {
 	pte_t *new = pte_alloc_one_kernel(&init_mm);
 	if (!new)
@@ -456,7 +461,7 @@ int __pte_alloc_kernel(pmd_t *pmd)
 	spin_lock(&init_mm.page_table_lock);
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
 		smp_wmb(); /* See comment in pmd_install() */
-		pmd_populate_kernel(&init_mm, pmd, new);
+		pmd_populate_kernel_size(&init_mm, pmd, new, sz);
 		new = NULL;
 	}
 	spin_unlock(&init_mm.page_table_lock);
@@ -4738,7 +4743,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		}
 
 		if (vmf->prealloc_pte)
-			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
+			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte, PAGE_SIZE);
 		else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
 			return VM_FAULT_OOM;
 	}
diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h
index e9e879de8649..90e37de7ab77 100644
--- a/mm/pgalloc-track.h
+++ b/mm/pgalloc-track.h
@@ -45,7 +45,7 @@ static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud,
 
 #define pte_alloc_kernel_track(pmd, address, mask)			\
 	((unlikely(pmd_none(*(pmd))) &&					\
-	  (__pte_alloc_kernel(pmd) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
+	  (__pte_alloc_kernel(pmd, PAGE_SIZE) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
 		NULL: pte_offset_kernel(pmd, address))
 
 #endif /* _LINUX_PGALLOC_TRACK_H */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 712160cd41ec..9baf507ce193 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -764,7 +764,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			break;
 		}
 		if (unlikely(pmd_none(dst_pmdval)) &&
-		    unlikely(__pte_alloc(dst_mm, dst_pmd))) {
+		    unlikely(__pte_alloc(dst_mm, dst_pmd, PAGE_SIZE))) {
 			err = -ENOMEM;
 			break;
 		}
@@ -1686,7 +1686,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			err = -ENOENT;
 			break;
 		}
-		if (unlikely(__pte_alloc(mm, src_pmd))) {
+		if (unlikely(__pte_alloc(mm, src_pmd, PAGE_SIZE))) {
 			err = -ENOMEM;
 			break;
 		}

From patchwork Mon Mar 25 14:55:55 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602368
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 2/8] mm: Provide page size to pte_alloc_huge()
Date: Mon, 25 Mar 2024 15:55:55 +0100
Message-ID: <32f0c0802a202cc738f3f21682f53d76f01fc70e.1711377230.git.christophe.leroy@csgroup.eu>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
In order to be able to flag the PMD entry with _PMD_HUGE_8M on powerpc
8xx, provide the page size to pte_alloc_huge() and use it through the
newly introduced pte_alloc_size().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/arm64/mm/hugetlbpage.c   | 2 +-
 arch/parisc/mm/hugetlbpage.c  | 2 +-
 arch/powerpc/mm/hugetlbpage.c | 2 +-
 arch/riscv/mm/hugetlbpage.c   | 2 +-
 arch/sh/mm/hugetlbpage.c      | 2 +-
 arch/sparc/mm/hugetlbpage.c   | 2 +-
 include/linux/hugetlb.h       | 4 ++--
 7 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0f0e10bb0a95..71161c655fd6 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -289,7 +289,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			return NULL;
 
 		WARN_ON(addr & (sz - 1));
-		ptep = pte_alloc_huge(mm, pmdp, addr);
+		ptep = pte_alloc_huge(mm, pmdp, addr, sz);
 	} else if (sz == PMD_SIZE) {
 		if (want_pmd_share(vma, addr) && pud_none(READ_ONCE(*pudp)))
 			ptep = huge_pmd_share(mm, vma, addr, pudp);
diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
index a9f7e21f6656..2f4c6b440710 100644
--- a/arch/parisc/mm/hugetlbpage.c
+++ b/arch/parisc/mm/hugetlbpage.c
@@ -66,7 +66,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (pud) {
 		pmd = pmd_alloc(mm, pud, addr);
 		if (pmd)
-			pte = pte_alloc_huge(mm, pmd, addr);
+			pte = pte_alloc_huge(mm, pmd, addr, sz);
 	}
 	return pte;
 }
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 594a4b7b2ca2..66ac56b26007 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -183,7 +183,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		return NULL;
 
 	if (IS_ENABLED(CONFIG_PPC_8xx) && pshift < PMD_SHIFT)
-		return pte_alloc_huge(mm, (pmd_t *)hpdp, addr);
+		return pte_alloc_huge(mm, (pmd_t *)hpdp, addr, sz);
 
 	BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
 
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 5ef2a6891158..dc77a58c6321 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -67,7 +67,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	for_each_napot_order(order) {
 		if (napot_cont_size(order) == sz) {
-			pte = pte_alloc_huge(mm, pmd, addr & napot_cont_mask(order));
+			pte = pte_alloc_huge(mm, pmd, addr & napot_cont_mask(order), sz);
 			break;
 		}
 	}
diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
index 6cb0ad73dbb9..26579429e5ed 100644
--- a/arch/sh/mm/hugetlbpage.c
+++ b/arch/sh/mm/hugetlbpage.c
@@ -38,7 +38,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (pud) {
 			pmd = pmd_alloc(mm, pud, addr);
 			if (pmd)
-				pte = pte_alloc_huge(mm, pmd, addr);
+				pte = pte_alloc_huge(mm, pmd, addr, sz);
 		}
 	}
 }
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index b432500c13a5..5a342199e837 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -298,7 +298,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		return NULL;
 	if (sz >= PMD_SIZE)
 		return (pte_t *)pmd;
-	return pte_alloc_huge(mm, pmd, addr);
+	return pte_alloc_huge(mm, pmd, addr, sz);
 }
 
 pte_t *huge_pte_offset(struct mm_struct *mm,
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 77b30a8c6076..d9c5d9daadc5 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -193,9 +193,9 @@ static inline pte_t *pte_offset_huge(pmd_t *pmd, unsigned long address)
 	return pte_offset_kernel(pmd, address);
 }
 static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
-		unsigned long address)
+		unsigned long address, unsigned long sz)
 {
-	return pte_alloc(mm, pmd) ? NULL : pte_offset_huge(pmd, address);
+	return pte_alloc_size(mm, pmd, sz) ? NULL : pte_offset_huge(pmd, address);
 }
 #endif

From patchwork Mon Mar 25 14:55:56 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602369
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 3/8] mm: Provide pmd to pte_leaf_size()
Date: Mon, 25 Mar 2024 15:55:56 +0100
Message-ID: <32921d5b2767fe7d6f16ec09afe1bf8a571cb538.1711377230.git.christophe.leroy@csgroup.eu>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
uD8JJ89TkrFMRop/DRnMnybBLft0W1atS65PNXXboYRj47dw+qyFHZEW4oZY8O8acCc+lSy4gLTLZxLC6zJP5lV/2/sPWBFRzo8ri6Dx+gbx5uxWI029FznpmOT/Y67MknLOh8qlIv1CDDb75k7KlPIVYusrMrKC/bD+A0W/bUc9o7Sw= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On powerpc 8xx, when a page is 8M size, the information is in the PMD entry. So provide it to pte_leaf_size(). Signed-off-by: Christophe Leroy --- arch/arm64/include/asm/pgtable.h | 2 +- arch/powerpc/include/asm/nohash/32/pte-8xx.h | 2 +- arch/riscv/include/asm/pgtable.h | 2 +- arch/sparc/include/asm/pgtable_64.h | 2 +- arch/sparc/mm/hugetlbpage.c | 2 +- include/linux/pgtable.h | 2 +- kernel/events/core.c | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index afdd56d26ad7..57c40f2498ab 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -624,7 +624,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, #define pmd_bad(pmd) (!pmd_table(pmd)) #define pmd_leaf_size(pmd) (pmd_cont(pmd) ? CONT_PMD_SIZE : PMD_SIZE) -#define pte_leaf_size(pte) (pte_cont(pte) ? CONT_PTE_SIZE : PAGE_SIZE) +#define pte_leaf_size(pmd, pte) (pte_cont(pte) ? 
CONT_PTE_SIZE : PAGE_SIZE) #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3 static inline bool pud_sect(pud_t pud) { return false; } diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h index 137dc3c84e45..07df6b664861 100644 --- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h +++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h @@ -151,7 +151,7 @@ static inline unsigned long pgd_leaf_size(pgd_t pgd) #define pgd_leaf_size pgd_leaf_size -static inline unsigned long pte_leaf_size(pte_t pte) +static inline unsigned long pte_leaf_size(pmd_t pmd, pte_t pte) { pte_basic_t val = pte_val(pte); diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 20242402fc11..45fa27810f25 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -439,7 +439,7 @@ static inline pte_t pte_mkhuge(pte_t pte) return pte; } -#define pte_leaf_size(pte) (pte_napot(pte) ? \ +#define pte_leaf_size(pmd, pte) (pte_napot(pte) ? 
\ napot_cont_size(napot_cont_order(pte)) :\ PAGE_SIZE) diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 4d1bafaba942..67063af2ff8f 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -1175,7 +1175,7 @@ extern unsigned long pud_leaf_size(pud_t pud); extern unsigned long pmd_leaf_size(pmd_t pmd); #define pte_leaf_size pte_leaf_size -extern unsigned long pte_leaf_size(pte_t pte); +extern unsigned long pte_leaf_size(pmd_t pmd, pte_t pte); #endif /* CONFIG_HUGETLB_PAGE */ diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c index 5a342199e837..60c845a15bee 100644 --- a/arch/sparc/mm/hugetlbpage.c +++ b/arch/sparc/mm/hugetlbpage.c @@ -276,7 +276,7 @@ static unsigned long huge_tte_to_size(pte_t pte) unsigned long pud_leaf_size(pud_t pud) { return 1UL << tte_to_shift(*(pte_t *)&pud); } unsigned long pmd_leaf_size(pmd_t pmd) { return 1UL << tte_to_shift(*(pte_t *)&pmd); } -unsigned long pte_leaf_size(pte_t pte) { return 1UL << tte_to_shift(pte); } +unsigned long pte_leaf_size(pmd_t pmd, pte_t pte) { return 1UL << tte_to_shift(pte); } pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, unsigned long sz) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 85fc7554cd52..e605a4149fc7 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1802,7 +1802,7 @@ typedef unsigned int pgtbl_mod_mask; #define pmd_leaf_size(x) PMD_SIZE #endif #ifndef pte_leaf_size -#define pte_leaf_size(x) PAGE_SIZE +#define pte_leaf_size(x, y) PAGE_SIZE #endif /* diff --git a/kernel/events/core.c b/kernel/events/core.c index 724e6d7e128f..5c1c083222b2 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7585,7 +7585,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr) pte = ptep_get_lockless(ptep); if (pte_present(pte)) - size = pte_leaf_size(pte); + size = pte_leaf_size(pmd, pte); 
 	pte_unmap(ptep);
 #endif /* CONFIG_HAVE_FAST_GUP */

From patchwork Mon Mar 25 14:55:57 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602370
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 4/8] mm: Provide mm_struct and address to huge_ptep_get()
Date: Mon, 25 Mar 2024 15:55:57 +0100
Message-ID: <1abe6cfaba2ad41a9deb705a4d3de8d1a9b6d5ca.1711377230.git.christophe.leroy@csgroup.eu>

On powerpc 8xx huge_ptep_get() will need to know whether the given ptep
is a PTE entry or a PMD entry. This cannot be known with the PMD entry
itself because there is no easy way to know it from the content of the
entry.

So huge_ptep_get() will need to know either the size of the page or
get the pmd.

In order to be consistent with huge_ptep_get_and_clear(), give mm and
address to huge_ptep_get().
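The shape of this interface change can be sketched outside the kernel with hypothetical, simplified types (the `pte_t` and `mm_struct` below are toy stand-ins for illustration only, not the real architecture-specific kernel definitions): the generic fallback keeps forwarding to ptep_get() and simply ignores the new mm and address arguments, while an architecture such as powerpc 8xx can use them to walk the page tables.

```c
/*
 * Illustration only: hypothetical, simplified stand-ins for the
 * kernel's arch-specific pte_t and mm_struct types.
 */
typedef struct { unsigned long val; } pte_t;
struct mm_struct { int dummy; };

static pte_t ptep_get(pte_t *ptep)
{
	return *ptep;
}

/* Generic fallback before this patch: only the pte pointer is available. */
static pte_t huge_ptep_get_old(pte_t *ptep)
{
	return ptep_get(ptep);
}

/*
 * Generic fallback after this patch: mm and address are passed as well,
 * so an architecture that needs them (powerpc 8xx here) can distinguish
 * a PTE-level entry from a PMD-level one.  The generic version just
 * ignores the extra arguments, so behaviour is unchanged everywhere else.
 */
static pte_t huge_ptep_get_new(struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep)
{
	(void)mm;
	(void)addr;
	return ptep_get(ptep);
}
```

This mirrors the asm-generic change in the diff below: only the signature grows, the generic body stays a plain ptep_get() forward.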
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/arm64/include/asm/hugetlb.h |  2 +-
 fs/hugetlbfs/inode.c             |  2 +-
 fs/proc/task_mmu.c               |  8 +++---
 fs/userfaultfd.c                 |  2 +-
 include/asm-generic/hugetlb.h    |  2 +-
 include/linux/swapops.h          |  2 +-
 mm/damon/vaddr.c                 |  6 ++---
 mm/gup.c                         |  2 +-
 mm/hmm.c                         |  2 +-
 mm/hugetlb.c                     | 46 ++++++++++++++++----------------
 mm/memory-failure.c              |  2 +-
 mm/mempolicy.c                   |  2 +-
 mm/migrate.c                     |  4 +--
 mm/mincore.c                     |  2 +-
 mm/userfaultfd.c                 |  2 +-
 15 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 2ddc33d93b13..1af39a74e791 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -46,7 +46,7 @@ extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
			   pte_t *ptep, unsigned long sz);
 #define __HAVE_ARCH_HUGE_PTEP_GET
-extern pte_t huge_ptep_get(pte_t *ptep);
+extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);

 void __init arm64_hugetlb_cma_reserve(void);

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 6502c7e776d1..ec3ec87d29e7 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -425,7 +425,7 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
 	if (!ptep)
 		return false;

-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(vma->vm_mm, addr, ptep);
 	if (huge_pte_none(pte) || !pte_present(pte))
 		return false;

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 23fbab954c20..b14081bcdafe 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1572,7 +1572,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
 	if (vma->vm_flags & VM_SOFTDIRTY)
 		flags |= PM_SOFT_DIRTY;

-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(walk->mm, addr, ptep);
 	if (pte_present(pte)) {
 		struct page *page = pte_page(pte);

@@ -2256,7 +2256,7 @@ static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
 	if (~p->arg.flags & PM_SCAN_WP_MATCHING) {
 		/* Go the short route when not write-protecting pages. */

-		pte = huge_ptep_get(ptep);
+		pte = huge_ptep_get(walk->mm, start, ptep);
 		categories = p->cur_vma_category | pagemap_hugetlb_category(pte);

 		if (!pagemap_scan_is_interesting_page(categories, p))
@@ -2268,7 +2268,7 @@ static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
 		i_mmap_lock_write(vma->vm_file->f_mapping);
 	ptl = huge_pte_lock(hstate_vma(vma), vma->vm_mm, ptep);

-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(walk->mm, start, ptep);
 	categories = p->cur_vma_category | pagemap_hugetlb_category(pte);

 	if (!pagemap_scan_is_interesting_page(categories, p))
@@ -2663,7 +2663,7 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 static int gather_hugetlb_stats(pte_t *pte, unsigned long hmask,
		unsigned long addr, unsigned long end, struct mm_walk *walk)
 {
-	pte_t huge_pte = huge_ptep_get(pte);
+	pte_t huge_pte = huge_ptep_get(walk->mm, addr, pte);
 	struct numa_maps *md;
 	struct page *page;

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 60dcfafdc11a..177fe1ff14d7 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -256,7 +256,7 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 		goto out;

 	ret = false;
-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(vma->vm_mm, vmf->address, ptep);

 	/*
 	 * Lockless access: we're in a wait_event so it's ok if it
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 6dcf4d576970..594d5905f615 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -144,7 +144,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 #endif

 #ifndef __HAVE_ARCH_HUGE_PTEP_GET
-static inline pte_t huge_ptep_get(pte_t *ptep)
+static inline pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	return ptep_get(ptep);
 }
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 48b700ba1d18..b9a9f5003bf5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -334,7 +334,7 @@ static inline bool is_migration_entry_dirty(swp_entry_t entry)
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
					unsigned long address);
-extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
+extern void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *pte);
 #else  /* CONFIG_MIGRATION */
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 381559e4a1fa..58829baf8b5d 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -339,7 +339,7 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
		struct vm_area_struct *vma, unsigned long addr)
 {
 	bool referenced = false;
-	pte_t entry = huge_ptep_get(pte);
+	pte_t entry = huge_ptep_get(mm, addr, pte);
 	struct folio *folio = pfn_folio(pte_pfn(entry));
 	unsigned long psize = huge_page_size(hstate_vma(vma));

@@ -373,7 +373,7 @@ static int damon_mkold_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pte_t entry;

 	ptl = huge_pte_lock(h, walk->mm, pte);
-	entry = huge_ptep_get(pte);
+	entry = huge_ptep_get(walk->mm, addr, pte);
 	if (!pte_present(entry))
 		goto out;

@@ -509,7 +509,7 @@ static int damon_young_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pte_t entry;

 	ptl = huge_pte_lock(h, walk->mm, pte);
-	entry = huge_ptep_get(pte);
+	entry = huge_ptep_get(walk->mm, addr, pte);
 	if (!pte_present(entry))
 		goto out;

diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..ae3f27fc9cc5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2800,7 +2800,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	if (pte_end < end)
 		end = pte_end;

-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(NULL, addr, ptep);

 	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
 		return 0;
diff --git a/mm/hmm.c b/mm/hmm.c
index 277ddcab4947..91a0b57fcb2e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -485,7 +485,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pte_t entry;

 	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
-	entry = huge_ptep_get(pte);
+	entry = huge_ptep_get(walk->mm, addr, pte);

 	i = (start - range->start) >> PAGE_SHIFT;
 	pfn_req_flags = range->hmm_pfns[i];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 23ef240ba48a..cfe1cf2f30dc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5334,7 +5334,7 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 {
 	pte_t entry;

-	entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
+	entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(vma->vm_mm, address, ptep)));
 	if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
 		update_mmu_cache(vma, address, ptep);
 }
@@ -5442,7 +5442,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		dst_ptl = huge_pte_lock(h, dst, dst_pte);
 		src_ptl = huge_pte_lockptr(h, src, src_pte);
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
-		entry = huge_ptep_get(src_pte);
+		entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
again:
 		if (huge_pte_none(entry)) {
 			/*
@@ -5480,7 +5480,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			set_huge_pte_at(dst, addr, dst_pte,
					make_pte_marker(marker), sz);
 		} else {
-			entry = huge_ptep_get(src_pte);
+			entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
 			pte_folio = page_folio(pte_page(entry));
 			folio_get(pte_folio);

@@ -5522,7 +5522,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
				dst_ptl = huge_pte_lock(h, dst, dst_pte);
				src_ptl = huge_pte_lockptr(h, src, src_pte);
				spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
-				entry = huge_ptep_get(src_pte);
+				entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
				if (!pte_same(src_pte_old, entry)) {
					restore_reserve_on_error(h, dst_vma, addr,
								new_folio);
@@ -5632,7 +5632,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 			new_addr |= last_addr_mask;
 			continue;
 		}
-		if (huge_pte_none(huge_ptep_get(src_pte)))
+		if (huge_pte_none(huge_ptep_get(mm, old_addr, src_pte)))
 			continue;

 		if (huge_pmd_unshare(mm, vma, old_addr, src_pte)) {
@@ -5705,7 +5705,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			continue;
 		}

-		pte = huge_ptep_get(ptep);
+		pte = huge_ptep_get(mm, address, ptep);
 		if (huge_pte_none(pte)) {
 			spin_unlock(ptl);
 			continue;
@@ -5942,7 +5942,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
		       struct vm_fault *vmf)
 {
 	const bool unshare = flags & FAULT_FLAG_UNSHARE;
-	pte_t pte = huge_ptep_get(ptep);
+	pte_t pte = huge_ptep_get(mm, address, ptep);
 	struct hstate *h = hstate_vma(vma);
 	struct folio *old_folio;
 	struct folio *new_folio;
@@ -6055,7 +6055,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 			spin_lock(ptl);
 			ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
 			if (likely(ptep &&
-				   pte_same(huge_ptep_get(ptep), pte)))
+				   pte_same(huge_ptep_get(mm, haddr, ptep), pte)))
				goto retry_avoidcopy;
 			/*
 			 * race occurs while re-acquiring page table
@@ -6093,7 +6093,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 */
 	spin_lock(ptl);
 	ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
+	if (likely(ptep && pte_same(huge_ptep_get(mm, haddr, ptep), pte))) {
 		pte_t newpte = make_huge_pte(vma, &new_folio->page, !unshare);

 		/* Break COW or unshare */
@@ -6193,14 +6193,14 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_fault *vmf,
 * Recheck pte with pgtable lock.  Returns true if pte didn't change, or
 * false if pte changed or is changing.
 */
-static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm,
+static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep, pte_t old_pte)
 {
 	spinlock_t *ptl;
 	bool same;

 	ptl = huge_pte_lock(h, mm, ptep);
-	same = pte_same(huge_ptep_get(ptep), old_pte);
+	same = pte_same(huge_ptep_get(mm, addr, ptep), old_pte);
 	spin_unlock(ptl);

 	return same;
@@ -6265,7 +6265,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * never happen on the page after UFFDIO_COPY has
 			 * correctly installed the page and returned.
 			 */
-			if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+			if (!hugetlb_pte_stable(h, mm, haddr, ptep, old_pte)) {
				ret = 0;
				goto out;
 			}
@@ -6288,7 +6288,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * here.  Before returning error, get ptl and make
 			 * sure there really is no pte entry.
 			 */
-			if (hugetlb_pte_stable(h, mm, ptep, old_pte))
+			if (hugetlb_pte_stable(h, mm, haddr, ptep, old_pte))
				ret = vmf_error(PTR_ERR(folio));
 			else
				ret = 0;
@@ -6338,7 +6338,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			folio_unlock(folio);
 			folio_put(folio);
 			/* See comment in userfaultfd_missing() block above */
-			if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+			if (!hugetlb_pte_stable(h, mm, haddr, ptep, old_pte)) {
				ret = 0;
				goto out;
 			}
@@ -6365,7 +6365,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	ptl = huge_pte_lock(h, mm, ptep);
 	ret = 0;
 	/* If pte changed from under us, retry */
-	if (!pte_same(huge_ptep_get(ptep), old_pte))
+	if (!pte_same(huge_ptep_get(mm, address, ptep), old_pte))
 		goto backout;

 	if (anon_rmap)
@@ -6488,7 +6488,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			return VM_FAULT_OOM;
 	}

-	entry = huge_ptep_get(ptep);
+	entry = huge_ptep_get(mm, address, ptep);
 	if (huge_pte_none_mostly(entry)) {
 		if (is_pte_marker(entry)) {
 			pte_marker marker =
@@ -6529,7 +6529,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			 * be released there.
 			 */
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
-			migration_entry_wait_huge(vma, ptep);
+			migration_entry_wait_huge(vma, haddr, ptep);
 			return 0;
 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
 			ret = VM_FAULT_HWPOISON_LARGE |
@@ -6562,11 +6562,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	ptl = huge_pte_lock(h, mm, ptep);

 	/* Check for a racing update before calling hugetlb_wp() */
-	if (unlikely(!pte_same(entry, huge_ptep_get(ptep))))
+	if (unlikely(!pte_same(entry, huge_ptep_get(mm, address, ptep))))
 		goto out_ptl;

 	/* Handle userfault-wp first, before trying to lock more pages */
-	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(huge_ptep_get(ptep)) &&
+	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(huge_ptep_get(mm, address, ptep)) &&
	    (flags & FAULT_FLAG_WRITE) && !huge_pte_write(entry)) {
 		if (!userfaultfd_wp_async(vma)) {
 			spin_unlock(ptl);
@@ -6689,7 +6689,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		ptl = huge_pte_lock(h, dst_mm, dst_pte);

 		/* Don't overwrite any existing PTEs (even markers) */
-		if (!huge_pte_none(huge_ptep_get(dst_pte))) {
+		if (!huge_pte_none(huge_ptep_get(mm, dst_addr, dst_pte))) {
 			spin_unlock(ptl);
 			return -EEXIST;
 		}
@@ -6826,7 +6826,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	 * page backing it, then access the page.
 	 */
 	ret = -EEXIST;
-	if (!huge_pte_none_mostly(huge_ptep_get(dst_pte)))
+	if (!huge_pte_none_mostly(huge_ptep_get(mm, dst_addr, dst_pte)))
 		goto out_release_unlock;

 	if (folio_in_pagecache)
@@ -6901,7 +6901,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		goto out_unlock;

 	ptl = huge_pte_lock(h, mm, pte);
-	entry = huge_ptep_get(pte);
+	entry = huge_ptep_get(mm, address, pte);
 	if (pte_present(entry)) {
 		page = pte_page(entry);

@@ -7018,7 +7018,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			address |= last_addr_mask;
 			continue;
 		}
-		pte = huge_ptep_get(ptep);
+		pte = huge_ptep_get(mm, address, ptep);
 		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
 			/* Nothing to do. */
 		} else if (unlikely(is_hugetlb_entry_migration(pte))) {
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9349948f1abf..7c65340b1adb 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -820,7 +820,7 @@ static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask,
			    struct mm_walk *walk)
 {
 	struct hwpoison_walk *hwp = walk->private;
-	pte_t pte = huge_ptep_get(ptep);
+	pte_t pte = huge_ptep_get(walk->mm, addr, ptep);
 	struct hstate *h = hstate_vma(walk->vma);

 	return check_hwpoisoned_entry(pte, addr, huge_page_shift(h),
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0fe77738d971..50a79700f496 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -624,7 +624,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	pte_t entry;

 	ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte);
-	entry = huge_ptep_get(pte);
+	entry = huge_ptep_get(walk->mm, addr, pte);
 	if (!pte_present(entry)) {
 		if (unlikely(is_hugetlb_entry_migration(entry)))
 			qp->nr_failed++;
diff --git a/mm/migrate.c b/mm/migrate.c
index 73a052a382f1..87f7aedb8ee2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -338,14 +338,14 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 *
 * This function will release the vma lock before returning.
 */
-void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *ptep)
+void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
 	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
 	pte_t pte;

 	hugetlb_vma_assert_locked(vma);
 	spin_lock(ptl);
-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(vma->vm_mm, addr, ptep);

 	if (unlikely(!is_hugetlb_entry_migration(pte))) {
 		spin_unlock(ptl);
diff --git a/mm/mincore.c b/mm/mincore.c
index dad3622cc963..b5735a4aaa7d 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -33,7 +33,7 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
 	 * Hugepages under user process are always in RAM and never
 	 * swapped out, but theoretically it needs to be checked.
 	 */
-	present = pte && !huge_pte_none_mostly(huge_ptep_get(pte));
+	present = pte && !huge_pte_none_mostly(huge_ptep_get(walk->mm, addr, pte));
 	for (; addr != end; vec++, addr += PAGE_SIZE)
 		*vec = present;
 	walk->private = vec;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9baf507ce193..2b2ce329552e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -555,7 +555,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}

 	if (!uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE) &&
-	    !huge_pte_none_mostly(huge_ptep_get(dst_pte))) {
+	    !huge_pte_none_mostly(huge_ptep_get(dst_mm, dst_addr, dst_pte))) {
 		err = -EEXIST;
 		hugetlb_vma_unlock_read(dst_vma);
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);

From patchwork Mon Mar 25 14:55:58 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602371
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 5/8] powerpc/mm: Allow hugepages without hugepd
Date: Mon, 25 Mar 2024 15:55:58 +0100
Message-ID: <69591a2e4fd72c1dd09eb8e0b2e7f1256fd32304.1711377230.git.christophe.leroy@csgroup.eu>
In preparation of implementing huge pages on powerpc 8xx without hugepd,
enclose hugepd related code inside an ifdef CONFIG_ARCH_HAS_HUGEPD.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/hugetlb.h        |  2 ++
 arch/powerpc/include/asm/nohash/pgtable.h |  8 +++++---
 arch/powerpc/mm/hugetlbpage.c             | 10 ++++++++++
 arch/powerpc/mm/pgtable.c                 |  2 ++
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index ea71f7245a63..a05657e5701b 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -30,10 +30,12 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
 }
 #define is_hugepage_only_range is_hugepage_only_range

+#ifdef CONFIG_ARCH_HAS_HUGEPD
 #define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE
 void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
			    unsigned long end, unsigned long floor,
			    unsigned long ceiling);
+#endif

 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index 427db14292c9..ac3353f7f2ac 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -340,7 +340,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,

 #define pgprot_writecombine pgprot_noncached_wc

-#ifdef CONFIG_HUGETLB_PAGE
+#ifdef CONFIG_ARCH_HAS_HUGEPD
 static inline int hugepd_ok(hugepd_t hpd)
 {
 #ifdef CONFIG_PPC_8xx
@@ -351,6 +351,10 @@ static inline int hugepd_ok(hugepd_t hpd)
 #endif
 }

+#define is_hugepd(hpd)		(hugepd_ok(hpd))
+#endif
+
+#ifdef CONFIG_HUGETLB_PAGE
 static inline int pmd_huge(pmd_t pmd)
 {
 	return 0;
@@ -360,8 +364,6 @@ static inline int pud_huge(pud_t pud)
 {
 	return 0;
 }
-
-#define is_hugepd(hpd)		(hugepd_ok(hpd))
 #endif

 int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 66ac56b26007..db73ad845a2a 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -42,6 +42,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long s
 	return __find_linux_pte(mm->pgd, addr, NULL, NULL);
 }

+#ifdef CONFIG_ARCH_HAS_HUGEPD
 static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
			   unsigned long address, unsigned int pdshift,
			   unsigned int pshift, spinlock_t *ptl)
@@ -193,6 +194,13 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,

 	return hugepte_offset(*hpdp, addr, pdshift);
 }
+#else
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, unsigned long sz)
+{
+	return pte_alloc_huge(mm, pmd_off(mm, addr), addr, sz);
+}
+#endif

 #ifdef CONFIG_PPC_BOOK3S_64
 /*
@@ -248,6 +256,7 @@ int __init alloc_bootmem_huge_page(struct hstate *h, int nid)
 	return __alloc_bootmem_huge_page(h, nid);
 }

+#ifdef CONFIG_ARCH_HAS_HUGEPD
 #ifndef CONFIG_PPC_BOOK3S_64
 #define HUGEPD_FREELIST_SIZE \
	((PAGE_SIZE - sizeof(struct hugepd_freelist)) / sizeof(pte_t))
@@ -505,6 +514,7 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 		}
 	} while (addr = next, addr != end);
 }
+#endif

 bool __init arch_hugetlb_valid_size(unsigned long size)
 {
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 9e7ba9c3851f..acdf64c9b93e 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -487,8 +487,10 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 	if (!hpdp)
 		return NULL;

+#ifdef CONFIG_ARCH_HAS_HUGEPD
 	ret_pte = hugepte_offset(*hpdp, ea, pdshift);
 	pdshift = hugepd_shift(*hpdp);
+#endif
out:
 	if (hpage_shift)
 		*hpage_shift = pdshift;

From patchwork Mon Mar 25 14:55:59 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602372
From: Christophe Leroy
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 6/8] powerpc/8xx: Fix size given to set_huge_pte_at()
Date: Mon, 25 Mar 2024 15:55:59 +0100
set_huge_pte_at() expects the real page size, not the psize, which is
the index of the page definition in table mmu_psize_defs[].

Fixes: 935d4f0c6dc8 ("mm: hugetlb: add huge page size param to set_huge_pte_at()")
Signed-off-by: Christophe Leroy
---
 arch/powerpc/mm/nohash/8xx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 6be6421086ed..70b4d807fda5 100644
--- a/arch/powerpc/mm/nohash/8xx.c
+++ b/arch/powerpc/mm/nohash/8xx.c
@@ -94,7 +94,8 @@ static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
 		return -EINVAL;
 
 	set_huge_pte_at(&init_mm, va, ptep,
-			pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)), psize);
+			pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)),
+			1UL << mmu_psize_to_shift(psize));
 
 	return 0;
 }

From patchwork Mon Mar 25 14:56:00 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602373
From: Christophe Leroy
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 7/8] powerpc/8xx: Remove support for 8M pages
Date: Mon, 25 Mar 2024 15:56:00 +0100
Message-ID: <288f26f487648d21fd9590e40b390934eaa5d24a.1711377230.git.christophe.leroy@csgroup.eu>

Remove support for 8M pages in order to stop using hugepd. Support for
8M pages will be added back later using the same approach as for 512k
pages, that is, using contiguous page entries in the regular page table.
Signed-off-by: Christophe Leroy
---
 arch/powerpc/Kconfig                             |  1 -
 .../include/asm/nohash/32/hugetlb-8xx.h          | 30 -------------------
 arch/powerpc/include/asm/nohash/32/pte-8xx.h     | 14 +--------
 arch/powerpc/include/asm/nohash/pgtable.h        |  4 ---
 arch/powerpc/include/asm/page.h                  |  5 ----
 arch/powerpc/kernel/head_8xx.S                   |  9 +-----
 arch/powerpc/mm/hugetlbpage.c                    |  3 --
 arch/powerpc/mm/nohash/8xx.c                     | 28 ++---------------
 arch/powerpc/mm/nohash/tlb.c                     |  3 --
 arch/powerpc/platforms/Kconfig.cputype           |  2 ++
 10 files changed, 7 insertions(+), 92 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a68b9e637eda..74c038cf770c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,7 +135,6 @@ config PPC
 	select ARCH_HAS_DMA_MAP_DIRECT		if PPC_PSERIES
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
-	select ARCH_HAS_HUGEPD			if HUGETLB_PAGE
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
index 92df40c6cc6b..178ed9fdd353 100644
--- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
@@ -4,42 +4,12 @@
 
 #define PAGE_SHIFT_8M		23
 
-static inline pte_t *hugepd_page(hugepd_t hpd)
-{
-	BUG_ON(!hugepd_ok(hpd));
-
-	return (pte_t *)__va(hpd_val(hpd) & ~HUGEPD_SHIFT_MASK);
-}
-
-static inline unsigned int hugepd_shift(hugepd_t hpd)
-{
-	return PAGE_SHIFT_8M;
-}
-
-static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr,
-				    unsigned int pdshift)
-{
-	unsigned long idx = (addr & (SZ_4M - 1)) >> PAGE_SHIFT;
-
-	return hugepd_page(hpd) + idx;
-}
-
 static inline void flush_hugetlb_page(struct vm_area_struct *vma,
 				      unsigned long vmaddr)
 {
 	flush_tlb_page(vma, vmaddr);
 }
 
-static inline void hugepd_populate(hugepd_t *hpdp, pte_t *new, unsigned int pshift)
-{
-	*hpdp = __hugepd(__pa(new) | _PMD_USER | _PMD_PRESENT | _PMD_PAGE_8M);
-}
-
-static inline void hugepd_populate_kernel(hugepd_t *hpdp, pte_t *new, unsigned int pshift)
-{
-	*hpdp = __hugepd(__pa(new) | _PMD_PRESENT | _PMD_PAGE_8M);
-}
-
 static inline int check_and_get_huge_psize(int shift)
 {
 	return shift_to_mmu_psize(shift);
diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
index 07df6b664861..004d7e825af2 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
@@ -142,15 +142,6 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, pte_t *pt
 }
 #define __ptep_set_access_flags __ptep_set_access_flags
 
-static inline unsigned long pgd_leaf_size(pgd_t pgd)
-{
-	if (pgd_val(pgd) & _PMD_PAGE_8M)
-		return SZ_8M;
-	return SZ_4M;
-}
-
-#define pgd_leaf_size pgd_leaf_size
-
 static inline unsigned long pte_leaf_size(pmd_t pmd, pte_t pte)
 {
 	pte_basic_t val = pte_val(pte);
@@ -171,14 +162,11 @@ static inline unsigned long pte_leaf_size(pmd_t pmd, pte_t pte)
  * For other page sizes, we have a single entry in the table.
  */
 static pmd_t *pmd_off(struct mm_struct *mm, unsigned long addr);
-static int hugepd_ok(hugepd_t hpd);
 
 static inline int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge)
 {
 	if (!huge)
 		return PAGE_SIZE / SZ_4K;
-	else if (hugepd_ok(*((hugepd_t *)pmd)))
-		return 1;
 	else if (IS_ENABLED(CONFIG_PPC_4K_PAGES) && !(val & _PAGE_HUGE))
 		return SZ_16K / SZ_4K;
 	else
@@ -198,7 +186,7 @@ static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, p
 
 	for (i = 0; i < num; i += PAGE_SIZE / SZ_4K, new += PAGE_SIZE) {
 		*entry++ = new;
-		if (IS_ENABLED(CONFIG_PPC_16K_PAGES) && num != 1) {
+		if (IS_ENABLED(CONFIG_PPC_16K_PAGES)) {
 			*entry++ = new;
 			*entry++ = new;
 			*entry++ = new;
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index ac3353f7f2ac..c4be7754e96f 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -343,12 +343,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 #ifdef CONFIG_ARCH_HAS_HUGEPD
 static inline int hugepd_ok(hugepd_t hpd)
 {
-#ifdef CONFIG_PPC_8xx
-	return ((hpd_val(hpd) & _PMD_PAGE_MASK) == _PMD_PAGE_8M);
-#else
 	/* We clear the top bit to indicate hugepd */
 	return (hpd_val(hpd) && (hpd_val(hpd) & PD_HUGE) == 0);
-#endif
 }
 #define is_hugepd(hpd)		(hugepd_ok(hpd))
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index e411e5a70ea3..018c3d55232c 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -293,13 +293,8 @@ static inline const void *pfn_to_kaddr(unsigned long pfn)
 /*
  * Some number of bits at the level of the page table that points to
  * a hugepte are used to encode the size.  This masks those bits.
- * On 8xx, HW assistance requires 4k alignment for the hugepte.
 */
-#ifdef CONFIG_PPC_8xx
-#define HUGEPD_SHIFT_MASK	0xfff
-#else
 #define HUGEPD_SHIFT_MASK	0x3f
-#endif
 
 #ifndef __ASSEMBLY__
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 647b0b445e89..b53af565b132 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -416,13 +416,11 @@ FixupDAR:/* Entry point for dcbx workaround. */
 3:	lwz	r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11)	/* Get the level 1 entry */
 	mtspr	SPRN_MD_TWC, r11
-	mtcrf	0x01, r11
 	mfspr	r11, SPRN_MD_TWC
 	lwz	r11, 0(r11)	/* Get the pte */
-	bt	28,200f		/* bit 28 = Large page (8M) */
 	/* concat physical page address(r11) and page offset(r10) */
 	rlwimi	r11, r10, 0, 32 - PAGE_SHIFT, 31
-201:	lwz	r11,0(r11)
+	lwz	r11,0(r11)
 	/* Check if it really is a dcbx instruction. */
 	/* dcbt and dcbtst does not generate DTLB Misses/Errors,
 	 * no need to include them here */
@@ -441,11 +439,6 @@ FixupDAR:/* Entry point for dcbx workaround. */
 141:	mfspr	r10,SPRN_M_TW
 	b	DARFixed	/* Nope, go back to normal TLB processing */
 
-200:
-	/* concat physical page address(r11) and page offset(r10) */
-	rlwimi	r11, r10, 0, 32 - PAGE_SHIFT_8M, 31
-	b	201b
-
 144:	mfspr	r10, SPRN_DSISR
 	rlwinm	r10, r10,0,7,5	/* Clear store bit for buggy dcbst insn */
 	mtspr	SPRN_DSISR, r10
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index db73ad845a2a..4e9fbd5b895d 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -183,9 +183,6 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!hpdp)
 		return NULL;
 
-	if (IS_ENABLED(CONFIG_PPC_8xx) && pshift < PMD_SHIFT)
-		return pte_alloc_huge(mm, (pmd_t *)hpdp, addr, sz);
-
 	BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
 
 	if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr,
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 70b4d807fda5..fc10e08bcb85 100644
--- a/arch/powerpc/mm/nohash/8xx.c
+++ b/arch/powerpc/mm/nohash/8xx.c
@@ -48,42 +48,22 @@ unsigned long p_block_mapped(phys_addr_t pa)
 	return 0;
 }
 
-static pte_t __init *early_hugepd_alloc_kernel(hugepd_t *pmdp, unsigned long va)
-{
-	if (hpd_val(*pmdp) == 0) {
-		pte_t *ptep = memblock_alloc(sizeof(pte_basic_t), SZ_4K);
-
-		if (!ptep)
-			return NULL;
-
-		hugepd_populate_kernel((hugepd_t *)pmdp, ptep, PAGE_SHIFT_8M);
-		hugepd_populate_kernel((hugepd_t *)pmdp + 1, ptep, PAGE_SHIFT_8M);
-	}
-	return hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
-}
-
 static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
 					     pgprot_t prot, int psize, bool new)
 {
 	pmd_t *pmdp = pmd_off_k(va);
 	pte_t *ptep;
 
-	if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M))
+	if (WARN_ON(psize != MMU_PAGE_512K))
 		return -EINVAL;
 
 	if (new) {
 		if (WARN_ON(slab_is_available()))
 			return -EINVAL;
 
-		if (psize == MMU_PAGE_512K)
-			ptep = early_pte_alloc_kernel(pmdp, va);
-		else
-			ptep = early_hugepd_alloc_kernel((hugepd_t *)pmdp, va);
+		ptep = early_pte_alloc_kernel(pmdp, va);
 	} else {
-		if (psize == MMU_PAGE_512K)
-			ptep = pte_offset_kernel(pmdp, va);
-		else
-			ptep = hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
+		ptep = pte_offset_kernel(pmdp, va);
 	}
 
 	if (WARN_ON(!ptep))
@@ -130,8 +110,6 @@ static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top,
 
 	for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K, v += SZ_512K)
 		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
-	for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M, v += SZ_8M)
-		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new);
 	for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K, v += SZ_512K)
 		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 5ffa0af4328a..cb2afe39cee5 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -104,9 +104,6 @@ struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT] = {
 	[MMU_PAGE_512K] = {
 		.shift	= 19,
 	},
-	[MMU_PAGE_8M] = {
-		.shift	= 23,
-	},
 };
 #endif
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index b2d8c0da2ad9..fa4bb096b3ae 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -98,6 +98,7 @@ config PPC_BOOK3S_64
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION	if HUGETLB_PAGE && MIGRATION
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	select ARCH_ENABLE_THP_MIGRATION	if TRANSPARENT_HUGEPAGE
+	select ARCH_HAS_HUGEPD			if HUGETLB_PAGE
 	select ARCH_SUPPORTS_HUGETLBFS
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select HAVE_MOVE_PMD
@@ -290,6 +291,7 @@ config PPC_BOOK3S
 config PPC_E500
 	select FSL_EMB_PERFMON
 	bool
+	select ARCH_HAS_HUGEPD			if HUGETLB_PAGE
 	select ARCH_SUPPORTS_HUGETLBFS		if PHYS_64BIT || PPC64
 	select PPC_SMP_MUXED_IPI
 	select PPC_DOORBELL

From patchwork Mon Mar 25 14:56:01 2024
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 13602374
From: Christophe Leroy
To: Andrew Morton, Jason Gunthorpe, Peter Xu
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH 8/8] powerpc/8xx: Add back support for 8M pages using contiguous PTE entries
Date: Mon, 25 Mar 2024 15:56:01 +0100
Message-ID: <57c49d8be1e3f1546474ab7cbe2cce37919305d5.1711377230.git.christophe.leroy@csgroup.eu>

In order to fit better with the standard Linux page table layout, add
support for 8M pages using contiguous PTE entries in a standard page
table. Page tables are then populated with 1024 similar entries, and two
PMD entries point to that page table. The PMD entries also get a flag to
tell that they are addressing an 8M page; this is required for the HW
tablewalk assistance.
Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/hugetlb.h               | 11 ++++-
 .../include/asm/nohash/32/hugetlb-8xx.h          | 28 +++++++++++-
 arch/powerpc/include/asm/nohash/32/pgalloc.h     |  2 +
 arch/powerpc/include/asm/nohash/32/pte-8xx.h     | 43 +++++++++++++++++--
 arch/powerpc/include/asm/pgtable.h               |  1 +
 arch/powerpc/kernel/head_8xx.S                   |  1 +
 arch/powerpc/mm/hugetlbpage.c                    | 12 +++++-
 arch/powerpc/mm/nohash/8xx.c                     | 31 ++++++++++---
 arch/powerpc/mm/nohash/tlb.c                     |  3 ++
 arch/powerpc/mm/pgtable.c                        | 24 +++++++----
 arch/powerpc/mm/pgtable_32.c                     |  2 +-
 11 files changed, 134 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index a05657e5701b..bd60ea134f8e 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -41,7 +41,16 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep)
 {
-	return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
+	pmd_t *pmdp = (pmd_t *)ptep;
+	pte_t pte;
+
+	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {
+		pte = __pte(pte_update(mm, addr, pte_offset_kernel(pmdp, 0), ~0UL, 0, 1));
+		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), ~0UL, 0, 1);
+	} else {
+		pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
+	}
+	return pte;
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
index 178ed9fdd353..1414cfd28987 100644
--- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
@@ -15,6 +15,16 @@ static inline int check_and_get_huge_psize(int shift)
 	return shift_to_mmu_psize(shift);
 }
 
+#define __HAVE_ARCH_HUGE_PTEP_GET
+static inline pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+{
+	pmd_t *pmdp = (pmd_t *)ptep;
+
+	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M)))
+		ptep = pte_offset_kernel(pmdp, 0);
+	return ptep_get(ptep);
+}
+
 #define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 		     pte_t pte, unsigned long sz);
@@ -23,7 +33,14 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 				  pte_t *ptep, unsigned long sz)
 {
-	pte_update(mm, addr, ptep, ~0UL, 0, 1);
+	pmd_t *pmdp = (pmd_t *)ptep;
+
+	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {
+		pte_update(mm, addr, pte_offset_kernel(pmdp, 0), ~0UL, 0, 1);
+		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), ~0UL, 0, 1);
+	} else {
+		pte_update(mm, addr, ptep, ~0UL, 0, 1);
+	}
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
@@ -33,7 +50,14 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
 	unsigned long set = pte_val(pte_wrprotect(__pte(0)));
-	pte_update(mm, addr, ptep, clr, set, 1);
+	pmd_t *pmdp = (pmd_t *)ptep;
+
+	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {
+		pte_update(mm, addr, pte_offset_kernel(pmdp, 0), clr, set, 1);
+		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), clr, set, 1);
+	} else {
+		pte_update(mm, addr, ptep, clr, set, 1);
+	}
 }
 
 #ifdef CONFIG_PPC_4K_PAGES
diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index 11eac371e7e0..ff4f90cfb461 100644
--- a/arch/powerpc/include/asm/nohash/32/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
@@ -14,6 +14,7 @@
 #define __pmd_free_tlb(tlb,x,a)		do { } while (0)
 /* #define pgd_populate(mm, pmd, pte)      BUG() */
 
+#ifndef CONFIG_PPC_8xx
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
 				       pte_t *pte)
 {
@@ -31,5 +32,6 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
 	else
 		*pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT);
 }
+#endif
 
 #endif /* _ASM_POWERPC_PGALLOC_32_H */
diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
index 004d7e825af2..b05cc4f87713 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
@@ -129,14 +129,23 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 }
 #define ptep_set_wrprotect ptep_set_wrprotect
 
+static pmd_t *pmd_off(struct mm_struct *mm, unsigned long addr);
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address);
+
 static inline void __ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
 					   pte_t entry, unsigned long address, int psize)
 {
 	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_EXEC);
 	unsigned long clr = ~pte_val(entry) & _PAGE_RO;
 	int huge = psize > mmu_virtual_psize ? 1 : 0;
+	pmd_t *pmdp = (pmd_t *)ptep;
 
-	pte_update(vma->vm_mm, address, ptep, clr, set, huge);
+	if (pmdp == pmd_off(vma->vm_mm, ALIGN_DOWN(address, SZ_8M))) {
+		pte_update(vma->vm_mm, address, pte_offset_kernel(pmdp, 0), clr, set, huge);
+		pte_update(vma->vm_mm, address, pte_offset_kernel(pmdp + 1, 0), clr, set, huge);
+	} else {
+		pte_update(vma->vm_mm, address, ptep, clr, set, huge);
+	}
 
 	flush_tlb_page(vma, address);
 }
@@ -146,6 +155,8 @@ static inline unsigned long pte_leaf_size(pmd_t pmd, pte_t pte)
 {
 	pte_basic_t val = pte_val(pte);
 
+	if (pmd_val(pmd) & _PMD_PAGE_8M)
+		return SZ_8M;
 	if (val & _PAGE_HUGE)
 		return SZ_512K;
 	if (val & _PAGE_SPS)
@@ -159,14 +170,16 @@ static inline unsigned long pte_leaf_size(pmd_t pmd, pte_t pte)
  * On the 8xx, the page tables are a bit special. For 16k pages, we have
  * 4 identical entries. For 512k pages, we have 128 entries as if it was
  * 4k pages, but they are flagged as 512k pages for the hardware.
- * For other page sizes, we have a single entry in the table.
+ * For 8M pages, we have 1024 entries as if it was
+ * 4M pages, but they are flagged as 8M pages for the hardware.
+ * For 4k pages, we have a single entry in the table.
  */
-static pmd_t *pmd_off(struct mm_struct *mm, unsigned long addr);
-
 static inline int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge)
 {
 	if (!huge)
 		return PAGE_SIZE / SZ_4K;
+	else if ((pmd_val(*pmd) & _PMD_PAGE_MASK) == _PMD_PAGE_8M)
+		return SZ_4M / SZ_4K;
 	else if (IS_ENABLED(CONFIG_PPC_4K_PAGES) && !(val & _PAGE_HUGE))
 		return SZ_16K / SZ_4K;
 	else
@@ -209,6 +222,28 @@ static inline pte_t ptep_get(pte_t *ptep)
 }
 #endif /* CONFIG_PPC_16K_PAGES */
 
+static inline void pmd_populate_kernel_size(struct mm_struct *mm, pmd_t *pmdp,
+					    pte_t *pte, unsigned long sz)
+{
+	if (sz == SZ_8M)
+		*pmdp = __pmd(__pa(pte) | _PMD_PRESENT | _PMD_PAGE_8M);
+	else
+		*pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
+}
+
+static inline void pmd_populate_size(struct mm_struct *mm, pmd_t *pmdp,
+				     pgtable_t pte_page, unsigned long sz)
+{
+	if (sz == SZ_8M)
+		*pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT | _PMD_PAGE_8M);
+	else
+		*pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT);
+}
+#define pmd_populate_size pmd_populate_size
+
+#define pmd_populate(mm, pmdp, pte) pmd_populate_size(mm, pmdp, pte, PAGE_SIZE)
+#define pmd_populate_kernel(mm, pmdp, pte) pmd_populate_kernel_size(mm, pmdp, pte, PAGE_SIZE)
+
 #endif
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 239709a2f68e..005dad336565 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -106,6 +106,7 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
 
 void pgtable_cache_add(unsigned int shift);
 
+void __init *early_alloc_pgtable(unsigned long size);
 pte_t *early_pte_alloc_kernel(pmd_t *pmdp, unsigned long va);
 
 #if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_PPC32)
diff --git a/arch/powerpc/kernel/head_8xx.S
b/arch/powerpc/kernel/head_8xx.S index b53af565b132..43919ae0bd11 100644 --- a/arch/powerpc/kernel/head_8xx.S +++ b/arch/powerpc/kernel/head_8xx.S @@ -415,6 +415,7 @@ FixupDAR:/* Entry point for dcbx workaround. */ oris r11, r11, (swapper_pg_dir - PAGE_OFFSET)@ha 3: lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11) /* Get the level 1 entry */ + rlwinm r11, r11, 0, ~_PMD_PAGE_8M mtspr SPRN_MD_TWC, r11 mfspr r11, SPRN_MD_TWC lwz r11, 0(r11) /* Get the pte */ diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index 4e9fbd5b895d..dd29845ce0ce 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -195,7 +195,17 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma, pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, unsigned long sz) { - return pte_alloc_huge(mm, pmd_off(mm, addr), addr, sz); + pmd_t *pmd = pmd_off(mm, addr); + + if (sz == SZ_512M) + return pte_alloc_huge(mm, pmd, addr, sz); + if (sz != SZ_8M) + return NULL; + if (!pte_alloc_huge(mm, pmd, addr, sz)) + return NULL; + if (!pte_alloc_huge(mm, pmd + 1, addr, sz)) + return NULL; + return (pte_t *)pmd; } #endif diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c index fc10e08bcb85..b416bfc161d4 100644 --- a/arch/powerpc/mm/nohash/8xx.c +++ b/arch/powerpc/mm/nohash/8xx.c @@ -54,25 +54,40 @@ static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa, pmd_t *pmdp = pmd_off_k(va); pte_t *ptep; - if (WARN_ON(psize != MMU_PAGE_512K)) + if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M)) return -EINVAL; if (new) { if (WARN_ON(slab_is_available())) return -EINVAL; - ptep = early_pte_alloc_kernel(pmdp, va); + if (psize == MMU_PAGE_8M) { + if (WARN_ON(!pmd_none(*pmdp) || !pmd_none(*(pmdp + 1)))) + return -EINVAL; + + ptep = early_alloc_pgtable(PTE_FRAG_SIZE); + pmd_populate_kernel_size(&init_mm, pmdp, ptep, SZ_8M); + + ptep = early_alloc_pgtable(PTE_FRAG_SIZE); + 
pmd_populate_kernel_size(&init_mm, pmdp + 1, ptep, SZ_8M); + + ptep = (pte_t *)pmdp; + } else { + ptep = early_pte_alloc_kernel(pmdp, va); + /* The PTE should never be already present */ + if (WARN_ON(pte_present(*ptep) && pgprot_val(prot))) + return -EINVAL; + } } else { - ptep = pte_offset_kernel(pmdp, va); + if (psize == MMU_PAGE_8M) + ptep = (pte_t *)pmdp; + else + ptep = pte_offset_kernel(pmdp, va); } if (WARN_ON(!ptep)) return -ENOMEM; - /* The PTE should never be already present */ - if (new && WARN_ON(pte_present(*ptep) && pgprot_val(prot))) - return -EINVAL; - set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)), 1UL << mmu_psize_to_shift(psize)); @@ -110,6 +125,8 @@ static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top, for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K, v += SZ_512K) __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new); + for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M, v += SZ_8M) + __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new); for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K, v += SZ_512K) __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new); diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c index cb2afe39cee5..5ffa0af4328a 100644 --- a/arch/powerpc/mm/nohash/tlb.c +++ b/arch/powerpc/mm/nohash/tlb.c @@ -104,6 +104,9 @@ struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT] = { [MMU_PAGE_512K] = { .shift = 19, }, + [MMU_PAGE_8M] = { + .shift = 23, + }, }; #endif diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index acdf64c9b93e..59f0d7706d2f 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -297,11 +297,8 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma, } #if defined(CONFIG_PPC_8xx) -void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte, unsigned long sz) +static void __set_huge_pte_at(pmd_t *pmd, pte_t *ptep, pte_basic_t val) { - pmd_t 
*pmd = pmd_off(mm, addr); - pte_basic_t val; pte_basic_t *entry = (pte_basic_t *)ptep; int num, i; @@ -311,15 +308,26 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, */ VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep)); - pte = set_pte_filter(pte, addr); - - val = pte_val(pte); - num = number_of_cells_per_pte(pmd, val, 1); for (i = 0; i < num; i++, entry++, val += SZ_4K) *entry = val; } + +void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned long sz) +{ + pmd_t *pmdp = pmd_off(mm, addr); + + pte = set_pte_filter(pte, addr); + + if (sz == SZ_8M) { + __set_huge_pte_at(pmdp, pte_offset_kernel(pmdp, 0), pte_val(pte)); + __set_huge_pte_at(pmdp, pte_offset_kernel(pmdp + 1, 0), pte_val(pte) + SZ_4M); + } else { + __set_huge_pte_at(pmdp, ptep, pte_val(pte)); + } +} #endif #endif /* CONFIG_HUGETLB_PAGE */ diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c index face94977cb2..0b1d68ef87cd 100644 --- a/arch/powerpc/mm/pgtable_32.c +++ b/arch/powerpc/mm/pgtable_32.c @@ -48,7 +48,7 @@ notrace void __init early_ioremap_init(void) early_ioremap_setup(); } -static void __init *early_alloc_pgtable(unsigned long size) +void __init *early_alloc_pgtable(unsigned long size) { void *ptr = memblock_alloc(size, size);