From patchwork Tue May 14 14:04:40 2024
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 13664170
From: Björn Töpel
To: Alexandre Ghiti, Albert Ou, David Hildenbrand, Palmer Dabbelt,
 Paul Walmsley, linux-riscv@lists.infradead.org
Cc: Björn Töpel, Andrew Bresticker, Chethan Seshadri, Lorenzo Stoakes,
 Oscar Salvador, Santosh Mamila, Sivakumar Munnangi, Sunil V L,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 virtualization@lists.linux-foundation.org
Subject: [PATCH v2 2/8] riscv: mm: Change attribute from __init to __meminit for page functions
Date: Tue, 14 May 2024 16:04:40 +0200
Message-Id: <20240514140446.538622-3-bjorn@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240514140446.538622-1-bjorn@kernel.org>
References: <20240514140446.538622-1-bjorn@kernel.org>
MIME-Version: 1.0

From: Björn Töpel

Prepare for memory hotplug support by changing the attribute of the page
table functions used by the upcoming architecture-specific callbacks from
__init to __meminit. With __init, the functions are discarded after boot;
__meminit keeps them in the kernel text after init, but only when memory
hotplug is enabled in the build.

Also make sure that the altmap parameter is properly passed on to
vmemmap_populate_hugepages().
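For reference, the behavior described above follows from how __meminit and
__meminitdata are defined in include/linux/init.h. A simplified sketch
(recent kernels; exact definitions may differ between versions):

/* Simplified sketch of the definitions in include/linux/init.h. */
#ifdef CONFIG_MEMORY_HOTPLUG
/* Hotplug builds: keep the annotated code and data in the normal sections. */
#define __meminit
#define __meminitdata
#else
/* No hotplug: treat it as init code/data, which is freed after boot. */
#define __meminit	__init
#define __meminitdata	__initdata
#endif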
Signed-off-by: Björn Töpel
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
---
 arch/riscv/include/asm/mmu.h     |  4 +--
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/riscv/mm/init.c             | 58 ++++++++++++++------------------
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 60be458e94da..c09c3c79f496 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -28,8 +28,8 @@ typedef struct {
 #endif
 } mm_context_t;
 
-void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa,
-			       phys_addr_t sz, pgprot_t prot);
+void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+				  pgprot_t prot);
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 58fd7b70b903..7933f493db71 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -162,7 +162,7 @@ struct pt_alloc_ops {
 #endif
 };
 
-extern struct pt_alloc_ops pt_ops __initdata;
+extern struct pt_alloc_ops pt_ops __meminitdata;
 
 #ifdef CONFIG_MMU
 /* Number of PGD entries that a user-mode program can use */
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 5b8cdfafb52a..c969427eab88 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -295,7 +295,7 @@ static void __init setup_bootmem(void)
 }
 
 #ifdef CONFIG_MMU
-struct pt_alloc_ops pt_ops __initdata;
+struct pt_alloc_ops pt_ops __meminitdata;
 
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
 pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
@@ -357,7 +357,7 @@ static inline pte_t *__init get_pte_virt_fixmap(phys_addr_t pa)
 	return (pte_t *)set_fixmap_offset(FIX_PTE, pa);
 }
 
-static inline pte_t *__init get_pte_virt_late(phys_addr_t pa)
+static inline pte_t *__meminit get_pte_virt_late(phys_addr_t pa)
 {
 	return (pte_t *) __va(pa);
 }
@@ -376,7 +376,7 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
 	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }
 
-static phys_addr_t __init alloc_pte_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pte_late(uintptr_t va)
 {
 	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
 
@@ -384,9 +384,8 @@ static phys_addr_t __init alloc_pte_late(uintptr_t va)
 	return __pa((pte_t *)ptdesc_address(ptdesc));
 }
 
-static void __init create_pte_mapping(pte_t *ptep,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pte_mapping(pte_t *ptep, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
 	uintptr_t pte_idx = pte_index(va);
 
@@ -440,7 +439,7 @@ static pmd_t *__init get_pmd_virt_fixmap(phys_addr_t pa)
 	return (pmd_t *)set_fixmap_offset(FIX_PMD, pa);
 }
 
-static pmd_t *__init get_pmd_virt_late(phys_addr_t pa)
+static pmd_t *__meminit get_pmd_virt_late(phys_addr_t pa)
 {
 	return (pmd_t *) __va(pa);
 }
@@ -457,7 +456,7 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
 	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }
 
-static phys_addr_t __init alloc_pmd_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pmd_late(uintptr_t va)
 {
 	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
 
@@ -465,9 +464,9 @@ static phys_addr_t __init alloc_pmd_late(uintptr_t va)
 	return __pa((pmd_t *)ptdesc_address(ptdesc));
 }
 
-static void __init create_pmd_mapping(pmd_t *pmdp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pmd_mapping(pmd_t *pmdp,
+					 uintptr_t va, phys_addr_t pa,
+					 phys_addr_t sz, pgprot_t prot)
 {
 	pte_t *ptep;
 	phys_addr_t pte_phys;
@@ -503,7 +502,7 @@ static pud_t *__init get_pud_virt_fixmap(phys_addr_t pa)
 	return (pud_t *)set_fixmap_offset(FIX_PUD, pa);
 }
 
-static pud_t *__init get_pud_virt_late(phys_addr_t pa)
+static pud_t *__meminit get_pud_virt_late(phys_addr_t pa)
 {
 	return (pud_t *)__va(pa);
 }
@@ -521,7 +520,7 @@ static phys_addr_t __init alloc_pud_fixmap(uintptr_t va)
 	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }
 
-static phys_addr_t alloc_pud_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pud_late(uintptr_t va)
 {
 	unsigned long vaddr;
 
@@ -541,7 +540,7 @@ static p4d_t *__init get_p4d_virt_fixmap(phys_addr_t pa)
 	return (p4d_t *)set_fixmap_offset(FIX_P4D, pa);
 }
 
-static p4d_t *__init get_p4d_virt_late(phys_addr_t pa)
+static p4d_t *__meminit get_p4d_virt_late(phys_addr_t pa)
 {
 	return (p4d_t *)__va(pa);
 }
@@ -559,7 +558,7 @@ static phys_addr_t __init alloc_p4d_fixmap(uintptr_t va)
 	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }
 
-static phys_addr_t alloc_p4d_late(uintptr_t va)
+static phys_addr_t __meminit alloc_p4d_late(uintptr_t va)
 {
 	unsigned long vaddr;
 
@@ -568,9 +567,8 @@ static phys_addr_t alloc_p4d_late(uintptr_t va)
 	return __pa(vaddr);
 }
 
-static void __init create_pud_mapping(pud_t *pudp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pud_mapping(pud_t *pudp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
 	pmd_t *nextp;
 	phys_addr_t next_phys;
@@ -595,9 +593,8 @@ static void __init create_pud_mapping(pud_t *pudp,
 	create_pmd_mapping(nextp, va, pa, sz, prot);
 }
 
-static void __init create_p4d_mapping(p4d_t *p4dp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_p4d_mapping(p4d_t *p4dp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
 	pud_t *nextp;
 	phys_addr_t next_phys;
@@ -653,9 +650,8 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
 #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-void __init create_pgd_mapping(pgd_t *pgdp,
-			       uintptr_t va, phys_addr_t pa,
-			       phys_addr_t sz, pgprot_t prot)
+void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+				  pgprot_t prot)
 {
 	pgd_next_t *nextp;
 	phys_addr_t next_phys;
@@ -680,8 +676,7 @@ void __init create_pgd_mapping(pgd_t *pgdp,
 	create_pgd_next_mapping(nextp, va, pa, sz, prot);
 }
 
-static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va,
-				      phys_addr_t size)
+static uintptr_t __meminit best_map_size(phys_addr_t pa, uintptr_t va, phys_addr_t size)
 {
 	if (pgtable_l5_enabled &&
 	    !(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE)
@@ -714,7 +709,7 @@ asmlinkage void __init __copy_data(void)
 #endif
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
-static __init pgprot_t pgprot_from_va(uintptr_t va)
+static __meminit pgprot_t pgprot_from_va(uintptr_t va)
 {
 	if (is_va_kernel_text(va))
 		return PAGE_KERNEL_READ_EXEC;
@@ -739,7 +734,7 @@ void mark_rodata_ro(void)
 				  set_memory_ro);
 }
 #else
-static __init pgprot_t pgprot_from_va(uintptr_t va)
+static __meminit pgprot_t pgprot_from_va(uintptr_t va)
 {
 	if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va))
 		return PAGE_KERNEL;
@@ -1231,9 +1226,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 	pt_ops_set_fixmap();
 }
 
-static void __init create_linear_mapping_range(phys_addr_t start,
-					       phys_addr_t end,
-					       uintptr_t fixed_map_size)
+static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t end,
+						  uintptr_t fixed_map_size)
 {
 	phys_addr_t pa;
 	uintptr_t va, map_size;
@@ -1435,7 +1429,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	 * memory hotplug, we are not able to update all the page tables with
 	 * the new PMDs.
 	 */
-	return vmemmap_populate_hugepages(start, end, node, NULL);
+	return vmemmap_populate_hugepages(start, end, node, altmap);
 }
 #endif
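As a usage illustration only (hypothetical and not part of this patch; the
actual callback arrives later in this series and may differ, e.g. in how page
protections, TLB flushing, and error unwinding are handled), an
arch_add_memory() implementation in arch/riscv/mm/init.c could use the
now-__meminit helpers roughly like this:

#include <linux/memory_hotplug.h>

/*
 * Hypothetical sketch: shows why these helpers must survive past init. A
 * memory hotplug callback extends the kernel linear map at runtime and then
 * hands the new range over to the core mm.
 */
int __ref arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
{
	int ret;

	/* Map the hotplugged range in the kernel linear mapping. */
	create_linear_mapping_range(start, start + size, 0);

	/* Register the pages with the core memory hotplug code. */
	ret = __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, params);
	if (ret)
		pr_err("%s: failed to add [%llx, %llx)\n", __func__, start, start + size);

	return ret;
}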