From patchwork Thu Mar 6 18:51:11 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005176
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Gordeev, Andreas Larsson, Andy Lutomirski, Arnd Bergmann,
    Borislav Petkov, Brian Cain, Catalin Marinas, Dave Hansen,
    "David S. Miller", Dinh Nguyen, Geert Uytterhoeven, Gerald Schaefer,
    Guo Ren, Heiko Carstens, Helge Deller, Huacai Chen, Ingo Molnar,
    Jiaxun Yang, Johannes Berg, John Paul Adrian Glaubitz,
    Madhavan Srinivasan, Matt Turner, Max Filippov, Michael Ellerman,
    Michal Simek, Mike Rapoport, Palmer Dabbelt, Peter Zijlstra,
    Richard Weinberger, Russell King, Stafford Horne, Thomas Bogendoerfer,
    Thomas Gleixner, Vasily Gorbik, Vineet Gupta, Will Deacon,
    linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
    linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-um@lists.infradead.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH 01/13] arm: mem_init: use memblock_phys_free() to free DMA memory on SA1111
Date: Thu, 6 Mar 2025 20:51:11 +0200
Message-ID: <20250306185124.3147510-2-rppt@kernel.org>
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

This will help to pull out memblock_free_all() to generic code.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/arm/mm/init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 5345d218899a..9aec1cb2386f 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -277,14 +277,14 @@ void __init mem_init(void)
 
 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
 
-	/* this will put all unused low memory onto the freelists */
-	memblock_free_all();
-
 #ifdef CONFIG_SA1111
 	/* now that our DMA memory is actually so designated, we can free it */
-	free_reserved_area(__va(PHYS_OFFSET), swapper_pg_dir, -1, NULL);
+	memblock_phys_free(PHYS_OFFSET, __pa(swapper_pg_dir) - PHYS_OFFSET);
 #endif
 
+	/* this will put all unused low memory onto the freelists */
+	memblock_free_all();
+
 	free_highpages();
 
 	/*
From patchwork Thu Mar 6 18:51:12 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005177
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH 02/13] csky: move setup_initrd() to setup.c
Date: Thu, 6 Mar 2025 20:51:12 +0200
Message-ID: <20250306185124.3147510-3-rppt@kernel.org>
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Memory used by the initrd should be reserved as soon as possible, before
there are any memblock allocations that might overwrite that memory.

This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/csky/kernel/setup.c | 43 ++++++++++++++++++++++++++++++++++++++++
 arch/csky/mm/init.c      | 43 ----------------------------------------
 2 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index fe715b707fd0..e0d6ca86ea8c 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -12,6 +12,45 @@
 #include
 #include
 
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	unsigned long size;
+
+	if (initrd_start >= initrd_end) {
+		pr_err("initrd not found or empty");
+		goto disable;
+	}
+
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		pr_err("initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size = initrd_end - initrd_start;
+
+	if (memblock_is_region_reserved(__pa(initrd_start), size)) {
+		pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region",
+		       __pa(initrd_start), size);
+		goto disable;
+	}
+
+	memblock_reserve(__pa(initrd_start), size);
+
+	pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+
+	initrd_below_start_ok = 1;
+
+	return;
+
+disable:
+	initrd_start = initrd_end = 0;
+
+	pr_err(" - disabling initrd\n");
+}
+#endif
+
 static void __init csky_memblock_init(void)
 {
 	unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
@@ -40,6 +79,10 @@ static void __init csky_memblock_init(void)
 		max_low_pfn = min_low_pfn + sseg_size;
 	}
 
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif
+
 	max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
 
 	mmu_init(min_low_pfn, max_low_pfn);
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bde7cabd23df..ab51acbc19b2 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -42,45 +42,6 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 	__page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
 
-#ifdef CONFIG_BLK_DEV_INITRD
-static void __init setup_initrd(void)
-{
-	unsigned long size;
-
-	if (initrd_start >= initrd_end) {
-		pr_err("initrd not found or empty");
-		goto disable;
-	}
-
-	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
-		pr_err("initrd extends beyond end of memory");
-		goto disable;
-	}
-
-	size = initrd_end - initrd_start;
-
-	if (memblock_is_region_reserved(__pa(initrd_start), size)) {
-		pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region",
-		       __pa(initrd_start), size);
-		goto disable;
-	}
-
-	memblock_reserve(__pa(initrd_start), size);
-
-	pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
-		(void *)(initrd_start), size);
-
-	initrd_below_start_ok = 1;
-
-	return;
-
-disable:
-	initrd_start = initrd_end = 0;
-
-	pr_err(" - disabling initrd\n");
-}
-#endif
-
 void __init mem_init(void)
 {
 #ifdef CONFIG_HIGHMEM
@@ -92,10 +53,6 @@ void __init mem_init(void)
 #endif
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
 
-#ifdef CONFIG_BLK_DEV_INITRD
-	setup_initrd();
-#endif
-
 	memblock_free_all();
 
 #ifdef CONFIG_HIGHMEM
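
A short sketch of the call ordering the changelog argues for (illustrative
only, condensed from the hunks above):

	static void __init csky_memblock_init(void)
	{
		/* ... memory added to memblock, kernel image reserved ... */
	#ifdef CONFIG_BLK_DEV_INITRD
		setup_initrd();	/* memblock_reserve() the initrd range early */
	#endif
		/*
		 * Any later memblock_alloc() can no longer hand out, and
		 * therefore overwrite, the initrd pages.
		 */
	}
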
From patchwork Thu Mar 6 18:51:13 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005178
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH 03/13] hexagon: move initialization of init_mm.context init to paging_init()
Date: Thu, 6 Mar 2025 20:51:13 +0200
Message-ID: <20250306185124.3147510-4-rppt@kernel.org>
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

This will help with pulling out memblock_free_all() to the generic code
and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/hexagon/mm/init.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index 3458f39ca2ac..508bb6a8dcc9 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -59,14 +59,6 @@ void __init mem_init(void)
 	 *  To-Do:  someone somewhere should wipe out the bootmem map
 	 *  after we're done?
 	 */
-
-	/*
-	 * This can be moved to some more virtual-memory-specific
-	 * initialization hook at some point.  Set the init_mm
-	 * descriptors "context" value to point to the initial
-	 * kernel segment table's physical address.
-	 */
-	init_mm.context.ptbase = __pa(init_mm.pgd);
 }
 
 void sync_icache_dcache(pte_t pte)
@@ -103,6 +95,12 @@ static void __init paging_init(void)
 
 	free_area_init(max_zone_pfn);  /*  sets up the zonelists and mem_map  */
 
+	/*
+	 * Set the init_mm descriptors "context" value to point to the
+	 * initial kernel segment table's physical address.
+	 */
+	init_mm.context.ptbase = __pa(init_mm.pgd);
+
 	/*
 	 *  Start of high memory area.  Will probably need something more
 	 *  fancy if we...  get more fancy.
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 04/13] MIPS: consolidate mem_init() for NUMA machines Date: Thu, 6 Mar 2025 20:51:14 +0200 Message-ID: <20250306185124.3147510-5-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" Both MIPS systems that support numa (loongsoon3 and sgi-ip27) have identical mem_init() for NUMA case. Move that into arch/mips/mm/init.c and drop duplicate per-machine definitions. Signed-off-by: Mike Rapoport (Microsoft) --- arch/mips/loongson64/numa.c | 7 ------- arch/mips/mm/init.c | 7 +++++++ arch/mips/sgi-ip27/ip27-memory.c | 9 --------- 3 files changed, 7 insertions(+), 16 deletions(-) diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c index 8388400d052f..95d5f553ce19 100644 --- a/arch/mips/loongson64/numa.c +++ b/arch/mips/loongson64/numa.c @@ -164,13 +164,6 @@ void __init paging_init(void) free_area_init(zones_size); } -void __init mem_init(void) -{ - high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); - memblock_free_all(); - setup_zero_pages(); /* This comes from node 0 */ -} - /* All PCI device belongs to logical Node-0 */ int pcibus_to_node(struct pci_bus *bus) { diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 4583d1a2a73e..3db6082c611e 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -482,6 +482,13 @@ void __init mem_init(void) 0x80000000 - 4, KCORE_TEXT); #endif } +#else /* CONFIG_NUMA */ +void __init mem_init(void) +{ + high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); + memblock_free_all(); + setup_zero_pages(); /* This comes from node 0 */ +} #endif /* !CONFIG_NUMA */ void free_init_pages(const char *what, unsigned long begin, unsigned long end) diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c index 1963313f55d8..2b3e46e2e607 100644 --- a/arch/mips/sgi-ip27/ip27-memory.c +++ b/arch/mips/sgi-ip27/ip27-memory.c @@ -406,8 +406,6 @@ void __init prom_meminit(void) } } -extern void setup_zero_pages(void); - void __init paging_init(void) { unsigned long zones_size[MAX_NR_ZONES] = {0, }; @@ -416,10 +414,3 @@ void __init paging_init(void) zones_size[ZONE_NORMAL] = max_low_pfn; free_area_init(zones_size); } - -void __init mem_init(void) -{ - high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); 
- memblock_free_all(); - setup_zero_pages(); /* This comes from node 0 */ -} From patchwork Thu Mar 6 18:51:15 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005180 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 65A2C26B955; Thu, 6 Mar 2025 18:52:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287171; cv=none; b=UhS5xYJq1SV3OChj+bhw78BRtW5E1m51Ni752PZ//G1ck26LMlqD4N8iHAIs6o8+oRUSzxQrl/kmxb7Yc6obYk2cDsgfVRUNx0VjOD1VDhwjIEsV0WB0VPGX3/M3Iu+GUNGqpMViecTPXGVjFVDiljWewEKYw6yZH3puvegMg5s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287171; c=relaxed/simple; bh=zHkXxvTCym0KhjWbRn9ukBtTCkXKX0nK77Ga6+2Q1YQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=aRMvkHKPV+el0CS5QhJfwofvmomOGfUyjuNU3lTfxICWqMBOkqTea5os9kVcuXFW7/WI/GYwMrHwuJ7f0AeVa3VaA8LUvVMtj8ttwnPhAq4UScXjP1ccFu3hM/aTpu0XESG9lM4lV9xZIVVCR98zhxWkWDQmSGoDftucoBLQ8Qg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=sU7/prHN; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="sU7/prHN" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3777BC4CEE4; Thu, 6 Mar 2025 18:52:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287170; bh=zHkXxvTCym0KhjWbRn9ukBtTCkXKX0nK77Ga6+2Q1YQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sU7/prHNQYg50lA7Hgqj0K1JV1oudrxegE/7Yx9axrVAFFToyFBUGUG74ycwlXmZO aq0XBTYCBQmpUJCkwEU/Dntqz+qE3P71BesMhHmp+sBEjPmFo7vSnZ1AiTchdUe70i DuW4Y+bWiYoNGCgn5z3IrutQmgNDmAGNn+R5Q7kZGJZMIg21W1EcBEi7sD7/iGD9UL Ke2xqsg+DakqZUrIVCcLXoVSSNTU6Qc9K22RHZSnsQwI+wRQEE/KhepO8LXgXyv5ig o7R10Dqr4mttjq4pi8tkEwtM4iS1U3Oolo7BJkl6KzqCiWI4vyw/wcu4aZDHmjHKeg JTD2x/pwRnwNg== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. 
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 05/13] MIPS: make setup_zero_pages() use memblock Date: Thu, 6 Mar 2025 20:51:15 +0200 Message-ID: <20250306185124.3147510-6-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" Allocating the zero pages from memblock is simpler because the memory is already reserved. This will also help with pulling out memblock_free_all() to the generic code and reducing code duplication in arch::mem_init(). Signed-off-by: Mike Rapoport (Microsoft) --- arch/mips/include/asm/mmzone.h | 2 -- arch/mips/mm/init.c | 16 +++++----------- 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/arch/mips/include/asm/mmzone.h b/arch/mips/include/asm/mmzone.h index 14226ea42036..602a21aee9d4 100644 --- a/arch/mips/include/asm/mmzone.h +++ b/arch/mips/include/asm/mmzone.h @@ -20,6 +20,4 @@ #define nid_to_addrbase(nid) 0 #endif -extern void setup_zero_pages(void); - #endif /* _ASM_MMZONE_H_ */ diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 3db6082c611e..f51cd97376df 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -59,25 +59,19 @@ EXPORT_SYMBOL(zero_page_mask); /* * Not static inline because used by IP27 special magic initialization code */ -void setup_zero_pages(void) +static void __init setup_zero_pages(void) { - unsigned int order, i; - struct page *page; + unsigned int order; if (cpu_has_vce) order = 3; else order = 0; - empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order); + empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE << order, PAGE_SIZE); if (!empty_zero_page) panic("Oh boy, that early out of memory?"); - page = virt_to_page((void *)empty_zero_page); - split_page(page, order); - for (i = 0; i < (1 << order); i++, page++) - mark_page_reserved(page); - zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK; } @@ -470,9 +464,9 @@ void __init mem_init(void) BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT)); maar_init(); - memblock_free_all(); setup_zero_pages(); /* Setup zeroed pages. 
*/ mem_init_free_highmem(); + memblock_free_all(); #ifdef CONFIG_64BIT if ((unsigned long) &_text > (unsigned long) CKSEG0) @@ -486,8 +480,8 @@ void __init mem_init(void) void __init mem_init(void) { high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); - memblock_free_all(); setup_zero_pages(); /* This comes from node 0 */ + memblock_free_all(); } #endif /* !CONFIG_NUMA */ From patchwork Thu Mar 6 18:51:16 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005181 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7FD7D25C702; Thu, 6 Mar 2025 18:53:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287184; cv=none; b=VEfyx1Drs9iBO1If8HTg9BT+oCv4dmDGCYtCIO7J5A71zEfasaZc5b1Zc8YhUhzHkgEwPz5cD7VZBnFryvTqJLp/qHXxysyEixSGTWiNRAHhoBZ6je/LoHQTT1kRoA1l5XFi7H+Xi6DuYkjdxaM8flfKWC81Px718QtkF4NMsd0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287184; c=relaxed/simple; bh=uD5faVqJ7G6OTY8H5Svd8uEX7wWQ143lrl+QSf2xljc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Bc1DrsWZyMqQWy5d/cHs+z5ApK9U0zke7QI5jHo/Dj3ZHhapmEVyHD7WGmEH8+lBIzEqHbTCS65RMQ6DOS1neLONPk9fr/+GVNB67NXNQ7X54t0L0O33Wda56VCHRlG66QflXMQ584Cu6Wyui1+Pix9l0NeaWpSCvZkHdOWu7Is= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Giroq7hT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Giroq7hT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5BFD1C4CEEB; Thu, 6 Mar 2025 18:52:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287184; bh=uD5faVqJ7G6OTY8H5Svd8uEX7wWQ143lrl+QSf2xljc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Giroq7hT2g3vsvEZ2g+QDA6pleXlCgJ831y/Fi3Wjhg80Qvyzg+4WeJi56qeahcuZ BnB/BJAkrOWMctvhyQiwDEOKAODheMIY8JlcVpXK5m/bif8P0mWEql2C7GcEAP8YDq y5I3KMdkU9l7AIEqGFvXGvfoPMAU64jJ7mxxyGtY4aMTIFjIzONPRhlRs09lt/nZQC Dfy3fEyUNo74jl61ggLHzXPiXbsdY95YwK3QPmGVyWMPsISrjHbonWwydEU9fwsHjj Cokp3+w/v07caKL3v7HrEl6YuLcjwgCcPlwVGGWzrWhcdLmjUmXtBEydZY3rL4XdE2 sLgtFWbfjejNg== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. 
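
For context, a condensed before/after of the allocation scheme (illustrative
sketch, not the full diff); note that memblock_alloc() returns zeroed memory,
so there is no equivalent of __GFP_ZERO to pass:

	/* before: take pages from the buddy allocator, then re-reserve them */
	empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
	split_page(virt_to_page((void *)empty_zero_page), order);
	/* ... mark_page_reserved() on each of the 1 << order pages ... */

	/*
	 * after: memblock memory allocated before memblock_free_all() is
	 * never handed to the buddy allocator, so it stays reserved and is
	 * already zero-filled
	 */
	empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE << order,
							PAGE_SIZE);
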
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 06/13] nios2: move pr_debug() about memory start and end to setup_arch() Date: Thu, 6 Mar 2025 20:51:16 +0200 Message-ID: <20250306185124.3147510-7-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" This will help with pulling out memblock_free_all() to the generic code and reducing code duplication in arch::mem_init(). Signed-off-by: Mike Rapoport (Microsoft) --- arch/nios2/kernel/setup.c | 2 ++ arch/nios2/mm/init.c | 2 -- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/nios2/kernel/setup.c b/arch/nios2/kernel/setup.c index da122a5fa43b..a4cffbfc1399 100644 --- a/arch/nios2/kernel/setup.c +++ b/arch/nios2/kernel/setup.c @@ -149,6 +149,8 @@ void __init setup_arch(char **cmdline_p) memory_start = memblock_start_of_DRAM(); memory_end = memblock_end_of_DRAM(); + pr_debug("%s: start=%lx, end=%lx\n", __func__, memory_start, memory_end); + setup_initial_init_mm(_stext, _etext, _edata, _end); init_task.thread.kregs = &fake_regs; diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c index a2278485de19..aa692ad30044 100644 --- a/arch/nios2/mm/init.c +++ b/arch/nios2/mm/init.c @@ -65,8 +65,6 @@ void __init mem_init(void) unsigned long end_mem = memory_end; /* this must not include kernel stack at top */ - pr_debug("mem_init: start=%lx, end=%lx\n", memory_start, memory_end); - end_mem &= PAGE_MASK; high_memory = __va(end_mem); From patchwork Thu Mar 6 18:51:17 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005182 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8B50D26D5B8; Thu, 6 Mar 2025 18:53:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287199; cv=none; b=n+Vp+u9o068wRW/L2ptf74LJIfEq3Wr6cRoMHTzvkvnFSMNUbKxZ1yxQ4W0E2PFyFlXFOCO4d+AGOBLuUHTz6tfgr4ZE3A25agivwFv2P+ASCVp6mu9cRXvo0a/qmHxZdQgepImcsQ+e+t7nomknClGjl34UkccCDKfcRhL3l1s= 
From patchwork Thu Mar 6 18:51:17 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005182
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH 07/13] s390: make setup_zero_pages() use memblock
Date: Thu, 6 Mar 2025 20:51:17 +0200
Message-ID: <20250306185124.3147510-8-rppt@kernel.org>
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Allocating the zero pages from memblock is simpler because the memory is
already reserved.

This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/s390/mm/init.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index f2298f7a3f21..020aa2f78d01 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -73,8 +73,6 @@ static void __init setup_zero_pages(void)
 {
 	unsigned long total_pages = memblock_estimated_nr_free_pages();
 	unsigned int order;
-	struct page *page;
-	int i;
 
 	/* Latest machines require a mapping granularity of 512KB */
 	order = 7;
@@ -83,17 +81,10 @@ static void __init setup_zero_pages(void)
 	while (order > 2 && (total_pages >> 10) < (1UL << order))
 		order--;
 
-	empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+	empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE << order, order);
 	if (!empty_zero_page)
 		panic("Out of memory in setup_zero_pages");
 
-	page = virt_to_page((void *) empty_zero_page);
-	split_page(page, order);
-	for (i = 1 << order; i > 0; i--) {
-		mark_page_reserved(page);
-		page++;
-	}
-
 	zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
 }
 
@@ -176,9 +167,10 @@ void __init mem_init(void)
 	pv_init();
 	kfence_split_mapping();
 
+	setup_zero_pages();	/* Setup zeroed pages. */
+
 	/* this will put all low memory onto the freelists */
 	memblock_free_all();
-	setup_zero_pages();	/* Setup zeroed pages. */
 }
 
 unsigned long memory_block_size_bytes(void)
From patchwork Thu Mar 6 18:51:18 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005183
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH 08/13] xtensa: split out printing of virtual memory layout to a function
Date: Thu, 6 Mar 2025 20:51:18 +0200
Message-ID: <20250306185124.3147510-9-rppt@kernel.org>
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

This will help with pulling out memblock_free_all() to the generic code
and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/xtensa/mm/init.c | 97 ++++++++++++++++++++++---------------------
 1 file changed, 50 insertions(+), 47 deletions(-)

diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index b2587a1a7c46..01577d33e602 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -66,6 +66,55 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
+static void __init print_vm_layout(void)
+{
+	pr_info("virtual kernel memory layout:\n"
+#ifdef CONFIG_KASAN
+		"    kasan   : 0x%08lx - 0x%08lx  (%5lu MB)\n"
+#endif
+#ifdef CONFIG_MMU
+		"    vmalloc : 0x%08lx - 0x%08lx  (%5lu MB)\n"
+#endif
+#ifdef CONFIG_HIGHMEM
+		"    pkmap   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+		"    fixmap  : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+#endif
+		"    lowmem  : 0x%08lx - 0x%08lx  (%5lu MB)\n"
+		"    .text   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+		"    .rodata : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+		"    .data   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+		"    .init   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
+		"    .bss    : 0x%08lx - 0x%08lx  (%5lu kB)\n",
+#ifdef CONFIG_KASAN
+		KASAN_SHADOW_START, KASAN_SHADOW_START + KASAN_SHADOW_SIZE,
+		KASAN_SHADOW_SIZE >> 20,
+#endif
+#ifdef CONFIG_MMU
+		VMALLOC_START, VMALLOC_END,
+		(VMALLOC_END - VMALLOC_START) >> 20,
+#ifdef CONFIG_HIGHMEM
+		PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE,
+		(LAST_PKMAP*PAGE_SIZE) >> 10,
+		FIXADDR_START, FIXADDR_END,
+		(FIXADDR_END - FIXADDR_START) >> 10,
+#endif
+		PAGE_OFFSET, PAGE_OFFSET +
+		(max_low_pfn - min_low_pfn) * PAGE_SIZE,
+#else
+		min_low_pfn * PAGE_SIZE, max_low_pfn * PAGE_SIZE,
+#endif
+		((max_low_pfn - min_low_pfn) * PAGE_SIZE) >> 20,
+		(unsigned long)_text, (unsigned long)_etext,
+		(unsigned long)(_etext - _text) >> 10,
+		(unsigned long)__start_rodata, (unsigned long)__end_rodata,
+		(unsigned long)(__end_rodata - __start_rodata) >> 10,
+		(unsigned long)_sdata, (unsigned long)_edata,
+		(unsigned long)(_edata - _sdata) >> 10,
+		(unsigned long)__init_begin, (unsigned long)__init_end,
+		(unsigned long)(__init_end - __init_begin) >> 10,
+		(unsigned long)__bss_start, (unsigned long)__bss_stop,
+		(unsigned long)(__bss_stop - __bss_start) >> 10);
+}
+
 void __init zones_init(void)
 {
@@ -77,6 +126,7 @@ void __init zones_init(void)
 #endif
 	};
 	free_area_init(max_zone_pfn);
+	print_vm_layout();
 }
 
 static void __init free_highpages(void)
@@ -118,53 +168,6 @@ void __init mem_init(void)
 	high_memory = (void *)__va(max_low_pfn << PAGE_SHIFT);
 
 	memblock_free_all();
-
-	pr_info("virtual kernel memory layout:\n"
-#ifdef CONFIG_KASAN
-		"    kasan   : 0x%08lx - 0x%08lx  (%5lu MB)\n"
-#endif
-#ifdef CONFIG_MMU
-		"    vmalloc : 0x%08lx - 0x%08lx  (%5lu MB)\n"
-#endif
-#ifdef CONFIG_HIGHMEM
-		"    pkmap   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-		"    fixmap  : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-#endif
-		"    lowmem  : 0x%08lx - 0x%08lx  (%5lu MB)\n"
-		"    .text   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-		"    .rodata : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-		"    .data   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-		"    .init   : 0x%08lx - 0x%08lx  (%5lu kB)\n"
-		"    .bss    : 0x%08lx - 0x%08lx  (%5lu kB)\n",
-#ifdef CONFIG_KASAN
-		KASAN_SHADOW_START, KASAN_SHADOW_START + KASAN_SHADOW_SIZE,
-		KASAN_SHADOW_SIZE >> 20,
-#endif
-#ifdef CONFIG_MMU
-		VMALLOC_START, VMALLOC_END,
-		(VMALLOC_END - VMALLOC_START) >> 20,
-#ifdef CONFIG_HIGHMEM
-		PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE,
-		(LAST_PKMAP*PAGE_SIZE) >> 10,
-		FIXADDR_START, FIXADDR_END,
-		(FIXADDR_END - FIXADDR_START) >> 10,
-#endif
-		PAGE_OFFSET, PAGE_OFFSET +
-		(max_low_pfn - min_low_pfn) * PAGE_SIZE,
-#else
-		min_low_pfn * PAGE_SIZE, max_low_pfn * PAGE_SIZE,
-#endif
-		((max_low_pfn - min_low_pfn) * PAGE_SIZE) >> 20,
-		(unsigned long)_text, (unsigned long)_etext,
-		(unsigned long)(_etext - _text) >> 10,
-		(unsigned long)__start_rodata, (unsigned long)__end_rodata,
-		(unsigned long)(__end_rodata - __start_rodata) >> 10,
-		(unsigned long)_sdata, (unsigned long)_edata,
-		(unsigned long)(_edata - _sdata) >> 10,
-		(unsigned long)__init_begin, (unsigned long)__init_end,
-		(unsigned long)(__init_end - __init_begin) >> 10,
-		(unsigned long)__bss_start, (unsigned long)__bss_stop,
-		(unsigned long)(__bss_stop - __bss_start) >> 10);
 }
 
 static void __init parse_memmap_one(char *p)
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 09/13] arch, mm: set max_mapnr when allocating memory map for FLATMEM Date: Thu, 6 Mar 2025 20:51:19 +0200 Message-ID: <20250306185124.3147510-10-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" max_mapnr is essentially the size of the memory map for systems that use FLATMEM. There is no reason to calculate it in each and every architecture when it's anyway calculated in alloc_node_mem_map(). Drop setting of max_mapnr from architecture code and set it once in alloc_node_mem_map(). While on it, move definition of mem_map and max_mapnr to mm/mm_init.c so there won't be two copies for MMU and !MMU variants. 
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/alpha/mm/init.c               |  1 -
 arch/arc/mm/init.c                 |  5 -----
 arch/arm/mm/init.c                 |  2 --
 arch/csky/mm/init.c                |  4 ----
 arch/loongarch/mm/init.c           |  1 -
 arch/microblaze/mm/init.c          |  4 ----
 arch/mips/mm/init.c                |  8 --------
 arch/nios2/kernel/setup.c          |  1 -
 arch/nios2/mm/init.c               |  2 +-
 arch/openrisc/mm/init.c            |  1 -
 arch/parisc/mm/init.c              |  1 -
 arch/powerpc/kernel/setup-common.c |  2 --
 arch/riscv/mm/init.c               |  1 -
 arch/s390/mm/init.c                |  1 -
 arch/sh/mm/init.c                  |  1 -
 arch/sparc/mm/init_32.c            |  1 -
 arch/um/include/shared/mem_user.h  |  1 -
 arch/um/kernel/physmem.c           | 12 ------------
 arch/um/kernel/um_arch.c           |  1 -
 arch/x86/mm/init_32.c              |  3 ---
 arch/xtensa/mm/init.c              |  1 -
 include/asm-generic/memory_model.h |  5 +++--
 include/linux/mm.h                 | 11 -----------
 mm/memory.c                        |  8 --------
 mm/mm_init.c                       | 25 +++++++++++++++++--------
 mm/nommu.c                         |  4 ----
 26 files changed, 21 insertions(+), 86 deletions(-)

diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index 61c2198b1359..ec0eeae9c653 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -276,7 +276,6 @@ srm_paging_stop (void)
 void __init
 mem_init(void)
 {
-	set_max_mapnr(max_low_pfn);
 	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
 	memblock_free_all();
 }
diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
index 6a71b23f1383..7ef883d58dc1 100644
--- a/arch/arc/mm/init.c
+++ b/arch/arc/mm/init.c
@@ -154,11 +154,6 @@ void __init setup_arch_memory(void)
 	arch_pfn_offset = min(min_low_pfn, min_high_pfn);
 
 	kmap_init();
-
-#else /* CONFIG_HIGHMEM */
-	/* pfn_valid() uses this when FLATMEM=y and HIGHMEM=n */
-	max_mapnr = max_low_pfn - min_low_pfn;
-
 #endif /* CONFIG_HIGHMEM */
 
 	free_area_init(max_zone_pfn);
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 9aec1cb2386f..d4bcc745a044 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -275,8 +275,6 @@ void __init mem_init(void)
 	swiotlb_init(max_pfn > arm_dma_pfn_limit, SWIOTLB_VERBOSE);
 #endif
 
-	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
-
 #ifdef CONFIG_SA1111
 	/* now that our DMA memory is actually so designated, we can free it */
 	memblock_phys_free(PHYS_OFFSET, __pa(swapper_pg_dir) - PHYS_OFFSET);
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index ab51acbc19b2..ba6694d6170a 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -46,10 +46,6 @@ void __init mem_init(void)
 {
 #ifdef CONFIG_HIGHMEM
 	unsigned long tmp;
-
-	set_max_mapnr(highend_pfn - ARCH_PFN_OFFSET);
-#else
-	set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
 #endif
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
 
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index ca5aa5f46a9f..00449df50db1 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -78,7 +78,6 @@ void __init paging_init(void)
 
 void __init mem_init(void)
 {
-	max_mapnr = max_low_pfn;
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
 
 	memblock_free_all();
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 4520c5741579..857cd2b44bcf 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -104,17 +104,13 @@ void __init setup_memory(void)
 	 *
 	 * min_low_pfn - the first page (mm/bootmem.c - node_boot_start)
 	 * max_low_pfn
-	 * max_mapnr - the first unused page (mm/bootmem.c - node_low_pfn)
 	 */
 
 	/* memory start is from the kernel end (aligned) to higher addr */
 	min_low_pfn = memory_start >> PAGE_SHIFT; /* minimum for allocation */
-	/* RAM is assumed contiguous */
-	max_mapnr = memory_size >> PAGE_SHIFT;
 	max_low_pfn = ((u64)memory_start + (u64)lowmem_size) >> PAGE_SHIFT;
 	max_pfn = ((u64)memory_start + (u64)memory_size) >> PAGE_SHIFT;
 
-	pr_info("%s: max_mapnr: %#lx\n", __func__, max_mapnr);
 	pr_info("%s: min_low_pfn: %#lx\n", __func__, min_low_pfn);
 	pr_info("%s: max_low_pfn: %#lx\n", __func__, max_low_pfn);
 	pr_info("%s: max_pfn: %#lx\n", __func__, max_pfn);
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index f51cd97376df..338b3c9fc5bc 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -417,15 +417,7 @@ void __init paging_init(void)
 			" %ldk highmem ignored\n",
 			(highend_pfn - max_low_pfn) << (PAGE_SHIFT - 10));
 		max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn;
-
-		max_mapnr = max_low_pfn;
-	} else if (highend_pfn) {
-		max_mapnr = highend_pfn;
-	} else {
-		max_mapnr = max_low_pfn;
 	}
-#else
-	max_mapnr = max_low_pfn;
 #endif
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
diff --git a/arch/nios2/kernel/setup.c b/arch/nios2/kernel/setup.c
index a4cffbfc1399..2a40150142c3 100644
--- a/arch/nios2/kernel/setup.c
+++ b/arch/nios2/kernel/setup.c
@@ -158,7 +158,6 @@ void __init setup_arch(char **cmdline_p)
 	*cmdline_p = boot_command_line;
 
 	find_limits(&min_low_pfn, &max_low_pfn, &max_pfn);
-	max_mapnr = max_low_pfn;
 
 	memblock_reserve(__pa_symbol(_stext), _end - _stext);
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index aa692ad30044..3cafa87ead9e 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -51,7 +51,7 @@ void __init paging_init(void)
 	pagetable_init();
 	pgd_current = swapper_pg_dir;
 
-	max_zone_pfn[ZONE_NORMAL] = max_mapnr;
+	max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
 
 	/* pass the memory from the bootmem allocator to the main allocator */
 	free_area_init(max_zone_pfn);
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index d0cb1a0126f9..9093c336e158 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -193,7 +193,6 @@ void __init mem_init(void)
 {
 	BUG_ON(!mem_map);
 
-	max_mapnr = max_low_pfn;
 	high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
 
 	/* clear the zero-page */
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 61c0a2477072..2cdfc0b1195c 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -563,7 +563,6 @@ void __init mem_init(void)
 #endif
 
 	high_memory = __va((max_pfn << PAGE_SHIFT));
-	set_max_mapnr(max_low_pfn);
 	memblock_free_all();
 
 #ifdef CONFIG_PA11
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index a08b0ede4e64..68d47c53876c 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -957,8 +957,6 @@ void __init setup_arch(char **cmdline_p)
 	/* Parse memory topology */
 	mem_topology_setup();
 
-	/* Set max_mapnr before paging_init() */
-	set_max_mapnr(max_pfn);
 	high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
 
 	/*
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 15b2eda4c364..157c9ca51541 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -298,7 +298,6 @@ static void __init setup_bootmem(void)
 
 	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
 	dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
-	set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
 
 	reserve_initrd_mem();
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 020aa2f78d01..7e64243693c9 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -161,7 +161,6 @@ void __init mem_init(void)
 	cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask);
 	cpumask_set_cpu(0, mm_cpumask(&init_mm));
 
-	set_max_mapnr(max_low_pfn);
 	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
 
 	pv_init();
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 289a2fecebef..72aea5cd1b85 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -290,7 +290,6 @@ void __init paging_init(void)
 	 */
 	max_low_pfn = max_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
 	min_low_pfn = __MEMORY_START >> PAGE_SHIFT;
-	set_max_mapnr(max_low_pfn - min_low_pfn);
 
 	nodes_clear(node_online_map);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index d96a14ffceeb..6b58da14edc6 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -275,7 +275,6 @@ void __init mem_init(void)
 
 	taint_real_pages();
 
-	max_mapnr = last_valid_pfn - pfn_base;
 	high_memory = __va(max_low_pfn << PAGE_SHIFT);
 
 	memblock_free_all();
diff --git a/arch/um/include/shared/mem_user.h b/arch/um/include/shared/mem_user.h
index adfa08062f88..d4727efcf23d 100644
--- a/arch/um/include/shared/mem_user.h
+++ b/arch/um/include/shared/mem_user.h
@@ -47,7 +47,6 @@ extern int iomem_size;
 #define ROUND_4M(n) ((((unsigned long) (n)) + (1 << 22)) & ~((1 << 22) - 1))
 
 extern unsigned long find_iomem(char *driver, unsigned long *len_out);
-extern void mem_total_pages(unsigned long physmem, unsigned long iomem);
 extern void setup_physmem(unsigned long start, unsigned long usable,
 			  unsigned long len);
 extern void map_memory(unsigned long virt, unsigned long phys,
diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c
index a74f17b033c4..af02b5f9911d 100644
--- a/arch/um/kernel/physmem.c
+++ b/arch/um/kernel/physmem.c
@@ -22,18 +22,6 @@ static int physmem_fd = -1;
 unsigned long high_physmem;
 EXPORT_SYMBOL(high_physmem);
 
-void __init mem_total_pages(unsigned long physmem, unsigned long iomem)
-{
-	unsigned long phys_pages, iomem_pages, total_pages;
-
-	phys_pages = physmem >> PAGE_SHIFT;
-	iomem_pages = iomem >> PAGE_SHIFT;
-
-	total_pages = phys_pages + iomem_pages;
-
-	max_mapnr = total_pages;
-}
-
 void map_memory(unsigned long virt, unsigned long phys, unsigned long len,
 		int r, int w, int x)
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 79ea97d4797e..6414cbf00572 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -419,7 +419,6 @@ void __init setup_arch(char **cmdline_p)
 
 	stack_protections((unsigned long) init_task.stack);
 	setup_physmem(uml_physmem, uml_reserved, physmem_size);
-	mem_total_pages(physmem_size, iomem_size);
 	uml_dtb_init();
 	read_initrd();
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index ac41b1e0940d..6d2f8cb9451e 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -650,9 +650,6 @@ void __init initmem_init(void)
 
 	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
 
-#ifdef CONFIG_FLATMEM
-	max_mapnr = IS_ENABLED(CONFIG_HIGHMEM) ? highend_pfn : max_low_pfn;
-#endif
 	__vmalloc_start_set = true;
 
 	printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 01577d33e602..9f1b0d5fccc7 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -164,7 +164,6 @@ void __init mem_init(void)
 {
 	free_highpages();
 
-	max_mapnr = max_pfn - ARCH_PFN_OFFSET;
 	high_memory = (void *)__va(max_low_pfn << PAGE_SHIFT);
 
 	memblock_free_all();
diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index 6d1fb6162ac1..a3b5029aebbd 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -19,11 +19,12 @@
 #define __page_to_pfn(page)	((unsigned long)((page) - mem_map) + \
 				 ARCH_PFN_OFFSET)
 
+/* avoid include hell */
+extern unsigned long max_mapnr;
+
 #ifndef pfn_valid
 static inline int pfn_valid(unsigned long pfn)
 {
-	/* avoid include hell */
-	extern unsigned long max_mapnr;
 	unsigned long pfn_offset = ARCH_PFN_OFFSET;
 
 	return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b1068ddcbb7..fdf20503850e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -45,17 +45,6 @@ extern int sysctl_page_lock_unfairness;
 void mm_core_init(void);
 void init_mm_internals(void);
 
-#ifndef CONFIG_NUMA		/* Don't use mapnrs, do it properly */
-extern unsigned long max_mapnr;
-
-static inline void set_max_mapnr(unsigned long limit)
-{
-	max_mapnr = limit;
-}
-#else
-static inline void set_max_mapnr(unsigned long limit) { }
-#endif
-
 extern atomic_long_t _totalram_pages;
 static inline unsigned long totalram_pages(void)
 {
diff --git a/mm/memory.c b/mm/memory.c
index b4d3d4893267..126fdd3001e3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -95,14 +95,6 @@
 #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
 #endif
 
-#ifndef CONFIG_NUMA
-unsigned long max_mapnr;
-EXPORT_SYMBOL(max_mapnr);
-
-struct page *mem_map;
-EXPORT_SYMBOL(mem_map);
-#endif
-
 static vm_fault_t do_fault(struct vm_fault *vmf);
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf);
 static bool vmf_pte_changed(struct vm_fault *vmf);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2630cc30147e..50a93714e1c6 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -36,6 +36,14 @@
 
 #include
 
+#ifndef CONFIG_NUMA
+unsigned long max_mapnr;
+EXPORT_SYMBOL(max_mapnr);
+
+struct page *mem_map;
+EXPORT_SYMBOL(mem_map);
+#endif
+
 #ifdef CONFIG_DEBUG_MEMORY_INIT
 int __meminitdata mminit_loglevel;
 
@@ -1617,7 +1625,7 @@ static void __init alloc_node_mem_map(struct pglist_data *pgdat)
 	start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
 	offset = pgdat->node_start_pfn - start;
 	/*
-	 * The zone's endpoints aren't required to be MAX_PAGE_ORDER
+	 * The zone's endpoints aren't required to be MAX_PAGE_ORDER
 	 * aligned but the node_mem_map endpoints must be in order
 	 * for the buddy allocator to function correctly.
*/ @@ -1633,14 +1641,15 @@ static void __init alloc_node_mem_map(struct pglist_data *pgdat) pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n", __func__, pgdat->node_id, (unsigned long)pgdat, (unsigned long)pgdat->node_mem_map); -#ifndef CONFIG_NUMA + /* the global mem_map is just set as node 0's */ - if (pgdat == NODE_DATA(0)) { - mem_map = NODE_DATA(0)->node_mem_map; - if (page_to_pfn(mem_map) != pgdat->node_start_pfn) - mem_map -= offset; - } -#endif + WARN_ON(pgdat != NODE_DATA(0)); + + mem_map = pgdat->node_mem_map; + if (page_to_pfn(mem_map) != pgdat->node_start_pfn) + mem_map -= offset; + + max_mapnr = end - start; } #else static inline void alloc_node_mem_map(struct pglist_data *pgdat) { } diff --git a/mm/nommu.c b/mm/nommu.c index baa79abdaf03..f0209dd26dfa 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -44,16 +44,12 @@ void *high_memory; EXPORT_SYMBOL(high_memory); -struct page *mem_map; -unsigned long max_mapnr; -EXPORT_SYMBOL(max_mapnr); unsigned long highest_memmap_pfn; int sysctl_nr_trim_pages = CONFIG_NOMMU_INITIAL_TRIM_EXCESS; int heap_stack_gap = 0; atomic_long_t mmap_pages_allocated; -EXPORT_SYMBOL(mem_map); /* list of mapped, potentially shareable regions */ static struct kmem_cache *vm_region_jar; From patchwork Thu Mar 6 18:51:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005185 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BB1CE26FDA6; Thu, 6 Mar 2025 18:53:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287236; cv=none; b=SjEjl4CZrLAOYQvs/lpm1vWrWti22F09E/jid1Pyir0NrQApHWsL/5A2hj+rSowPOU+KcV6EeQgqhYAemW6qw/szyuH7AxxiX6Q5lpzac7XyVIIaxIQyXdjURfjB6Tbzx2U1yLO+HfTR6oPk42ABZJHaaZqdgY0v7iHF6oB3u8A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287236; c=relaxed/simple; bh=N82KtXPTFRyks1M44cPPRNwY3qzMhZjfhHPDkzfY8rc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hQsiy5mLa1QIPy4EeiHBmSFRsO7b/jowLQ28pGLsmoPYOOPKpHEbDSTSnBXCrY6bWJjJFGqaYks4quqkLHjJ03PVmXi9p6JMp0/tDEi+XNdifeLMgI04QRrf7slFqtfmyXl7H0OVD0VOhlSF1YWtGZIFoV5O2sQelFDPkU+ZSug= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KHBmfR60; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KHBmfR60" Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC2AFC4CEE0; Thu, 6 Mar 2025 18:53:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287236; bh=N82KtXPTFRyks1M44cPPRNwY3qzMhZjfhHPDkzfY8rc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KHBmfR60iylGj76PcT5SR7iJdsxbhA2Op+45b52Rs6V3kvzzkbh7AD6HEff+L65Hh xV0sL9zAxkHRgn2L9BsKzOGu3B/4Z0X1T/iAyacvyctkIjJOG4pG91QEtBsGACBUg8 nU35JgjU4cUQEzRF+SZgMrTeY64fMzNGDm2r6hZWXBdNPRBOi5QSkoA2qThxQvajNu fmWs3Nb6Wt2+0q0UN7j2wVSLURdOagBUsixRBdAqyIFyK5FmhDQI0oLDouhgQERSL1 vd5xfzqJGRd07zHsmoNtbnlpkmmbxEcGXST9cCCGirN8KhrhPFUgVkjm8Wfik22qcU T4FkPG9Lanq/g== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , 
Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 10/13] arch, mm: set high_memory in free_area_init() Date: Thu, 6 Mar 2025 20:51:20 +0200 Message-ID: <20250306185124.3147510-11-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" high_memory defines upper bound on the directly mapped memory. This bound is defined by the beginning of ZONE_HIGHMEM when a system has high memory and by the end of memory otherwise. All this is known to generic memory management initialization code that can set high_memory while initializing core mm structures. Remove per-architecture calculation of high_memory and add a generic version to free_area_init(). 
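For reference, the generic helper this patch adds to mm/mm_init.c (see the hunk further down in this diff) reduces to the condensed sketch below; the logic is lifted from the patch itself, only the surrounding context is elided:

static void __init set_high_memory(void)
{
	phys_addr_t highmem = memblock_end_of_DRAM();

#ifdef CONFIG_HIGHMEM
	/* with HIGHMEM, the direct map ends where ZONE_HIGHMEM begins */
	unsigned long pfn = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];

	if (arch_has_descending_max_zone_pfns() || highmem > PFN_PHYS(pfn))
		highmem = PFN_PHYS(pfn);
#endif

	/* high_memory points just past the last directly mapped byte */
	high_memory = phys_to_virt(highmem - 1) + 1;
}

free_area_init() calls this helper once the zones are set up, so every architecture ends up with the same definition of the direct-map boundary.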
Signed-off-by: Mike Rapoport (Microsoft) --- arch/alpha/mm/init.c | 1 - arch/arc/mm/init.c | 2 -- arch/arm/mm/mmu.c | 2 -- arch/arm/mm/nommu.c | 1 - arch/arm64/mm/init.c | 2 -- arch/csky/mm/init.c | 1 - arch/hexagon/mm/init.c | 6 ------ arch/loongarch/kernel/numa.c | 1 - arch/loongarch/mm/init.c | 2 -- arch/m68k/mm/init.c | 2 -- arch/m68k/mm/mcfmmu.c | 1 - arch/m68k/mm/motorola.c | 2 -- arch/m68k/sun3/config.c | 1 - arch/microblaze/mm/init.c | 2 -- arch/mips/mm/init.c | 2 -- arch/nios2/mm/init.c | 6 ------ arch/openrisc/mm/init.c | 2 -- arch/parisc/mm/init.c | 1 - arch/powerpc/kernel/setup-common.c | 1 - arch/riscv/mm/init.c | 1 - arch/s390/mm/init.c | 2 -- arch/sh/mm/init.c | 7 ------- arch/sparc/mm/init_32.c | 1 - arch/sparc/mm/init_64.c | 2 -- arch/um/kernel/um_arch.c | 1 - arch/x86/kernel/setup.c | 2 -- arch/x86/mm/init_32.c | 3 --- arch/x86/mm/numa_32.c | 3 --- arch/xtensa/mm/init.c | 2 -- mm/memory.c | 8 -------- mm/mm_init.c | 23 +++++++++++++++++++++++ mm/nommu.c | 2 -- 32 files changed, 23 insertions(+), 72 deletions(-) diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c index ec0eeae9c653..3ab2d2f3c917 100644 --- a/arch/alpha/mm/init.c +++ b/arch/alpha/mm/init.c @@ -276,7 +276,6 @@ srm_paging_stop (void) void __init mem_init(void) { - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); memblock_free_all(); } diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index 7ef883d58dc1..05025122e965 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -150,8 +150,6 @@ void __init setup_arch_memory(void) */ max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn; - high_memory = (void *)(min_high_pfn << PAGE_SHIFT); - arch_pfn_offset = min(min_low_pfn, min_high_pfn); kmap_init(); #endif /* CONFIG_HIGHMEM */ diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index f02f872ea8a9..e492d58a0386 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1250,8 +1250,6 @@ void __init adjust_lowmem_bounds(void) arm_lowmem_limit = lowmem_limit; - high_memory = __va(arm_lowmem_limit - 1) + 1; - if (!memblock_limit) memblock_limit = arm_lowmem_limit; diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 1a8f6914ee59..65903ed5e80d 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -146,7 +146,6 @@ void __init adjust_lowmem_bounds(void) phys_addr_t end; adjust_lowmem_bounds_mpu(); end = memblock_end_of_DRAM(); - high_memory = __va(end - 1) + 1; memblock_set_current_limit(end); } diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 9c0b8d9558fc..a48fcccd67fa 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -314,8 +314,6 @@ void __init arm64_memblock_init(void) } early_init_fdt_scan_reserved_mem(); - - high_memory = __va(memblock_end_of_DRAM() - 1) + 1; } void __init bootmem_init(void) diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c index ba6694d6170a..a22801aa503a 100644 --- a/arch/csky/mm/init.c +++ b/arch/csky/mm/init.c @@ -47,7 +47,6 @@ void __init mem_init(void) #ifdef CONFIG_HIGHMEM unsigned long tmp; #endif - high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT); memblock_free_all(); diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c index 508bb6a8dcc9..d412c2314509 100644 --- a/arch/hexagon/mm/init.c +++ b/arch/hexagon/mm/init.c @@ -100,12 +100,6 @@ static void __init paging_init(void) * initial kernel segment table's physical address. */ init_mm.context.ptbase = __pa(init_mm.pgd); - - /* - * Start of high memory area. Will probably need something more - * fancy if we... get more fancy. 
- */ - high_memory = (void *)((bootmem_lastpg + 1) << PAGE_SHIFT); } #ifndef DMA_RESERVE diff --git a/arch/loongarch/kernel/numa.c b/arch/loongarch/kernel/numa.c index 84fe7f854820..8eb489725b1a 100644 --- a/arch/loongarch/kernel/numa.c +++ b/arch/loongarch/kernel/numa.c @@ -389,7 +389,6 @@ void __init paging_init(void) void __init mem_init(void) { - high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT); memblock_free_all(); } diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c index 00449df50db1..6affa3609188 100644 --- a/arch/loongarch/mm/init.c +++ b/arch/loongarch/mm/init.c @@ -78,8 +78,6 @@ void __init paging_init(void) void __init mem_init(void) { - high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT); - memblock_free_all(); } #endif /* !CONFIG_NUMA */ diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c index 8b11d0d545aa..e03ac556c59e 100644 --- a/arch/m68k/mm/init.c +++ b/arch/m68k/mm/init.c @@ -66,8 +66,6 @@ void __init paging_init(void) unsigned long end_mem = memory_end & PAGE_MASK; unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; - high_memory = (void *) end_mem; - empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT; free_area_init(max_zone_pfn); diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c index 19a75029036c..1750cf9f0369 100644 --- a/arch/m68k/mm/mcfmmu.c +++ b/arch/m68k/mm/mcfmmu.c @@ -168,7 +168,6 @@ void __init cf_bootmem_alloc(void) memstart = PAGE_ALIGN(_ramstart); min_low_pfn = PFN_DOWN(_rambase); max_pfn = max_low_pfn = PFN_DOWN(_ramend); - high_memory = (void *)_ramend; /* Reserve kernel text/data/bss */ memblock_reserve(_rambase, memstart - _rambase); diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index 73651e093c4d..312efcd4b353 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -472,8 +472,6 @@ void __init paging_init(void) module_fixup(NULL, __start_fixup, __stop_fixup); flush_icache(); - high_memory = phys_to_virt(max_addr) + 1; - min_low_pfn = availmem >> PAGE_SHIFT; max_pfn = max_low_pfn = (max_addr >> PAGE_SHIFT) + 1; diff --git a/arch/m68k/sun3/config.c b/arch/m68k/sun3/config.c index cd8af809e0ca..925818278e34 100644 --- a/arch/m68k/sun3/config.c +++ b/arch/m68k/sun3/config.c @@ -115,7 +115,6 @@ static void __init sun3_bootmem_alloc(unsigned long memory_start, max_pfn = num_pages = __pa(memory_end) >> PAGE_SHIFT; - high_memory = (void *)memory_end; availmem = memory_start; m68k_setup_node(0); diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c index 857cd2b44bcf..7e2e342e84c5 100644 --- a/arch/microblaze/mm/init.c +++ b/arch/microblaze/mm/init.c @@ -120,8 +120,6 @@ void __init setup_memory(void) void __init mem_init(void) { - high_memory = (void *)__va(memory_start + lowmem_size - 1); - /* this will put all memory onto the freelists */ memblock_free_all(); #ifdef CONFIG_HIGHMEM diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 338b3c9fc5bc..99cefb58fba3 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -419,7 +419,6 @@ void __init paging_init(void) max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn; } #endif - high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT); free_area_init(max_zone_pfns); } @@ -471,7 +470,6 @@ void __init mem_init(void) #else /* CONFIG_NUMA */ void __init mem_init(void) { - high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); setup_zero_pages(); /* This comes from node 0 */ memblock_free_all(); } diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c index 
3cafa87ead9e..4ba8dfa0d238 100644 --- a/arch/nios2/mm/init.c +++ b/arch/nios2/mm/init.c @@ -62,12 +62,6 @@ void __init paging_init(void) void __init mem_init(void) { - unsigned long end_mem = memory_end; /* this must not include - kernel stack at top */ - - end_mem &= PAGE_MASK; - high_memory = __va(end_mem); - /* this will put all memory onto the freelists */ memblock_free_all(); } diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c index 9093c336e158..72c5952607ac 100644 --- a/arch/openrisc/mm/init.c +++ b/arch/openrisc/mm/init.c @@ -193,8 +193,6 @@ void __init mem_init(void) { BUG_ON(!mem_map); - high_memory = (void *)__va(max_low_pfn * PAGE_SIZE); - /* clear the zero-page */ memset((void *)empty_zero_page, 0, PAGE_SIZE); diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index 2cdfc0b1195c..4fbe354dc9b4 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -562,7 +562,6 @@ void __init mem_init(void) BUILD_BUG_ON(TMPALIAS_MAP_START >= 0x80000000); #endif - high_memory = __va((max_pfn << PAGE_SHIFT)); memblock_free_all(); #ifdef CONFIG_PA11 diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c index 68d47c53876c..de34c40ccb21 100644 --- a/arch/powerpc/kernel/setup-common.c +++ b/arch/powerpc/kernel/setup-common.c @@ -957,7 +957,6 @@ void __init setup_arch(char **cmdline_p) /* Parse memory topology */ mem_topology_setup(); - high_memory = (void *)__va(max_low_pfn * PAGE_SIZE); /* * Release secondary cpus out of their spinloops at 0x60 now that diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 157c9ca51541..ac6d41e86243 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -295,7 +295,6 @@ static void __init setup_bootmem(void) phys_ram_end = memblock_end_of_DRAM(); min_low_pfn = PFN_UP(phys_ram_base); max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end); - high_memory = (void *)(__va(PFN_PHYS(max_low_pfn))); dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn)); diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 7e64243693c9..08ebc9a9344a 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -161,8 +161,6 @@ void __init mem_init(void) cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask); cpumask_set_cpu(0, mm_cpumask(&init_mm)); - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); - pv_init(); kfence_split_mapping(); diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index 72aea5cd1b85..6d459ffba4bc 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -330,13 +330,6 @@ unsigned int mem_init_done = 0; void __init mem_init(void) { - pg_data_t *pgdat; - - high_memory = NULL; - for_each_online_pgdat(pgdat) - high_memory = max_t(void *, high_memory, - __va(pgdat_end_pfn(pgdat) << PAGE_SHIFT)); - memblock_free_all(); /* Set this up early, so we can take care of the zero page */ diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 6b58da14edc6..81a468a9c223 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -275,7 +275,6 @@ void __init mem_init(void) taint_real_pages(); - high_memory = __va(max_low_pfn << PAGE_SHIFT); memblock_free_all(); for (i = 0; sp_banks[i].num_bytes != 0; i++) { diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 05882bca5b73..34d46adb9571 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2505,8 +2505,6 @@ static void __init register_page_bootmem_info(void) } void __init mem_init(void) { - high_memory = __va(last_valid_pfn << PAGE_SHIFT); - memblock_free_all(); /* diff --git 
a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c index 6414cbf00572..f24a3ce37ab7 100644 --- a/arch/um/kernel/um_arch.c +++ b/arch/um/kernel/um_arch.c @@ -385,7 +385,6 @@ int __init linux_main(int argc, char **argv, char **envp) high_physmem = uml_physmem + physmem_size; end_iomem = high_physmem + iomem_size; - high_memory = (void *) end_iomem; start_vm = VMALLOC_START; diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index cebee310e200..5c9ec876915e 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -972,8 +972,6 @@ void __init setup_arch(char **cmdline_p) max_low_pfn = e820__end_of_low_ram_pfn(); else max_low_pfn = max_pfn; - - high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1; #endif /* Find and reserve MPTABLE area */ diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 6d2f8cb9451e..801b659ead0c 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -643,9 +643,6 @@ void __init initmem_init(void) highstart_pfn = max_low_pfn; printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", pages_to_mb(highend_pfn - highstart_pfn)); - high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1; -#else - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1; #endif memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); diff --git a/arch/x86/mm/numa_32.c b/arch/x86/mm/numa_32.c index 65fda406e6f2..442ef3facff0 100644 --- a/arch/x86/mm/numa_32.c +++ b/arch/x86/mm/numa_32.c @@ -41,9 +41,6 @@ void __init initmem_init(void) highstart_pfn = max_low_pfn; printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", pages_to_mb(highend_pfn - highstart_pfn)); - high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1; -#else - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1; #endif printk(KERN_NOTICE "%ldMB LOWMEM available.\n", pages_to_mb(max_low_pfn)); diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 9f1b0d5fccc7..9b662477b3d4 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -164,8 +164,6 @@ void __init mem_init(void) { free_highpages(); - high_memory = (void *)__va(max_low_pfn << PAGE_SHIFT); - memblock_free_all(); } diff --git a/mm/memory.c b/mm/memory.c index 126fdd3001e3..2351f3f6b9ed 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -113,14 +113,6 @@ static __always_inline bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf) return pte_marker_uffd_wp(vmf->orig_pte); } -/* - * A number of key systems in x86 including ioremap() rely on the assumption - * that high_memory defines the upper bound on direct map memory, then end - * of ZONE_NORMAL. - */ -void *high_memory; -EXPORT_SYMBOL(high_memory); - /* * Randomize the address space (stacks, mmaps, brk, etc.). * diff --git a/mm/mm_init.c b/mm/mm_init.c index 50a93714e1c6..5e5f6ba73757 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -44,6 +44,13 @@ struct page *mem_map; EXPORT_SYMBOL(mem_map); #endif +/* + * high_memory defines the upper bound on direct map memory, then end + * of ZONE_NORMAL. 
+ */ +void *high_memory; +EXPORT_SYMBOL(high_memory); + #ifdef CONFIG_DEBUG_MEMORY_INIT int __meminitdata mminit_loglevel; @@ -1756,6 +1763,20 @@ static bool arch_has_descending_max_zone_pfns(void) return IS_ENABLED(CONFIG_ARC) && !IS_ENABLED(CONFIG_ARC_HAS_PAE40); } +static void set_high_memory(void) +{ + phys_addr_t highmem = memblock_end_of_DRAM(); + +#ifdef CONFIG_HIGHMEM + unsigned long pfn = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM]; + + if (arch_has_descending_max_zone_pfns() || highmem > PFN_PHYS(pfn)) + highmem = PFN_PHYS(pfn); +#endif + + high_memory = phys_to_virt(highmem - 1) + 1; +} + /** * free_area_init - Initialise all pg_data_t and zone data * @max_zone_pfn: an array of max PFNs for each zone @@ -1875,6 +1896,8 @@ void __init free_area_init(unsigned long *max_zone_pfn) /* disable hash distribution for systems with a single node */ fixup_hashdist(); + + set_high_memory(); } /** diff --git a/mm/nommu.c b/mm/nommu.c index f0209dd26dfa..b9783638fbd4 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -42,8 +42,6 @@ #include #include "internal.h" -void *high_memory; -EXPORT_SYMBOL(high_memory); unsigned long highest_memmap_pfn; int sysctl_nr_trim_pages = CONFIG_NOMMU_INITIAL_TRIM_EXCESS; int heap_stack_gap = 0; From patchwork Thu Mar 6 18:51:21 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005186 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 44DA326BDB6; Thu, 6 Mar 2025 18:54:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287251; cv=none; b=U20fkjAizb/PE6AFTDZ+v5ZQeB4tyACVFkrIKe1qttD7De8k0bbRlhbzk22eTgRiF9Za2i+C5TfwW/X1HtrjZi9/cvYt8vdWm9jlUE2VWwIcRvCgPaM6i/0tCVzTlhEa70TJZX2uWFuTT4VUv+rZKisjtM1tMkNzrgIWxmPoPEU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287251; c=relaxed/simple; bh=DneiROlCoHGV4ifZ5RS2tqXbkizidYqYiepJ/XJs4u8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=exHM1w6Lk5cG5cvQJcCsSVhpsYCBkEkvIvhBcVhtQAkSKUG0i7P9R5E9KoITDKfZfZEscOYTGOoA72s5GFIngXhzUASw4kVZw+NBLf7uJ3m9dV1/loK6UGWE4ADzuqXobV9WLZ6IZt8mTFKxtSN9novvhoJsrCWT2oGNyGo4p20= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=CjFBQxFu; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="CjFBQxFu" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 161A0C4CEE9; Thu, 6 Mar 2025 18:53:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287249; bh=DneiROlCoHGV4ifZ5RS2tqXbkizidYqYiepJ/XJs4u8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CjFBQxFugxhbXU3Nj7ff8bsxbfYqlFC6eLvUFcovxhCEXxdfqomIndh6uyghPN5Zh K51wVzOKZXwQN+w8tL9srsjWFP3caoZKVo8xzX+Rv6SjCTBNoKEDsLPwrFviZyG3sj kkkCpMi8x4815tM0IDh+mckBYDEQPYUBxVB89B8KxYsuSRBmXeCfjvLWKKI+sYGzcQ Clh/Vw6CDp0Owls0KUmQEC1AN6ZEJ7ffWWzbnL+olKNGOIy8yv8oAqrZmniATfsXll J0l9GdHpOZ9jrEQab2d0IkZceZqtUXTqIbr4t+LYSk2U/u99shMVz2ordAgMNBmeNT zy2CFySHPRbVQ== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , 
Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 11/13] arch, mm: streamline HIGHMEM freeing Date: Thu, 6 Mar 2025 20:51:21 +0200 Message-ID: <20250306185124.3147510-12-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" All architectures that support HIGHMEM have their code that frees high memory pages to the buddy allocator while __free_memory_core() is limited to freeing only low memory. There is no actual reason for that. The memory map is completely ready by the time memblock_free_all() is called and high pages can be released to the buddy allocator along with low memory. Remove low memory limit from __free_memory_core() and drop per-architecture code that frees high memory pages. 
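The per-architecture code being removed is essentially the same loop everywhere: walk the free memblock ranges, skip or clamp everything below max_low_pfn, and hand the remaining pages to the buddy allocator with free_highmem_page(). A condensed sketch of that pattern, following the arm and xtensa versions deleted below:

static void __init free_highpages(void)
{
#ifdef CONFIG_HIGHMEM
	phys_addr_t range_start, range_end;
	u64 i;

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PFN_UP(range_start);
		unsigned long end = PFN_DOWN(range_end);

		/* ignore complete lowmem entries */
		if (end <= max_low_pfn)
			continue;

		/* truncate partial highmem entries */
		if (start < max_low_pfn)
			start = max_low_pfn;

		for (; start < end; start++)
			free_highmem_page(pfn_to_page(start));
	}
#endif
}

Once the max_low_pfn clamp is dropped from __free_memory_core() (the mm/memblock.c hunk at the end of this patch), memblock_free_all() releases these pages itself and the loops above become redundant.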
Signed-off-by: Mike Rapoport (Microsoft) --- arch/arc/mm/init.c | 6 +----- arch/arm/mm/init.c | 29 ----------------------------- arch/csky/mm/init.c | 14 -------------- arch/microblaze/mm/init.c | 16 ---------------- arch/mips/mm/init.c | 20 -------------------- arch/powerpc/mm/mem.c | 14 -------------- arch/sparc/mm/init_32.c | 25 ------------------------- arch/x86/include/asm/highmem.h | 3 --- arch/x86/include/asm/numa.h | 4 ---- arch/x86/include/asm/numa_32.h | 13 ------------- arch/x86/mm/Makefile | 2 -- arch/x86/mm/highmem_32.c | 34 ---------------------------------- arch/x86/mm/init_32.c | 28 ---------------------------- arch/xtensa/mm/init.c | 29 ----------------------------- include/linux/mm.h | 1 - mm/memblock.c | 3 +-- 16 files changed, 2 insertions(+), 239 deletions(-) delete mode 100644 arch/x86/include/asm/numa_32.h delete mode 100644 arch/x86/mm/highmem_32.c diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index 05025122e965..11ce638731c9 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -160,11 +160,7 @@ void __init setup_arch_memory(void) static void __init highmem_init(void) { #ifdef CONFIG_HIGHMEM - unsigned long tmp; - memblock_phys_free(high_mem_start, high_mem_sz); - for (tmp = min_high_pfn; tmp < max_high_pfn; tmp++) - free_highmem_page(pfn_to_page(tmp)); #endif } @@ -176,8 +172,8 @@ static void __init highmem_init(void) */ void __init mem_init(void) { - memblock_free_all(); highmem_init(); + memblock_free_all(); BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); BUILD_BUG_ON((PTRS_PER_PUD * sizeof(pud_t)) > PAGE_SIZE); diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index d4bcc745a044..7bb5ce02b9b5 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -237,33 +237,6 @@ static inline void poison_init_mem(void *s, size_t count) *p++ = 0xe7fddef0; } -static void __init free_highpages(void) -{ -#ifdef CONFIG_HIGHMEM - unsigned long max_low = max_low_pfn; - phys_addr_t range_start, range_end; - u64 i; - - /* set highmem page free */ - for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, - &range_start, &range_end, NULL) { - unsigned long start = PFN_UP(range_start); - unsigned long end = PFN_DOWN(range_end); - - /* Ignore complete lowmem entries */ - if (end <= max_low) - continue; - - /* Truncate partial highmem entries */ - if (start < max_low) - start = max_low; - - for (; start < end; start++) - free_highmem_page(pfn_to_page(start)); - } -#endif -} - /* * mem_init() marks the free areas in the mem_map and tells us how much * memory is free. This is done after various parts of the system have @@ -283,8 +256,6 @@ void __init mem_init(void) /* this will put all unused low memory onto the freelists */ memblock_free_all(); - free_highpages(); - /* * Check boundaries twice: Some fundamental inconsistencies can * be detected at build time already. 
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c index a22801aa503a..3914c2b873da 100644 --- a/arch/csky/mm/init.c +++ b/arch/csky/mm/init.c @@ -44,21 +44,7 @@ EXPORT_SYMBOL(empty_zero_page); void __init mem_init(void) { -#ifdef CONFIG_HIGHMEM - unsigned long tmp; -#endif - memblock_free_all(); - -#ifdef CONFIG_HIGHMEM - for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) { - struct page *page = pfn_to_page(tmp); - - /* FIXME not sure about */ - if (!memblock_is_reserved(tmp << PAGE_SHIFT)) - free_highmem_page(page); - } -#endif } void free_initmem(void) diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c index 7e2e342e84c5..3e664e0efc33 100644 --- a/arch/microblaze/mm/init.c +++ b/arch/microblaze/mm/init.c @@ -52,19 +52,6 @@ static void __init highmem_init(void) map_page(PKMAP_BASE, 0, 0); /* XXX gross */ pkmap_page_table = virt_to_kpte(PKMAP_BASE); } - -static void __meminit highmem_setup(void) -{ - unsigned long pfn; - - for (pfn = max_low_pfn; pfn < max_pfn; ++pfn) { - struct page *page = pfn_to_page(pfn); - - /* FIXME not sure about */ - if (!memblock_is_reserved(pfn << PAGE_SHIFT)) - free_highmem_page(page); - } -} #endif /* CONFIG_HIGHMEM */ /* @@ -122,9 +109,6 @@ void __init mem_init(void) { /* this will put all memory onto the freelists */ memblock_free_all(); -#ifdef CONFIG_HIGHMEM - highmem_setup(); -#endif mem_init_done = 1; } diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 99cefb58fba3..e7882874ba2f 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -427,25 +427,6 @@ void __init paging_init(void) static struct kcore_list kcore_kseg0; #endif -static inline void __init mem_init_free_highmem(void) -{ -#ifdef CONFIG_HIGHMEM - unsigned long tmp; - - if (cpu_has_dc_aliases) - return; - - for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) { - struct page *page = pfn_to_page(tmp); - - if (!memblock_is_memory(PFN_PHYS(tmp))) - SetPageReserved(page); - else - free_highmem_page(page); - } -#endif -} - void __init mem_init(void) { /* @@ -456,7 +437,6 @@ void __init mem_init(void) maar_init(); setup_zero_pages(); /* Setup zeroed pages. 
*/ - mem_init_free_highmem(); memblock_free_all(); #ifdef CONFIG_64BIT diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index c7708c8fad29..1bc94bca9944 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -297,20 +297,6 @@ void __init mem_init(void) memblock_free_all(); -#ifdef CONFIG_HIGHMEM - { - unsigned long pfn, highmem_mapnr; - - highmem_mapnr = lowmem_end_addr >> PAGE_SHIFT; - for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) { - phys_addr_t paddr = (phys_addr_t)pfn << PAGE_SHIFT; - struct page *page = pfn_to_page(pfn); - if (memblock_is_memory(paddr) && !memblock_is_reserved(paddr)) - free_highmem_page(page); - } - } -#endif /* CONFIG_HIGHMEM */ - #if defined(CONFIG_PPC_E500) && !defined(CONFIG_SMP) /* * If smp is enabled, next_tlbcam_idx is initialized in the cpu up diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 81a468a9c223..043e9b6fadd0 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -232,18 +232,6 @@ static void __init taint_real_pages(void) } } -static void map_high_region(unsigned long start_pfn, unsigned long end_pfn) -{ - unsigned long tmp; - -#ifdef CONFIG_DEBUG_HIGHMEM - printk("mapping high region %08lx - %08lx\n", start_pfn, end_pfn); -#endif - - for (tmp = start_pfn; tmp < end_pfn; tmp++) - free_highmem_page(pfn_to_page(tmp)); -} - void __init mem_init(void) { int i; @@ -276,19 +264,6 @@ void __init mem_init(void) taint_real_pages(); memblock_free_all(); - - for (i = 0; sp_banks[i].num_bytes != 0; i++) { - unsigned long start_pfn = sp_banks[i].base_addr >> PAGE_SHIFT; - unsigned long end_pfn = (sp_banks[i].base_addr + sp_banks[i].num_bytes) >> PAGE_SHIFT; - - if (end_pfn <= highstart_pfn) - continue; - - if (start_pfn < highstart_pfn) - start_pfn = highstart_pfn; - - map_high_region(start_pfn, end_pfn); - } } void sparc_flush_page_to_ram(struct page *page) diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h index 731ee7cc40a5..585bdadba47d 100644 --- a/arch/x86/include/asm/highmem.h +++ b/arch/x86/include/asm/highmem.h @@ -69,9 +69,6 @@ extern unsigned long highstart_pfn, highend_pfn; arch_flush_lazy_mmu_mode(); \ } while (0) -extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn, - unsigned long end_pfn); - #endif /* __KERNEL__ */ #endif /* _ASM_X86_HIGHMEM_H */ diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h index 5469d7a7c40f..53ba39ce010c 100644 --- a/arch/x86/include/asm/numa.h +++ b/arch/x86/include/asm/numa.h @@ -41,10 +41,6 @@ static inline int numa_cpu_node(int cpu) } #endif /* CONFIG_NUMA */ -#ifdef CONFIG_X86_32 -# include -#endif - #ifdef CONFIG_NUMA extern void numa_set_node(int cpu, int node); extern void numa_clear_node(int cpu); diff --git a/arch/x86/include/asm/numa_32.h b/arch/x86/include/asm/numa_32.h deleted file mode 100644 index 9c8e9e85be77..000000000000 --- a/arch/x86/include/asm/numa_32.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_NUMA_32_H -#define _ASM_X86_NUMA_32_H - -#ifdef CONFIG_HIGHMEM -extern void set_highmem_pages_init(void); -#else -static inline void set_highmem_pages_init(void) -{ -} -#endif - -#endif /* _ASM_X86_NUMA_32_H */ diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 690fbf48e853..52fbf0a60858 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -42,8 +42,6 @@ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_PTDUMP_CORE) += dump_pagetables.o obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o 
-obj-$(CONFIG_HIGHMEM) += highmem_32.o - KASAN_SANITIZE_kasan_init_$(BITS).o := n obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c deleted file mode 100644 index d9efa35711ee..000000000000 --- a/arch/x86/mm/highmem_32.c +++ /dev/null @@ -1,34 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -#include -#include -#include /* for totalram_pages */ -#include -#include - -void __init set_highmem_pages_init(void) -{ - struct zone *zone; - int nid; - - /* - * Explicitly reset zone->managed_pages because set_highmem_pages_init() - * is invoked before memblock_free_all() - */ - reset_all_zones_managed_pages(); - for_each_zone(zone) { - unsigned long zone_start_pfn, zone_end_pfn; - - if (!is_highmem(zone)) - continue; - - zone_start_pfn = zone->zone_start_pfn; - zone_end_pfn = zone_start_pfn + zone->spanned_pages; - - nid = zone_to_nid(zone); - printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n", - zone->name, nid, zone_start_pfn, zone_end_pfn); - - add_highpages_with_active_regions(nid, zone_start_pfn, - zone_end_pfn); - } -} diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 801b659ead0c..9ee8ec2bc5d1 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -394,23 +394,6 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base) pkmap_page_table = virt_to_kpte(vaddr); } - -void __init add_highpages_with_active_regions(int nid, - unsigned long start_pfn, unsigned long end_pfn) -{ - phys_addr_t start, end; - u64 i; - - for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &start, &end, NULL) { - unsigned long pfn = clamp_t(unsigned long, PFN_UP(start), - start_pfn, end_pfn); - unsigned long e_pfn = clamp_t(unsigned long, PFN_DOWN(end), - start_pfn, end_pfn); - for ( ; pfn < e_pfn; pfn++) - if (pfn_valid(pfn)) - free_highmem_page(pfn_to_page(pfn)); - } -} #else static inline void permanent_kmaps_init(pgd_t *pgd_base) { @@ -715,17 +698,6 @@ void __init mem_init(void) #ifdef CONFIG_FLATMEM BUG_ON(!mem_map); #endif - /* - * With CONFIG_DEBUG_PAGEALLOC initialization of highmem pages has to - * be done before memblock_free_all(). Memblock use free low memory for - * temporary data (see find_range_array()) and for this purpose can use - * pages that was already passed to the buddy allocator, hence marked as - * not accessible in the page tables when compiled with - * CONFIG_DEBUG_PAGEALLOC. Otherwise order of initialization is not - * important here. - */ - set_highmem_pages_init(); - /* this will put all low memory onto the freelists */ memblock_free_all(); diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 9b662477b3d4..47ecbe28263e 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -129,41 +129,12 @@ void __init zones_init(void) print_vm_layout(); } -static void __init free_highpages(void) -{ -#ifdef CONFIG_HIGHMEM - unsigned long max_low = max_low_pfn; - phys_addr_t range_start, range_end; - u64 i; - - /* set highmem page free */ - for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, - &range_start, &range_end, NULL) { - unsigned long start = PFN_UP(range_start); - unsigned long end = PFN_DOWN(range_end); - - /* Ignore complete lowmem entries */ - if (end <= max_low) - continue; - - /* Truncate partial highmem entries */ - if (start < max_low) - start = max_low; - - for (; start < end; start++) - free_highmem_page(pfn_to_page(start)); - } -#endif -} - /* * Initialize memory pages. 
*/ void __init mem_init(void) { - free_highpages(); - memblock_free_all(); } diff --git a/include/linux/mm.h b/include/linux/mm.h index fdf20503850e..6fccd3b3248c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3172,7 +3172,6 @@ extern void reserve_bootmem_region(phys_addr_t start, /* Free the reserved page into the buddy system, so it gets managed. */ void free_reserved_page(struct page *page); -#define free_highmem_page(page) free_reserved_page(page) static inline void mark_page_reserved(struct page *page) { diff --git a/mm/memblock.c b/mm/memblock.c index 95af35fd1389..64ae678cd1d1 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -2164,8 +2164,7 @@ static unsigned long __init __free_memory_core(phys_addr_t start, phys_addr_t end) { unsigned long start_pfn = PFN_UP(start); - unsigned long end_pfn = min_t(unsigned long, - PFN_DOWN(end), max_low_pfn); + unsigned long end_pfn = PFN_DOWN(end); if (start_pfn >= end_pfn) return 0; From patchwork Thu Mar 6 18:51:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005187 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F256E26BDB3; Thu, 6 Mar 2025 18:54:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287263; cv=none; b=f7i03WJvBh4PFmJLsppWJ6+VzLD0vjga7BztEhr4Kh0fwUNpFO6Utjh4WrAtQIm8RR0q+L4SIOu6RfEC7wVUGBj1PTizdl5T+wdU5oIr9fVSNx9Mo6hHexV53I3OPEt1EzSkzlZ8G5XhdYmwMdFcsAvBTC/lF7Snk1S+wurZh8M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287263; c=relaxed/simple; bh=V81Qe73ywRGOa4P4lYmmCebVskMbjgLcVHgS3/FNKb8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Li0TjYcO3JeKILMLjaNLPx/T035N4L93RKbXjwTxYYDPTqcdHNWiqPuRKfKROZ4cKTZtzuC3WZBFazGCFUH9jhx6Waa73tWhaO7LPOYvQCdtSqhE11g5y0ZXJnbfSwsjVUpxq5UMBXgppWckWFckjQjullDTXnEtE8P9Z9B1kJw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TAhao/GT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TAhao/GT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 38629C4CEEB; Thu, 6 Mar 2025 18:54:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287262; bh=V81Qe73ywRGOa4P4lYmmCebVskMbjgLcVHgS3/FNKb8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TAhao/GTaASroRPSjezNsogKfXXuvqmRVJI+/+lxYofZ0aHb7va4mm5DZz38ZXAPx iZhsaGzSpPO/5oGJDT4jg6ck2Q1ItRjAc9wV2GCvcPzcGNOuUiHFOtJHFgRRARYn9l WvNp0VppaXJtOmKXr9ljVB12Q364WscbA7MM0yQyKtoK5zArJ7MkcjlIHEAACk4t/n Qd3Qxs6BKso9T8cQXLp3KTY4YALs2Y1rfn3giJLbxTgX5EVjcvqh0u/C+5H1PNqprm zEkWaGLRDT1W7i4v6z/pQyo0g5j5YTy1Rf/Yjv2XHIXp7ENlt7cOKhhXFy3pvRDgpw aHbo4bB0HWLnA== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. 
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 12/13] arch, mm: introduce arch_mm_preinit Date: Thu, 6 Mar 2025 20:51:22 +0200 Message-ID: <20250306185124.3147510-13-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" Currently, implementation of mem_init() in every architecture consists of one or more of the following: * initializations that must run before page allocator is active, for instance swiotlb_init() * a call to memblock_free_all() to release all the memory to the buddy allocator * initializations that must run after page allocator is ready and there is no arch-specific hook other than mem_init() for that, like for example register_page_bootmem_info() in x86 and sparc64 or simple setting of mem_init_done = 1 in several architectures * a bunch of semi-related stuff that apparently had no better place to live, for example a ton of BUILD_BUG_ON()s in parisc. Introduce arch_mm_preinit() that will be the first thing called from mm_core_init(). On architectures that have initializations that must happen before the page allocator is ready, move those into arch_mm_preinit() along with the code that does not depend on ordering with page allocator setup. On several architectures this results in reduction of mem_init() to a single call to memblock_free_all() that allows its consolidation next. 
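Condensed view of the hook as it is wired up in mm/mm_init.c (see the hunks below): a weak, empty default plus a call at the very top of mm_core_init(), before any page allocator setup runs:

void __init __weak arch_mm_preinit(void)
{
}

void __init mm_core_init(void)
{
	/* arch code that must run before the page allocator is ready */
	arch_mm_preinit();

	/* Initializations relying on SMP setup */
	BUILD_BUG_ON(MAX_ZONELISTS > 2);
	build_all_zonelists(NULL);
	/* ... remainder of mm_core_init() is unchanged ... */
}

Architectures that need the hook effectively rename the early half of their mem_init() to arch_mm_preinit(), as the arm, arm64, riscv, s390 and x86 hunks below show.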
Signed-off-by: Mike Rapoport (Microsoft) --- arch/arc/mm/init.c | 13 ++++++------- arch/arm/mm/init.c | 21 ++++++++++++--------- arch/arm64/mm/init.c | 21 ++++++++++++--------- arch/mips/mm/init.c | 11 +++++++---- arch/powerpc/mm/mem.c | 9 ++++++--- arch/riscv/mm/init.c | 8 ++++++-- arch/s390/mm/init.c | 5 ++++- arch/sparc/mm/init_32.c | 5 ++++- arch/um/kernel/mem.c | 7 +++++-- arch/x86/mm/init_32.c | 6 +++++- arch/x86/mm/init_64.c | 5 ++++- include/linux/mm.h | 1 + mm/mm_init.c | 5 +++++ 13 files changed, 77 insertions(+), 40 deletions(-) diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index 11ce638731c9..90715b4a0bfa 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -157,11 +157,16 @@ void __init setup_arch_memory(void) free_area_init(max_zone_pfn); } -static void __init highmem_init(void) +void __init arch_mm_preinit(void) { #ifdef CONFIG_HIGHMEM memblock_phys_free(high_mem_start, high_mem_sz); #endif + + BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PUD * sizeof(pud_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PMD * sizeof(pmd_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } /* @@ -172,13 +177,7 @@ static void __init highmem_init(void) */ void __init mem_init(void) { - highmem_init(); memblock_free_all(); - - BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); - BUILD_BUG_ON((PTRS_PER_PUD * sizeof(pud_t)) > PAGE_SIZE); - BUILD_BUG_ON((PTRS_PER_PMD * sizeof(pmd_t)) > PAGE_SIZE); - BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } #ifdef CONFIG_HIGHMEM diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 7bb5ce02b9b5..7222100b0631 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -237,12 +237,7 @@ static inline void poison_init_mem(void *s, size_t count) *p++ = 0xe7fddef0; } -/* - * mem_init() marks the free areas in the mem_map and tells us how much - * memory is free. This is done after various parts of the system have - * claimed their memory after the kernel image. - */ -void __init mem_init(void) +void __init arch_mm_preinit(void) { #ifdef CONFIG_ARM_LPAE swiotlb_init(max_pfn > arm_dma_pfn_limit, SWIOTLB_VERBOSE); @@ -253,9 +248,6 @@ void __init mem_init(void) memblock_phys_free(PHYS_OFFSET, __pa(swapper_pg_dir) - PHYS_OFFSET); #endif - /* this will put all unused low memory onto the freelists */ - memblock_free_all(); - /* * Check boundaries twice: Some fundamental inconsistencies can * be detected at build time already. @@ -271,6 +263,17 @@ void __init mem_init(void) #endif } +/* + * mem_init() marks the free areas in the mem_map and tells us how much + * memory is free. This is done after various parts of the system have + * claimed their memory after the kernel image. + */ +void __init mem_init(void) +{ + /* this will put all unused low memory onto the freelists */ + memblock_free_all(); +} + #ifdef CONFIG_STRICT_KERNEL_RWX struct section_perm { const char *name; diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index a48fcccd67fa..8eff6a6eb11e 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -362,12 +362,7 @@ void __init bootmem_init(void) memblock_dump_all(); } -/* - * mem_init() marks the free areas in the mem_map and tells us how much memory - * is free. This is done after various parts of the system have claimed their - * memory after the kernel image. 
- */ -void __init mem_init(void) +void __init arch_mm_preinit(void) { unsigned int flags = SWIOTLB_VERBOSE; bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit); @@ -391,9 +386,6 @@ void __init mem_init(void) swiotlb_init(swiotlb, flags); swiotlb_update_mem_attributes(); - /* this will put all unused low memory onto the freelists */ - memblock_free_all(); - /* * Check boundaries twice: Some fundamental inconsistencies can be * detected at build time already. @@ -419,6 +411,17 @@ void __init mem_init(void) } } +/* + * mem_init() marks the free areas in the mem_map and tells us how much memory + * is free. This is done after various parts of the system have claimed their + * memory after the kernel image. + */ +void __init mem_init(void) +{ + /* this will put all unused low memory onto the freelists */ + memblock_free_all(); +} + void free_initmem(void) { void *lm_init_begin = lm_alias(__init_begin); diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index e7882874ba2f..619e2e394392 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -427,7 +427,7 @@ void __init paging_init(void) static struct kcore_list kcore_kseg0; #endif -void __init mem_init(void) +void __init arch_mm_preinit(void) { /* * When PFN_PTE_SHIFT is greater than PAGE_SHIFT we won't have enough PTE @@ -437,7 +437,6 @@ void __init mem_init(void) maar_init(); setup_zero_pages(); /* Setup zeroed pages. */ - memblock_free_all(); #ifdef CONFIG_64BIT if ((unsigned long) &_text > (unsigned long) CKSEG0) @@ -448,13 +447,17 @@ void __init mem_init(void) #endif } #else /* CONFIG_NUMA */ -void __init mem_init(void) +void __init arch_mm_preinit(void) { setup_zero_pages(); /* This comes from node 0 */ - memblock_free_all(); } #endif /* !CONFIG_NUMA */ +void __init mem_init(void) +{ + memblock_free_all(); +} + void free_init_pages(const char *what, unsigned long begin, unsigned long end) { unsigned long pfn; diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 1bc94bca9944..68efdaf14e58 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -273,7 +273,7 @@ void __init paging_init(void) mark_nonram_nosave(); } -void __init mem_init(void) +void __init arch_mm_preinit(void) { /* * book3s is limited to 16 page sizes due to encoding this in @@ -295,8 +295,6 @@ void __init mem_init(void) kasan_late_init(); - memblock_free_all(); - #if defined(CONFIG_PPC_E500) && !defined(CONFIG_SMP) /* * If smp is enabled, next_tlbcam_idx is initialized in the cpu up @@ -329,6 +327,11 @@ void __init mem_init(void) #endif /* CONFIG_PPC32 */ } +void __init mem_init(void) +{ + memblock_free_all(); +} + void free_initmem(void) { ppc_md.progress = ppc_printk_progress; diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index ac6d41e86243..9efadabf6be1 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -171,7 +171,7 @@ static void __init print_vm_layout(void) static void print_vm_layout(void) { } #endif /* CONFIG_DEBUG_VM */ -void __init mem_init(void) +void __init arch_mm_preinit(void) { bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); #ifdef CONFIG_FLATMEM @@ -192,11 +192,15 @@ void __init mem_init(void) } swiotlb_init(swiotlb, SWIOTLB_VERBOSE); - memblock_free_all(); print_vm_layout(); } +void __init mem_init(void) +{ + memblock_free_all(); +} + /* Limit the memory size via mem. 
*/ static phys_addr_t memory_limit; #ifdef CONFIG_XIP_KERNEL diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 08ebc9a9344a..6741b38fc864 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -156,7 +156,7 @@ static void pv_init(void) swiotlb_update_mem_attributes(); } -void __init mem_init(void) +void __init arch_mm_preinit(void) { cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask); cpumask_set_cpu(0, mm_cpumask(&init_mm)); @@ -165,7 +165,10 @@ void __init mem_init(void) kfence_split_mapping(); setup_zero_pages(); /* Setup zeroed pages. */ +} +void __init mem_init(void) +{ /* this will put all low memory onto the freelists */ memblock_free_all(); } diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 043e9b6fadd0..e16c32c5728f 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -232,7 +232,7 @@ static void __init taint_real_pages(void) } } -void __init mem_init(void) +void __init arch_mm_preinit(void) { int i; @@ -262,7 +262,10 @@ void __init mem_init(void) memset(sparc_valid_addr_bitmap, 0, i << 2); taint_real_pages(); +} +void __init mem_init(void) +{ memblock_free_all(); } diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c index befed230aac2..cce387438e60 100644 --- a/arch/um/kernel/mem.c +++ b/arch/um/kernel/mem.c @@ -54,7 +54,7 @@ int kmalloc_ok = 0; /* Used during early boot */ static unsigned long brk_end; -void __init mem_init(void) +void __init arch_mm_preinit(void) { /* clear the zero-page */ memset(empty_zero_page, 0, PAGE_SIZE); @@ -66,10 +66,13 @@ void __init mem_init(void) map_memory(brk_end, __pa(brk_end), uml_reserved - brk_end, 1, 1, 0); memblock_free((void *)brk_end, uml_reserved - brk_end); uml_reserved = brk_end; + max_pfn = max_low_pfn; +} +void __init mem_init(void) +{ /* this will put all low memory onto the freelists */ memblock_free_all(); - max_pfn = max_low_pfn; kmalloc_ok = 1; } diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 9ee8ec2bc5d1..16664c5464b5 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -691,13 +691,17 @@ static void __init test_wp_bit(void) panic("Linux doesn't support CPUs with broken WP."); } -void __init mem_init(void) +void __init arch_mm_preinit(void) { pci_iommu_alloc(); #ifdef CONFIG_FLATMEM BUG_ON(!mem_map); #endif +} + +void __init mem_init(void) +{ /* this will put all low memory onto the freelists */ memblock_free_all(); diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 01ea7c6df303..f8981e29633c 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1348,10 +1348,13 @@ static void __init preallocate_vmalloc_pages(void) panic("Failed to pre-allocate %s pages for vmalloc area\n", lvl); } -void __init mem_init(void) +void __init arch_mm_preinit(void) { pci_iommu_alloc(); +} +void __init mem_init(void) +{ /* clear_bss() already clear the empty_zero_page */ /* this will put all memory onto the freelists */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 6fccd3b3248c..ae9cfb9612ea 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -42,6 +42,7 @@ struct folio_batch; extern int sysctl_page_lock_unfairness; +void arch_mm_preinit(void); void mm_core_init(void); void init_mm_internals(void); diff --git a/mm/mm_init.c b/mm/mm_init.c index 5e5f6ba73757..9cca3d497bf8 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2668,11 +2668,16 @@ static void __init mem_init_print_info(void) ); } +void __init __weak arch_mm_preinit(void) +{ +} + /* * Set up kernel memory allocators */ void __init mm_core_init(void) { + 
arch_mm_preinit(); /* Initializations relying on SMP setup */ BUILD_BUG_ON(MAX_ZONELISTS > 2); build_all_zonelists(NULL); From patchwork Thu Mar 6 18:51:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 14005188 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8CE5525D541; Thu, 6 Mar 2025 18:54:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287276; cv=none; b=fx6hzLufnEkXjD+WXci1gMX+zULdWnDqDgsp4mimeIuKnBNYzRL2K7xuZzy52W7njkO/i0wqpYJfc2qW2GxD+4RIO33LrNO0O5gnkTSBSb28KaerY8lKvCxcGBnEUD83QRRFT+R5pXijsRDDUTveTe+lNGHHHULdiaCEYsdpz6E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741287276; c=relaxed/simple; bh=i7YZzIHjlu++jS8DtrmhAxtbfRCM0U79L0Q3tH96qkU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dumncCh0XmC2Noe7sbCwPfMJXGECuC6RHcb1ZiVzZoL8OtFR21sjPpUl3jYlnnJ7tuotBylpaQHbglmRNpVEscW6FKy4qq9Zvigcnev8jJoQxivHA0jpOBvlYhl9JUlJDGMppNEQIbrwQ8wcPBALZH4aPdN4wI3zZt4Rz1DK5/E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=fTJWFqAh; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fTJWFqAh" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5C2F7C4CEE0; Thu, 6 Mar 2025 18:54:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1741287276; bh=i7YZzIHjlu++jS8DtrmhAxtbfRCM0U79L0Q3tH96qkU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fTJWFqAhSwgDaONoPqxuhnIT2Is0wHT3SJ9NPMkR7fA2iYPIkxFnZjd5M1VKYba3r dDFOLnBZGbl/UOrasudxdir+GD8gQBx9F/6TlUl+ZQ3ZqU+RuseUBX+1m8gI1shazW c2YOCkYAPepNIqJzcQ3r560m8/Nz/Jnrm2aaD6yFQV9t6WahulOSxpbL0fc/ChBRD0 d36fBgwjkwhcZg64NkyCm8krI23YIVZTEvin5KonBRkWM6GTgOP36TFx4HMfL+Ekjj 5Mp+97kxK/Ji8jvgsKgn3LoklCa5pvmEb5qHciiN97Crlfe37/SeCqdA5Ua7qpMDx0 d2I10+Nz85pfg== From: Mike Rapoport To: Andrew Morton Cc: Alexander Gordeev , Andreas Larsson , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Brian Cain , Catalin Marinas , Dave Hansen , "David S. 
Miller" , Dinh Nguyen , Geert Uytterhoeven , Gerald Schaefer , Guo Ren , Heiko Carstens , Helge Deller , Huacai Chen , Ingo Molnar , Jiaxun Yang , Johannes Berg , John Paul Adrian Glaubitz , Madhavan Srinivasan , Matt Turner , Max Filippov , Michael Ellerman , Michal Simek , Mike Rapoport , Palmer Dabbelt , Peter Zijlstra , Richard Weinberger , Russell King , Stafford Horne , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Will Deacon , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org Subject: [PATCH 13/13] arch, mm: make releasing of memory to page allocator more explicit Date: Thu, 6 Mar 2025 20:51:23 +0200 Message-ID: <20250306185124.3147510-14-rppt@kernel.org> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org> References: <20250306185124.3147510-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-mips@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Mike Rapoport (Microsoft)" The point where the memory is released from memblock to the buddy allocator is hidden inside arch-specific mem_init()s and the call to memblock_free_all() is needlessly duplicated in every artiste cure and after introduction of arch_mm_preinit() hook, mem_init() implementation on many architecture only contains the call to memblock_free_all(). Pull memblock_free_all() call into mm_core_init() and drop mem_init() on relevant architectures to make it more explicit where the free memory is released from memblock to the buddy allocator and to reduce code duplication in architecture specific code. 
Signed-off-by: Mike Rapoport (Microsoft) --- arch/alpha/mm/init.c | 6 ------ arch/arc/mm/init.c | 11 ----------- arch/arm/mm/init.c | 11 ----------- arch/arm64/mm/init.c | 11 ----------- arch/csky/mm/init.c | 5 ----- arch/hexagon/mm/init.c | 18 ------------------ arch/loongarch/kernel/numa.c | 5 ----- arch/loongarch/mm/init.c | 5 ----- arch/m68k/mm/init.c | 2 -- arch/microblaze/mm/init.c | 3 --- arch/mips/mm/init.c | 5 ----- arch/nios2/mm/init.c | 6 ------ arch/openrisc/mm/init.c | 3 --- arch/parisc/mm/init.c | 2 -- arch/powerpc/mm/mem.c | 5 ----- arch/riscv/mm/init.c | 5 ----- arch/s390/mm/init.c | 6 ------ arch/sh/mm/init.c | 2 -- arch/sparc/mm/init_32.c | 5 ----- arch/sparc/mm/init_64.c | 2 -- arch/um/kernel/mem.c | 2 -- arch/x86/mm/init_32.c | 3 --- arch/x86/mm/init_64.c | 2 -- arch/xtensa/mm/init.c | 9 --------- include/linux/memblock.h | 1 - mm/internal.h | 3 ++- mm/mm_init.c | 5 +++++ 27 files changed, 7 insertions(+), 136 deletions(-) diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c index 3ab2d2f3c917..2d491b8cdab9 100644 --- a/arch/alpha/mm/init.c +++ b/arch/alpha/mm/init.c @@ -273,12 +273,6 @@ srm_paging_stop (void) } #endif -void __init -mem_init(void) -{ - memblock_free_all(); -} - static const pgprot_t protection_map[16] = { [VM_NONE] = _PAGE_P(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR), diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index 90715b4a0bfa..a73cc94f806e 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -169,17 +169,6 @@ void __init arch_mm_preinit(void) BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } -/* - * mem_init - initializes memory - * - * Frees up bootmem - * Calculates and displays memory available/used - */ -void __init mem_init(void) -{ - memblock_free_all(); -} - #ifdef CONFIG_HIGHMEM int pfn_valid(unsigned long pfn) { diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 7222100b0631..54bdca025c9f 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -263,17 +263,6 @@ void __init arch_mm_preinit(void) #endif } -/* - * mem_init() marks the free areas in the mem_map and tells us how much - * memory is free. This is done after various parts of the system have - * claimed their memory after the kernel image. - */ -void __init mem_init(void) -{ - /* this will put all unused low memory onto the freelists */ - memblock_free_all(); -} - #ifdef CONFIG_STRICT_KERNEL_RWX struct section_perm { const char *name; diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 8eff6a6eb11e..510695107233 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -411,17 +411,6 @@ void __init arch_mm_preinit(void) } } -/* - * mem_init() marks the free areas in the mem_map and tells us how much memory - * is free. This is done after various parts of the system have claimed their - * memory after the kernel image. 
- */ -void __init mem_init(void) -{ - /* this will put all unused low memory onto the freelists */ - memblock_free_all(); -} - void free_initmem(void) { void *lm_init_begin = lm_alias(__init_begin); diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c index 3914c2b873da..573da66b2543 100644 --- a/arch/csky/mm/init.c +++ b/arch/csky/mm/init.c @@ -42,11 +42,6 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss; EXPORT_SYMBOL(empty_zero_page); -void __init mem_init(void) -{ - memblock_free_all(); -} - void free_initmem(void) { free_initmem_default(-1); diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c index d412c2314509..34eb9d424b96 100644 --- a/arch/hexagon/mm/init.c +++ b/arch/hexagon/mm/init.c @@ -43,24 +43,6 @@ DEFINE_SPINLOCK(kmap_gen_lock); /* checkpatch says don't init this to 0. */ unsigned long long kmap_generation; -/* - * mem_init - initializes memory - * - * Frees up bootmem - * Fixes up more stuff for HIGHMEM - * Calculates and displays memory available/used - */ -void __init mem_init(void) -{ - /* No idea where this is actually declared. Seems to evade LXR. */ - memblock_free_all(); - - /* - * To-Do: someone somewhere should wipe out the bootmem map - * after we're done? - */ -} - void sync_icache_dcache(pte_t pte) { unsigned long addr; diff --git a/arch/loongarch/kernel/numa.c b/arch/loongarch/kernel/numa.c index 8eb489725b1a..30a72fd528c0 100644 --- a/arch/loongarch/kernel/numa.c +++ b/arch/loongarch/kernel/numa.c @@ -387,11 +387,6 @@ void __init paging_init(void) free_area_init(zones_size); } -void __init mem_init(void) -{ - memblock_free_all(); -} - int pcibus_to_node(struct pci_bus *bus) { return dev_to_node(&bus->dev); diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c index 6affa3609188..fdb7f73ad160 100644 --- a/arch/loongarch/mm/init.c +++ b/arch/loongarch/mm/init.c @@ -75,11 +75,6 @@ void __init paging_init(void) free_area_init(max_zone_pfns); } - -void __init mem_init(void) -{ - memblock_free_all(); -} #endif /* !CONFIG_NUMA */ void __ref free_initmem(void) diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c index e03ac556c59e..3d9aa9cce144 100644 --- a/arch/m68k/mm/init.c +++ b/arch/m68k/mm/init.c @@ -119,7 +119,5 @@ static inline void init_pointer_tables(void) void __init mem_init(void) { - /* this will put all memory onto the freelists */ - memblock_free_all(); init_pointer_tables(); } diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c index 3e664e0efc33..65f0d1fb8a2a 100644 --- a/arch/microblaze/mm/init.c +++ b/arch/microblaze/mm/init.c @@ -107,9 +107,6 @@ void __init setup_memory(void) void __init mem_init(void) { - /* this will put all memory onto the freelists */ - memblock_free_all(); - mem_init_done = 1; } diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 619e2e394392..6ea27bbd387e 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -453,11 +453,6 @@ void __init arch_mm_preinit(void) } #endif /* !CONFIG_NUMA */ -void __init mem_init(void) -{ - memblock_free_all(); -} - void free_init_pages(const char *what, unsigned long begin, unsigned long end) { unsigned long pfn; diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c index 4ba8dfa0d238..94efa3de3933 100644 --- a/arch/nios2/mm/init.c +++ b/arch/nios2/mm/init.c @@ -60,12 +60,6 @@ void __init paging_init(void) (unsigned long)empty_zero_page + PAGE_SIZE); } -void __init mem_init(void) -{ - /* this will put all memory onto the freelists */ - memblock_free_all(); -} - void __init mmu_init(void) { 
flush_tlb_all(); diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c index 72c5952607ac..be1c2eb8bb94 100644 --- a/arch/openrisc/mm/init.c +++ b/arch/openrisc/mm/init.c @@ -196,9 +196,6 @@ void __init mem_init(void) /* clear the zero-page */ memset((void *)empty_zero_page, 0, PAGE_SIZE); - /* this will put all low memory onto the freelists */ - memblock_free_all(); - printk("mem_init_done ...........................................\n"); mem_init_done = 1; return; diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index 4fbe354dc9b4..14270715d754 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -562,8 +562,6 @@ void __init mem_init(void) BUILD_BUG_ON(TMPALIAS_MAP_START >= 0x80000000); #endif - memblock_free_all(); - #ifdef CONFIG_PA11 if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { pcxl_dma_start = (unsigned long)SET_MAP_OFFSET(MAP_START); diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 68efdaf14e58..d8fe11b64259 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -327,11 +327,6 @@ void __init arch_mm_preinit(void) #endif /* CONFIG_PPC32 */ } -void __init mem_init(void) -{ - memblock_free_all(); -} - void free_initmem(void) { ppc_md.progress = ppc_printk_progress; diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 9efadabf6be1..79b649f6de72 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -196,11 +196,6 @@ void __init arch_mm_preinit(void) print_vm_layout(); } -void __init mem_init(void) -{ - memblock_free_all(); -} - /* Limit the memory size via mem. */ static phys_addr_t memory_limit; #ifdef CONFIG_XIP_KERNEL diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 6741b38fc864..e8585011fbfc 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -167,12 +167,6 @@ void __init arch_mm_preinit(void) setup_zero_pages(); /* Setup zeroed pages. 
*/ } -void __init mem_init(void) -{ - /* this will put all low memory onto the freelists */ - memblock_free_all(); -} - unsigned long memory_block_size_bytes(void) { /* diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index 6d459ffba4bc..99e302eeeec1 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -330,8 +330,6 @@ unsigned int mem_init_done = 0; void __init mem_init(void) { - memblock_free_all(); - /* Set this up early, so we can take care of the zero page */ cpu_cache_init(); diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index e16c32c5728f..fdc93dd12c3e 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -264,11 +264,6 @@ void __init arch_mm_preinit(void) taint_real_pages(); } -void __init mem_init(void) -{ - memblock_free_all(); -} - void sparc_flush_page_to_ram(struct page *page) { unsigned long vaddr = (unsigned long)page_address(page); diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 34d46adb9571..760818950464 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2505,8 +2505,6 @@ static void __init register_page_bootmem_info(void) } void __init mem_init(void) { - memblock_free_all(); - /* * Must be done after boot memory is put on freelist, because here we * might set fields in deferred struct pages that have not yet been diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c index cce387438e60..379f33a1babf 100644 --- a/arch/um/kernel/mem.c +++ b/arch/um/kernel/mem.c @@ -71,8 +71,6 @@ void __init arch_mm_preinit(void) void __init mem_init(void) { - /* this will put all low memory onto the freelists */ - memblock_free_all(); kmalloc_ok = 1; } diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 16664c5464b5..95b2758b4e4d 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -702,9 +702,6 @@ void __init arch_mm_preinit(void) void __init mem_init(void) { - /* this will put all low memory onto the freelists */ - memblock_free_all(); - after_bootmem = 1; x86_init.hyper.init_after_bootmem(); diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index f8981e29633c..451e796427d3 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1357,8 +1357,6 @@ void __init mem_init(void) { /* clear_bss() already clear the empty_zero_page */ - /* this will put all memory onto the freelists */ - memblock_free_all(); after_bootmem = 1; x86_init.hyper.init_after_bootmem(); diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 47ecbe28263e..cc52733a0649 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -129,15 +129,6 @@ void __init zones_init(void) print_vm_layout(); } -/* - * Initialize memory pages. 
- */ - -void __init mem_init(void) -{ - memblock_free_all(); -} - static void __init parse_memmap_one(char *p) { char *oldp; diff --git a/include/linux/memblock.h b/include/linux/memblock.h index e79eb6ac516f..ef5a1ecc6e59 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -133,7 +133,6 @@ int memblock_mark_nomap(phys_addr_t base, phys_addr_t size); int memblock_clear_nomap(phys_addr_t base, phys_addr_t size); int memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t size); -void memblock_free_all(void); void memblock_free(void *ptr, size_t size); void reset_all_zones_managed_pages(void); diff --git a/mm/internal.h b/mm/internal.h index 109ef30fee11..26e2e8cea495 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1407,7 +1407,8 @@ static inline bool gup_must_unshare(struct vm_area_struct *vma, } extern bool mirrored_kernelcore; -extern bool memblock_has_mirror(void); +bool memblock_has_mirror(void); +void memblock_free_all(void); static __always_inline void vma_set_range(struct vm_area_struct *vma, unsigned long start, unsigned long end, diff --git a/mm/mm_init.c b/mm/mm_init.c index 9cca3d497bf8..545e11f1a3ba 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2672,6 +2672,10 @@ void __init __weak arch_mm_preinit(void) { } +void __init __weak mem_init(void) +{ +} + /* * Set up kernel memory allocators */ @@ -2693,6 +2697,7 @@ void __init mm_core_init(void) report_meminit(); kmsan_init_shadow(); stack_depot_early_init(); + memblock_free_all(); mem_init(); kmem_cache_init(); /*