From patchwork Thu Mar 6 18:51:15 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 14005194
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Gordeev, Andreas Larsson, Andy Lutomirski, Arnd Bergmann,
 Borislav Petkov, Brian Cain, Catalin Marinas, Dave Hansen,
 "David S. Miller", Dinh Nguyen, Geert Uytterhoeven, Gerald Schaefer,
 Guo Ren, Heiko Carstens, Helge Deller, Huacai Chen, Ingo Molnar,
 Jiaxun Yang, Johannes Berg, John Paul Adrian Glaubitz,
 Madhavan Srinivasan, Matt Turner, Max Filippov, Michael Ellerman,
 Michal Simek, Mike Rapoport, Palmer Dabbelt, Peter Zijlstra,
 Richard Weinberger, Russell King, Stafford Horne, Thomas Bogendoerfer,
 Thomas Gleixner, Vasily Gorbik, Vineet Gupta, Will Deacon,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, linux-arch@vger.kernel.org,
 linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH 05/13] MIPS: make setup_zero_pages() use memblock
Date: Thu, 6 Mar 2025 20:51:15 +0200
Message-ID: <20250306185124.3147510-6-rppt@kernel.org>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250306185124.3147510-1-rppt@kernel.org>
References: <20250306185124.3147510-1-rppt@kernel.org>
MIME-Version: 1.0

From: "Mike Rapoport (Microsoft)"

Allocating the zero pages from memblock is simpler because the memory is
already reserved.

This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/mips/include/asm/mmzone.h |  2 --
 arch/mips/mm/init.c            | 16 +++++-----------
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/arch/mips/include/asm/mmzone.h b/arch/mips/include/asm/mmzone.h
index 14226ea42036..602a21aee9d4 100644
--- a/arch/mips/include/asm/mmzone.h
+++ b/arch/mips/include/asm/mmzone.h
@@ -20,6 +20,4 @@
 #define nid_to_addrbase(nid) 0
 #endif

-extern void setup_zero_pages(void);
-
 #endif /* _ASM_MMZONE_H_ */
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 3db6082c611e..f51cd97376df 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -59,25 +59,19 @@ EXPORT_SYMBOL(zero_page_mask);
 /*
  * Not static inline because used by IP27 special magic initialization code
  */
-void setup_zero_pages(void)
+static void __init setup_zero_pages(void)
 {
-	unsigned int order, i;
-	struct page *page;
+	unsigned int order;

 	if (cpu_has_vce)
 		order = 3;
 	else
 		order = 0;

-	empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+	empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE << order, PAGE_SIZE);
 	if (!empty_zero_page)
 		panic("Oh boy, that early out of memory?");

-	page = virt_to_page((void *)empty_zero_page);
-	split_page(page, order);
-	for (i = 0; i < (1 << order); i++, page++)
-		mark_page_reserved(page);
-
 	zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
 }

@@ -470,9 +464,9 @@ void __init mem_init(void)
 	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT));

 	maar_init();
-	memblock_free_all();
 	setup_zero_pages();	/* Setup zeroed pages.  */
 	mem_init_free_highmem();
+	memblock_free_all();

 #ifdef CONFIG_64BIT
 	if ((unsigned long) &_text > (unsigned long) CKSEG0)
@@ -486,8 +480,8 @@ void __init mem_init(void)
 void __init mem_init(void)
 {
 	high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT);

-	memblock_free_all();
 	setup_zero_pages();	/* This comes from node 0 */
+	memblock_free_all();
 }

 #endif /* !CONFIG_NUMA */