From patchwork Tue May 18 09:06:11 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Kefeng Wang, Mike Rapoport, Russell King,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/3] memblock: free_unused_memmap: use pageblock units
    instead of MAX_ORDER
Date: Tue, 18 May 2021 12:06:11 +0300
Message-Id: <20210518090613.21519-2-rppt@kernel.org>
In-Reply-To: <20210518090613.21519-1-rppt@kernel.org>
References: <20210518090613.21519-1-rppt@kernel.org>
From: Mike Rapoport

The code that frees the unused memory map rounds the start and end of
the freed holes to MAX_ORDER_NR_PAGES to preserve continuity of the
memory map for MAX_ORDER regions.

Lots of core memory management functionality relies on homogeneity of
the memory map within each pageblock, whose size may differ from
MAX_ORDER in certain configurations.

Although currently, for the architectures that use
free_unused_memmap(), pageblock_order and MAX_ORDER are equivalent, it
is cleaner to use a common notation throughout mm code.

Replace MAX_ORDER_NR_PAGES with pageblock_nr_pages and update the
comments to make it clearer why the alignment to pageblock boundaries
is required.

Signed-off-by: Mike Rapoport
---
 mm/memblock.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..97fa87541b5f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1943,11 +1943,11 @@ static void __init free_unused_memmap(void)
 		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
 #else
 		/*
-		 * Align down here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank start aligned to
-		 * MAX_ORDER_NR_PAGES.
+		 * Align down here since many operations in VM subsystem
+		 * presume that there are no holes in the memory map inside
+		 * a pageblock
 		 */
-		start = round_down(start, MAX_ORDER_NR_PAGES);
+		start = round_down(start, pageblock_nr_pages);
 #endif
 
 		/*
@@ -1958,11 +1958,11 @@ static void __init free_unused_memmap(void)
 			free_memmap(prev_end, start);
 
 		/*
-		 * Align up here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank end aligned to
-		 * MAX_ORDER_NR_PAGES.
+		 * Align up here since many operations in VM subsystem
+		 * presume that there are no holes in the memory map inside
+		 * a pageblock
 		 */
-		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
+		prev_end = ALIGN(end, pageblock_nr_pages);
 	}
 
 #ifdef CONFIG_SPARSEMEM
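The rounding that both hunks rely on is easy to check outside the
kernel. Below is a minimal userspace sketch of the hole-trimming
arithmetic in free_unused_memmap(); the pageblock size and the hole
boundaries are made-up stand-ins for the kernel's pageblock_nr_pages
and a real bank layout, and the macros approximate the kernel's
round_down()/ALIGN() for power-of-two alignments:

/* standalone model of the hole trimming in free_unused_memmap();
 * all constants here are illustrative, not the kernel's */
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL  /* e.g. 2MB pageblocks with 4K pages */

/* power-of-two alignment helpers, as in the kernel macros */
#define round_down(x, a) ((x) & ~((a) - 1))
#define round_up(x, a)   (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        /* a hole in the memory map between two banks, in pfns:
         * the previous bank ends at hole_start, the next one
         * starts at hole_end */
        unsigned long hole_start = 0x1234, hole_end = 0x5678;

        /* shrink the freed range so the memory map stays intact
         * for the full pageblock on either side of the hole */
        unsigned long freed_start = round_up(hole_start, PAGEBLOCK_NR_PAGES);
        unsigned long freed_end = round_down(hole_end, PAGEBLOCK_NR_PAGES);

        if (freed_start < freed_end)
                printf("free memmap for pfns [%#lx, %#lx)\n",
                       freed_start, freed_end);
        else
                printf("hole smaller than a pageblock, nothing freed\n");
        return 0;
}

With these numbers the freed range shrinks from [0x1234, 0x5678) to
[0x1400, 0x5600): the partial pageblocks at both edges of the hole keep
their memory map entries.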
From patchwork Tue May 18 09:06:12 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Kefeng Wang, Mike Rapoport, Russell King,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/3] memblock: align freed memory map on pageblock
    boundaries with SPARSEMEM
Date: Tue, 18 May 2021 12:06:12 +0300
Message-Id: <20210518090613.21519-3-rppt@kernel.org>
In-Reply-To: <20210518090613.21519-1-rppt@kernel.org>
References: <20210518090613.21519-1-rppt@kernel.org>

From: Mike Rapoport

When CONFIG_SPARSEMEM=y, the ranges of the memory map that are freed
are not aligned to pageblock boundaries, which breaks assumptions about
homogeneity of the memory map throughout core mm code.

Make sure that the freed memory map is always aligned on pageblock
boundaries regardless of the memory model selection.

Signed-off-by: Mike Rapoport
---
 mm/memblock.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 97fa87541b5f..2e25d69739e0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1941,14 +1941,13 @@ static void __init free_unused_memmap(void)
 		 * due to SPARSEMEM sections which aren't present.
 		 */
 		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#else
+#endif
 		/*
 		 * Align down here since many operations in VM subsystem
 		 * presume that there are no holes in the memory map inside
 		 * a pageblock
 		 */
 		start = round_down(start, pageblock_nr_pages);
-#endif
 
 		/*
 		 * If we had a previous bank, and there is a space
@@ -1966,8 +1965,10 @@
 	}
 
 #ifdef CONFIG_SPARSEMEM
-	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
+	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) {
+		prev_end = ALIGN(end, pageblock_nr_pages);
 		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
+	}
 #endif
 }
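The new SPARSEMEM tail handling can be modeled the same way. This
sketch mirrors the final hunk; PAGES_PER_SECTION and the pageblock size
are illustrative stand-ins, and free_memmap() is reduced to a print:

/* model of the SPARSEMEM tail trimming after this patch;
 * sizes are invented for illustration */
#include <stdio.h>

#define PAGES_PER_SECTION  0x10000UL /* e.g. 256MB sections, 4K pages */
#define PAGEBLOCK_NR_PAGES 512UL

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

static void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
        printf("freeing memmap for pfns [%#lx, %#lx)\n", start_pfn, end_pfn);
}

int main(void)
{
        /* pfn where the last memory bank ends, mid-section */
        unsigned long end = 0x12345;
        /* what the loop leaves in prev_end after patch 1 */
        unsigned long prev_end = ALIGN(end, PAGEBLOCK_NR_PAGES);

        /* the patched tail: keep the memmap up to the pageblock
         * boundary past the bank, free the rest of the section */
        if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) {
                prev_end = ALIGN(end, PAGEBLOCK_NR_PAGES);
                free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
        }
        return 0;
}

For end = 0x12345 this keeps the memory map up to the pageblock
boundary at pfn 0x12400 and frees it from there to the section boundary
at pfn 0x20000, so the freed range again starts on a pageblock
boundary.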
From patchwork Tue May 18 09:06:13 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Kefeng Wang, Mike Rapoport, Russell King,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/3] arm: extend pfn_valid to take into account freed
    memory map alignment
Date: Tue, 18 May 2021 12:06:13 +0300
Message-Id: <20210518090613.21519-4-rppt@kernel.org>
In-Reply-To: <20210518090613.21519-1-rppt@kernel.org>
References: <20210518090613.21519-1-rppt@kernel.org>

From: Mike Rapoport

When the unused memory map is freed, the preserved part of the memory
map is extended to match pageblock boundaries because lots of core mm
functionality relies on homogeneity of the memory map within pageblock
boundaries.

Since pfn_valid() is used to check whether there is a valid memory map
entry for a PFN, make it return true also for PFNs that have memory map
entries even if there is no actual memory populated there.
Signed-off-by: Mike Rapoport
Tested-by: Kefeng Wang
---
 arch/arm/mm/init.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 9d4744a632c6..bb678c0ba143 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -125,11 +125,24 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
 int pfn_valid(unsigned long pfn)
 {
 	phys_addr_t addr = __pfn_to_phys(pfn);
+	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
 
 	if (__phys_to_pfn(addr) != pfn)
 		return 0;
 
-	return memblock_is_map_memory(addr);
+	if (memblock_is_map_memory(addr))
+		return 1;
+
+	/*
+	 * If address less than pageblock_size bytes away from a present
+	 * memory chunk there still will be a memory map entry for it
+	 * because we round freed memory map to the pageblock boundaries
+	 */
+	if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
+	    memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
+		return 1;
+
+	return 0;
 }
 EXPORT_SYMBOL(pfn_valid);
 #endif
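A userspace model also shows why the two extra probes in pfn_valid()
are enough. The sketch below stubs memblock_is_map_memory() with a
single invented bank and uses illustrative sizes; it reproduces the
arithmetic of the patch, not the kernel code itself:

/* model of the extended pfn_valid() check; the bank and the
 * sizes are made up for illustration */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SHIFT      12
#define PAGEBLOCK_SIZE  (512UL << PAGE_SHIFT) /* 2MB */

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* stub: one bank of mapped memory at [64MB, 64MB + 3MB) */
static bool memblock_is_map_memory(unsigned long addr)
{
        return addr >= 0x4000000UL && addr < 0x4300000UL;
}

static int pfn_valid(unsigned long pfn)
{
        unsigned long addr = pfn << PAGE_SHIFT;

        if (memblock_is_map_memory(addr))
                return 1;

        /* an address within a pageblock of present memory still has
         * a memmap entry: probe the next pageblock boundary above the
         * address and the boundary at or below it */
        if (memblock_is_map_memory(ALIGN(addr + 1, PAGEBLOCK_SIZE)) ||
            memblock_is_map_memory(ALIGN_DOWN(addr, PAGEBLOCK_SIZE)))
                return 1;

        return 0;
}

int main(void)
{
        /* pfn just past the bank end, inside the pageblock whose
         * memory map free_unused_memmap() preserves */
        unsigned long pfn = 0x4300000UL >> PAGE_SHIFT;

        printf("pfn %#lx valid: %d\n", pfn, pfn_valid(pfn));
        return 0;
}

Here pfn 0x4300 has no backing memory, but ALIGN_DOWN(addr, 2MB) lands
at 0x4200000 inside the bank, so the extended check returns 1, matching
the fact that the memory map for that pageblock was preserved.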