From patchwork Tue May 18 09:06:11 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12264227
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Kefeng Wang, Mike Rapoport, Mike Rapoport,
	Russell King, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/3] memblock: free_unused_memmap: use pageblock units
	instead of MAX_ORDER
Date: Tue, 18 May 2021 12:06:11 +0300
Message-Id: <20210518090613.21519-2-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210518090613.21519-1-rppt@kernel.org>
References: <20210518090613.21519-1-rppt@kernel.org>
MIME-Version: 1.0
From: Mike Rapoport

The code that frees the unused memory map rounds the start and end of
the freed holes to MAX_ORDER_NR_PAGES to preserve the continuity of the
memory map for MAX_ORDER regions.

Lots of core memory management functionality relies on the homogeneity
of the memory map within each pageblock, whose size may differ from
MAX_ORDER in certain configurations.

Although currently, for the architectures that use free_unused_memmap(),
pageblock_order and MAX_ORDER are equivalent, it is cleaner to use a
common notation throughout the mm code.

Replace MAX_ORDER_NR_PAGES with pageblock_nr_pages and update the
comments to make it clearer why the alignment to pageblock boundaries
is required.

Signed-off-by: Mike Rapoport
---
 mm/memblock.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..97fa87541b5f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1943,11 +1943,11 @@ static void __init free_unused_memmap(void)
 		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
 #else
 		/*
-		 * Align down here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank start aligned to
-		 * MAX_ORDER_NR_PAGES.
+		 * Align down here since many operations in the VM subsystem
+		 * presume that there are no holes in the memory map inside
+		 * a pageblock.
 		 */
-		start = round_down(start, MAX_ORDER_NR_PAGES);
+		start = round_down(start, pageblock_nr_pages);
 #endif
 
 		/*
@@ -1958,11 +1958,11 @@ static void __init free_unused_memmap(void)
 		free_memmap(prev_end, start);
 
 		/*
-		 * Align up here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank end aligned to
-		 * MAX_ORDER_NR_PAGES.
+		 * Align up here since many operations in the VM subsystem
+		 * presume that there are no holes in the memory map inside
+		 * a pageblock.
 		 */
-		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
+		prev_end = ALIGN(end, pageblock_nr_pages);
 	}
 
 #ifdef CONFIG_SPARSEMEM
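
For reference, the effect of the rounding above can be reproduced with a
small standalone program. This is an illustrative sketch, not kernel
code: the round_down()/ALIGN() macros are reimplemented here for
power-of-two alignments, and the pageblock_nr_pages value of 512 (2M
pageblocks with 4K pages) is an assumed example, not a value taken from
this patch.

#include <stdio.h>

/* assumed example: 2M pageblocks with 4K pages */
#define pageblock_nr_pages 512UL

/*
 * Userspace stand-ins for the kernel helpers; the alignment must be
 * a power of two, as it is in the kernel.
 */
#define round_down(x, y)  ((x) & ~((y) - 1))
#define ALIGN(x, y)       (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	unsigned long prev_end = 1500;	/* pfn where the previous bank ends */
	unsigned long start = 3000;	/* pfn where the next bank starts */

	/*
	 * The hole between the banks is [1500, 3000), but only
	 * [1536, 2560) is freed: the pageblocks containing pfns 1500
	 * and 3000 keep their memory map intact, so no pageblock is
	 * left with a partially freed memmap.
	 */
	printf("free memmap for [%lu, %lu)\n",
	       ALIGN(prev_end, pageblock_nr_pages),
	       round_down(start, pageblock_nr_pages));

	return 0;
}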