From patchwork Thu Dec 9 23:04:12 2021
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12668431
From: Zi Yan <zi.yan@sent.com>
To: David Hildenbrand, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Michael Ellerman, Christoph Hellwig,
	Marek Szyprowski, Robin Murphy, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	iommu@lists.linux-foundation.org, Vlastimil Babka, Mel Gorman,
	Eric Ren, Zi Yan
Subject: [RFC PATCH v2 5/7] mm: cma: use pageblock_order as the single alignment
Date: Thu, 9 Dec 2021 18:04:12 -0500
Message-Id: <20211209230414.2766515-6-zi.yan@sent.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211209230414.2766515-1-zi.yan@sent.com>
References: <20211209230414.2766515-1-zi.yan@sent.com>
Reply-To: Zi Yan <zi.yan@sent.com>
MIME-Version: 1.0

From: Zi Yan <zi.yan@sent.com>

Now that alloc_contig_range() works at pageblock granularity, change CMA
allocation, which uses alloc_contig_range(), to use pageblock_order as
its alignment.

Signed-off-by: Zi Yan <zi.yan@sent.com>
---
 include/linux/mmzone.h  | 5 +----
 kernel/dma/contiguous.c | 2 +-
 mm/cma.c                | 6 ++----
 mm/page_alloc.c         | 6 +++---
 4 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b925431b0123..71830af35745 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -54,10 +54,7 @@ enum migratetype {
	 *
	 * The way to use it is to change migratetype of a range of
	 * pageblocks to MIGRATE_CMA which can be done by
-	 * __free_pageblock_cma() function. What is important though
-	 * is that a range of pageblocks must be aligned to
-	 * MAX_ORDER_NR_PAGES should biggest page be bigger than
-	 * a single pageblock.
+	 * __free_pageblock_cma() function.
	 */
	MIGRATE_CMA,
 #endif
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..ac35b14b0786 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -399,7 +399,7 @@ static const struct reserved_mem_ops rmem_cma_ops = {
 
 static int __init rmem_cma_setup(struct reserved_mem *rmem)
 {
-	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t align = PAGE_SIZE << pageblock_order;
	phys_addr_t mask = align - 1;
	unsigned long node = rmem->fdt_node;
	bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
diff --git a/mm/cma.c b/mm/cma.c
index bc9ca8f3c487..d171158bd418 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -180,8 +180,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
		return -EINVAL;
 
	/* ensure minimal alignment required by mm core */
-	alignment = PAGE_SIZE <<
-			max_t(unsigned long, MAX_ORDER - 1, pageblock_order);
+	alignment = PAGE_SIZE << pageblock_order;
 
	/* alignment should be aligned with order_per_bit */
	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
@@ -268,8 +267,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
	 * migratetype page by page allocator's buddy algorithm. In the case,
	 * you couldn't get a contiguous memory, which is not what we want.
	 */
-	alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
-			max_t(unsigned long, MAX_ORDER - 1, pageblock_order));
+	alignment = max(alignment, (phys_addr_t)PAGE_SIZE << pageblock_order);
	if (fixed && base & (alignment - 1)) {
		ret = -EINVAL;
		pr_err("Region at %pa must be aligned to %pa bytes\n",
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5ffbeb1b7512..3317f2064105 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9127,8 +9127,8 @@ static inline void split_free_page_into_pageblocks(struct page *free_page,
 *		be either of the two.
 * @gfp_mask:	GFP mask to use during compaction
 *
- * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
- * aligned.  The PFN range must belong to a single zone.
+ * The PFN range does not have to be pageblock aligned. The PFN range must
+ * belong to a single zone.
 *
 * The first thing this routine does is attempt to MIGRATE_ISOLATE all
 * pageblocks in the range.  Once isolated, the pageblocks should not
@@ -9243,7 +9243,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
	ret = 0;
 
	/*
-	 * Pages from [start, end) are within a MAX_ORDER_NR_PAGES
+	 * Pages from [start, end) are within a pageblock_nr_pages
	 * aligned blocks that are marked as MIGRATE_ISOLATE. What's
	 * more, all pages in [start, end) are free in page allocator.
	 * What we are going to do is to allocate all pages from
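
For scale, here is a minimal standalone userspace sketch (not part of the
patch) of what the alignment change means numerically. The constants are
assumptions, using common x86_64 defaults: 4 KiB pages, MAX_ORDER = 11,
and pageblock_order = 9 (HPAGE_PMD_ORDER with THP enabled).

#include <stdio.h>

#define PAGE_SIZE       4096UL  /* assumed: 4 KiB base pages */
#define MAX_ORDER       11      /* assumed: default buddy max order */
#define pageblock_order 9       /* assumed: HPAGE_PMD_ORDER on x86_64 */

int main(void)
{
	/* Before this patch: PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order) */
	unsigned long old_align = PAGE_SIZE <<
		(MAX_ORDER - 1 > pageblock_order ? MAX_ORDER - 1 : pageblock_order);
	/* After this patch: PAGE_SIZE << pageblock_order */
	unsigned long new_align = PAGE_SIZE << pageblock_order;

	printf("old minimum CMA alignment: %lu MiB\n", old_align >> 20); /* 4 */
	printf("new minimum CMA alignment: %lu MiB\n", new_align >> 20); /* 2 */
	return 0;
}

With these assumed values, the minimum CMA region alignment drops from
4 MiB to 2 MiB, so smaller CMA reservations with less padding become
possible.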