From patchwork Fri Feb 11 16:41:33 2022
X-Patchwork-Submitter: Zi Yan <zi.yan@sent.com>
X-Patchwork-Id: 12743660
From: Zi Yan <zi.yan@sent.com>
To: David Hildenbrand, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Michael Ellerman, Christoph Hellwig,
 Marek Szyprowski, Robin Murphy, linuxppc-dev@lists.ozlabs.org,
 virtualization@lists.linux-foundation.org, iommu@lists.linux-foundation.org,
 Vlastimil Babka, Mel Gorman, Eric Ren, Mike Rapoport, Oscar Salvador,
 Zi Yan
Subject: [PATCH v5 4/6] mm: cma: use pageblock_order as the single alignment
Date: Fri, 11 Feb 2022 11:41:33 -0500
Message-Id: <20220211164135.1803616-5-zi.yan@sent.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220211164135.1803616-1-zi.yan@sent.com>
References: <20220211164135.1803616-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

Now that alloc_contig_range() works at pageblock granularity, change
CMA allocation, which uses alloc_contig_range(), to use pageblock_order
as its only alignment requirement.

Signed-off-by: Zi Yan
---
 include/linux/mmzone.h  | 5 +----
 kernel/dma/contiguous.c | 2 +-
 mm/cma.c                | 6 ++----
 mm/page_alloc.c         | 4 ++--
 4 files changed, 6 insertions(+), 11 deletions(-)
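Not part of the patch, just a note for reviewers: a minimal user-space
sketch of what the relaxed alignment means in practice. PAGE_SIZE_BYTES,
MAX_ORDER, and PAGEBLOCK_ORDER below are hard-coded to common x86_64
defaults purely for illustration; the real values depend on the
architecture and kernel config.

    #include <stdio.h>

    /* Illustrative assumptions only: 4KiB pages, MAX_ORDER of 11, and a
     * pageblock_order of 9 (the 2MiB huge page order on x86_64). These
     * are not read from any kernel build. */
    #define PAGE_SIZE_BYTES  4096UL
    #define MAX_ORDER        11
    #define PAGEBLOCK_ORDER  9UL

    int main(void)
    {
            /* Before this patch: CMA regions had to be aligned to the
             * larger of MAX_ORDER - 1 and pageblock_order. */
            unsigned long old_order = MAX_ORDER - 1 > PAGEBLOCK_ORDER ?
                                      MAX_ORDER - 1 : PAGEBLOCK_ORDER;
            unsigned long old_align = PAGE_SIZE_BYTES << old_order;

            /* After: a single pageblock suffices, since
             * alloc_contig_range() now works at pageblock granularity. */
            unsigned long new_align = PAGE_SIZE_BYTES << PAGEBLOCK_ORDER;

            printf("old minimum alignment: %lu MiB\n", old_align >> 20);
            printf("new minimum alignment: %lu MiB\n", new_align >> 20);
            return 0;
    }

On such a config the minimum CMA alignment drops from 4MiB to 2MiB, so
small reservations lose less memory to alignment padding.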
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3fff6deca2c0..da38c8436493 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -54,10 +54,7 @@ enum migratetype {
 	 *
 	 * The way to use it is to change migratetype of a range of
 	 * pageblocks to MIGRATE_CMA which can be done by
-	 * __free_pageblock_cma() function. What is important though
-	 * is that a range of pageblocks must be aligned to
-	 * MAX_ORDER_NR_PAGES should biggest page be bigger than
-	 * a single pageblock.
+	 * __free_pageblock_cma() function.
 	 */
 	MIGRATE_CMA,
 #endif
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..ac35b14b0786 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -399,7 +399,7 @@ static const struct reserved_mem_ops rmem_cma_ops = {
 
 static int __init rmem_cma_setup(struct reserved_mem *rmem)
 {
-	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t align = PAGE_SIZE << pageblock_order;
 	phys_addr_t mask = align - 1;
 	unsigned long node = rmem->fdt_node;
 	bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
diff --git a/mm/cma.c b/mm/cma.c
index 766f1b82b532..b2e927fab7b5 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -187,8 +187,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 		return -EINVAL;
 
 	/* ensure minimal alignment required by mm core */
-	alignment = PAGE_SIZE <<
-			max_t(unsigned long, MAX_ORDER - 1, pageblock_order);
+	alignment = PAGE_SIZE << pageblock_order;
 
 	/* alignment should be aligned with order_per_bit */
 	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
@@ -275,8 +274,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	 * migratetype page by page allocator's buddy algorithm. In the case,
 	 * you couldn't get a contiguous memory, which is not what we want.
 	 */
-	alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
-			  max_t(unsigned long, MAX_ORDER - 1, pageblock_order));
+	alignment = max(alignment, (phys_addr_t)PAGE_SIZE << pageblock_order);
 	if (fixed && base & (alignment - 1)) {
 		ret = -EINVAL;
 		pr_err("Region at %pa must be aligned to %pa bytes\n",
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7a4fa21aea5c..ac9432e63ce1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9214,8 +9214,8 @@ int isolate_single_pageblock(unsigned long boundary_pfn, gfp_t gfp_flags,
  *	be either of the two.
  * @gfp_mask:	GFP mask to use during compaction
  *
- * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
- * aligned.  The PFN range must belong to a single zone.
+ * The PFN range does not have to be pageblock aligned. The PFN range must
+ * belong to a single zone.
  *
  * The first thing this routine does is attempt to MIGRATE_ISOLATE all
  * pageblocks in the range.  Once isolated, the pageblocks should not