From patchwork Wed Sep 28 22:32:57 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 12993204
From: Doug Berger <opendmb@gmail.com>
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
    Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Muchun Song,
    KOSAKI Motohiro, Mel Gorman, Mike Kravetz, Florian Fainelli,
    David Hildenbrand, Oscar Salvador, Michal Hocko, Joonsoo Kim,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Doug Berger
Subject: [PATCH v2 5/9] mm/page_alloc: introduce init_reserved_pageblock()
Date: Wed, 28 Sep 2022 15:32:57 -0700
Message-Id: <20220928223301.375229-6-opendmb@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>
References: <20220928223301.375229-1-opendmb@gmail.com>
MIME-Version: 1.0
Most of the implementation of init_cma_reserved_pageblock() is common
to the initialization of any reserved pageblock for use by the page
allocator. This commit breaks that functionality out into the new
common function init_reserved_pageblock() for use by code other than
CMA.

The CMA-specific code is relocated from page_alloc to the point where
init_cma_reserved_pageblock() was invoked, and the new function is
used there instead. The error path is also updated to use the new
function to operate on pageblocks rather than pages.

Signed-off-by: Doug Berger <opendmb@gmail.com>
---
 include/linux/gfp.h |  5 +----
 mm/cma.c            | 15 +++++++++++----
 mm/page_alloc.c     |  8 ++------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f314be58fa77..71ed687be406 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -367,9 +367,6 @@ extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
 
-#ifdef CONFIG_CMA
-/* CMA stuff */
-extern void init_cma_reserved_pageblock(struct page *page);
-#endif
+extern void init_reserved_pageblock(struct page *page);
 
 #endif /* __LINUX_GFP_H */
diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a..6208a3e1cd9d 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -31,6 +31,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 
 #include "cma.h"
@@ -116,8 +117,13 @@ static void __init cma_activate_area(struct cma *cma)
 	}
 
 	for (pfn = base_pfn; pfn < base_pfn + cma->count;
-	     pfn += pageblock_nr_pages)
-		init_cma_reserved_pageblock(pfn_to_page(pfn));
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		set_pageblock_migratetype(page, MIGRATE_CMA);
+		init_reserved_pageblock(page);
+		page_zone(page)->cma_pages += pageblock_nr_pages;
+	}
 
 	spin_lock_init(&cma->lock);
 
@@ -133,8 +139,9 @@ static void __init cma_activate_area(struct cma *cma)
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
 	if (!cma->reserve_pages_on_error) {
-		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-			free_reserved_page(pfn_to_page(pfn));
+		for (pfn = base_pfn; pfn < base_pfn + cma->count;
+		     pfn += pageblock_nr_pages)
+			init_reserved_pageblock(pfn_to_page(pfn));
 	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 81f97c5ed080..6d4470b0daba 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2302,9 +2302,8 @@ void __init page_alloc_init_late(void)
 		set_zone_contiguous(zone);
 }
 
-#ifdef CONFIG_CMA
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void __init init_cma_reserved_pageblock(struct page *page)
+/* Free whole pageblock */
+void __init init_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
@@ -2314,14 +2313,11 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
 	__free_pages(page, pageblock_order);
 	adjust_managed_page_count(page, pageblock_nr_pages);
-	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
-#endif
 
 /*
  * The order of subdivision here is critical for the IO subsystem.