From patchwork Wed Jan 5 21:47:49 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12704782
From: Zi Yan
To: David Hildenbrand, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Michael Ellerman, Christoph Hellwig,
 Marek Szyprowski, Robin Murphy, linuxppc-dev@lists.ozlabs.org,
 virtualization@lists.linux-foundation.org, iommu@lists.linux-foundation.org,
 Vlastimil Babka, Mel Gorman, Eric Ren, Zi Yan
Subject: [RFC PATCH v3 1/8] mm: page_alloc: avoid merging non-fallbackable
 pageblocks with others.
Date: Wed, 5 Jan 2022 16:47:49 -0500
Message-Id: <20220105214756.91065-2-zi.yan@sent.com>
In-Reply-To: <20220105214756.91065-1-zi.yan@sent.com>
References: <20220105214756.91065-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

This is done in addition to the existing MIGRATE_ISOLATE pageblock merge
avoidance. It prepares for the upcoming removal of the MAX_ORDER-1
alignment requirement for CMA and alloc_contig_range().

MIGRATE_HIGHATOMIC should not merge with other migratetypes like
MIGRATE_ISOLATE and MIGRATE_CMA[1], so this commit prevents that too.
Also add MIGRATE_HIGHATOMIC to the fallbacks array for completeness.

[1] https://lore.kernel.org/linux-mm/20211130100853.GP3366@techsingularity.net/

Signed-off-by: Zi Yan
---
 include/linux/mmzone.h |  6 ++++++
 mm/page_alloc.c        | 28 ++++++++++++++++++----------
 2 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aed44e9b5d89..0aa549653e4e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -83,6 +83,12 @@ static inline bool is_migrate_movable(int mt)
 	return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
 }
 
+/* See fallbacks[MIGRATE_TYPES][3] in page_alloc.c */
+static inline bool migratetype_has_fallback(int mt)
+{
+	return mt < MIGRATE_PCPTYPES;
+}
+
 #define for_each_migratetype_order(order, type) \
 	for (order = 0; order < MAX_ORDER; order++) \
 		for (type = 0; type < MIGRATE_TYPES; type++)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8dd6399bafb5..5193c953dbf8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1042,6 +1042,12 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 	return page_is_buddy(higher_page, higher_buddy, order + 1);
 }
 
+static inline bool has_non_fallback_pageblock(struct zone *zone)
+{
+	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
+		zone->nr_reserved_highatomic != 0;
+}
+
 /*
  * Freeing function for a buddy system allocator.
  *
@@ -1117,14 +1123,15 @@ static inline void __free_one_page(struct page *page,
 		}
 		if (order < MAX_ORDER - 1) {
 			/* If we are here, it means order is >= pageblock_order.
-			 * We want to prevent merge between freepages on isolate
-			 * pageblock and normal pageblock. Without this, pageblock
-			 * isolation could cause incorrect freepage or CMA accounting.
+			 * We want to prevent merge between freepages on pageblock
+			 * without fallbacks and normal pageblock. Without this,
+			 * pageblock isolation could cause incorrect freepage or CMA
+			 * accounting or HIGHATOMIC accounting.
 			 *
 			 * We don't want to hit this code for the more frequent
 			 * low-order merging.
 			 */
-			if (unlikely(has_isolate_pageblock(zone))) {
+			if (unlikely(has_non_fallback_pageblock(zone))) {
 				int buddy_mt;
 
 				buddy_pfn = __find_buddy_pfn(pfn, order);
@@ -1132,8 +1139,8 @@ static inline void __free_one_page(struct page *page,
 				buddy_mt = get_pageblock_migratetype(buddy);
 
 				if (migratetype != buddy_mt
-						&& (is_migrate_isolate(migratetype) ||
-							is_migrate_isolate(buddy_mt)))
+						&& (!migratetype_has_fallback(migratetype) ||
+							!migratetype_has_fallback(buddy_mt)))
 					goto done_merging;
 			}
 			max_order = order + 1;
@@ -2484,6 +2491,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
 	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
 	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
 	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
+	[MIGRATE_HIGHATOMIC]  = { MIGRATE_TYPES }, /* Never used */
 #ifdef CONFIG_CMA
 	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
 #endif
@@ -2795,8 +2803,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
-	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
-	    && !is_migrate_cma(mt)) {
+	/* Only reserve normal pageblock */
+	if (migratetype_has_fallback(mt)) {
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
@@ -3545,8 +3553,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
-			    && !is_migrate_highatomic(mt))
+			/* Only change normal pageblock */
+			if (migratetype_has_fallback(mt))
 				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
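
For context on why the single comparison in migratetype_has_fallback() is
enough: the migratetype enum in include/linux/mmzone.h is laid out so that
the three types that may fall back to one another come first, with
MIGRATE_PCPTYPES marking the boundary and MIGRATE_HIGHATOMIC aliasing it.
The minimal userspace sketch below (not part of the patch; the enum mirrors
mmzone.h with the CONFIG_CMA/CONFIG_MEMORY_ISOLATION #ifdefs dropped for
brevity, and the demo scaffolding is hypothetical) shows which types the
new predicate classifies as fallbackable:

/* migratetype_demo.c: standalone sketch, not kernel code. */
#include <stdio.h>
#include <stdbool.h>

/* Mirrors the enum ordering in include/linux/mmzone.h (ifdefs dropped). */
enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* number of types on the per-cpu lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
	MIGRATE_CMA,
	MIGRATE_ISOLATE,
	MIGRATE_TYPES
};

/* Same predicate the patch adds to include/linux/mmzone.h. */
static bool migratetype_has_fallback(int mt)
{
	return mt < MIGRATE_PCPTYPES;
}

int main(void)
{
	/* Indexed by migratetype value; HIGHATOMIC shares value 3. */
	static const char * const names[MIGRATE_TYPES] = {
		"UNMOVABLE", "MOVABLE", "RECLAIMABLE",
		"HIGHATOMIC", "CMA", "ISOLATE",
	};
	int mt;

	for (mt = 0; mt < MIGRATE_TYPES; mt++)
		printf("MIGRATE_%-11s fallbackable: %s\n", names[mt],
		       migratetype_has_fallback(mt) ? "yes" : "no");
	return 0;
}

Built with e.g. "cc migratetype_demo.c && ./a.out", this prints "yes" for
UNMOVABLE/MOVABLE/RECLAIMABLE and "no" for HIGHATOMIC/CMA/ISOLATE, i.e.
exactly the pageblocks that __free_one_page() now refuses to merge across
and that reserve_highatomic_pageblock() and __isolate_free_page() leave
untouched.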