From patchwork Mon Feb 24 12:37:28 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13987983
Date: Mon, 24 Feb 2025 12:37:28 +0000
In-Reply-To: <20250224-clarify-steal-v2-0-be24da656764@google.com>
References: <20250224-clarify-steal-v2-0-be24da656764@google.com>
Message-ID: <20250224-clarify-steal-v2-1-be24da656764@google.com>
Subject: [PATCH v2 1/2] mm/page_alloc: Clarify terminology in migratetype fallback code
From: Brendan Jackman
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Michal Hocko, Johannes Weiner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman, Yosry Ahmed
This code is rather confusing because:

1. "Steal" is sometimes used to refer to the general concept of
   allocating from a block of a fallback migratetype
   (steal_suitable_fallback()) but sometimes it refers specifically to
   converting a whole block's migratetype (can_steal_fallback()).

2. can_steal_fallback() sounds as though it's answering the question
   "am I functionally permitted to allocate from that other type" but
   in fact it is encoding a heuristic preference.

3. The same piece of data has different names in different places:
   can_steal vs whole_block. This reinforces point 2 because it looks
   like the different names reflect a shift in intent from "am I
   allowed to steal" to "do I want to steal", but no such shift
   exists.

Fix 1. by avoiding the term "steal" in ambiguous contexts. Start using
the term "claim" to refer to the special case of stealing the entire
block.

Fix 2. by using "should" instead of "can", and also rename its
parameters and add some commentary to make it more explicit what they
mean.

Fix 3. by adopting the new "claim" terminology universally for this
set of variables.

Signed-off-by: Brendan Jackman
---
 mm/compaction.c |  4 ++--
 mm/internal.h   |  2 +-
 mm/page_alloc.c | 65 ++++++++++++++++++++++++++++-----------------------------
 3 files changed, 35 insertions(+), 36 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 12ed8425fa175c5dec50bac3dddb13499abaaa11..4609df1f6fb3feb274ef451a0dabcb5c4a11ac76 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2332,7 +2332,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 	ret = COMPACT_NO_SUITABLE_PAGE;
 	for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
 		struct free_area *area = &cc->zone->free_area[order];
-		bool can_steal;
+		bool claim_block;
 
 		/* Job done if page is free of the right migratetype */
 		if (!free_area_empty(area, migratetype))
@@ -2349,7 +2349,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * other migratetype buddy lists.
 		 */
 		if (find_suitable_fallback(area, order, migratetype,
-						true, &can_steal) != -1)
+						true, &claim_block) != -1)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11f8b399f6bac42eab078cd51e01a5..c22d2826fd8d8681c89bb783ed269cc9346b5d92 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -847,7 +847,7 @@ void init_cma_reserved_pageblock(struct page *page);
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal);
+			int migratetype, bool need_whole_block, bool *whole_block);
 
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 579789600a3c7bfb7b0d847d51af702a9d4b139a..50d6c503474fa4c1d21b5bf5dbfd3eb0eef2c415 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1827,22 +1827,22 @@ static void change_pageblock_range(struct page *pageblock_page,
 
 /*
  * When we are falling back to another migratetype during allocation, try to
- * steal extra free pages from the same pageblocks to satisfy further
- * allocations, instead of polluting multiple pageblocks.
+ * claim entire blocks to satisfy further allocations, instead of polluting
+ * multiple pageblocks.
  *
- * If we are stealing a relatively large buddy page, it is likely there will
- * be more free pages in the pageblock, so try to steal them all. For
- * reclaimable and unmovable allocations, we steal regardless of page size,
- * as fragmentation caused by those allocations polluting movable pageblocks
- * is worse than movable allocations stealing from unmovable and reclaimable
- * pageblocks.
+ * If we are stealing a relatively large buddy page, it is likely there will be
+ * more free pages in the pageblock, so try to claim the whole block. For
+ * reclaimable and unmovable allocations, we claim the whole block regardless of
+ * page size, as fragmentation caused by those allocations polluting movable
+ * pageblocks is worse than movable allocations stealing from unmovable and
+ * reclaimable pageblocks.
  */
-static bool can_steal_fallback(unsigned int order, int start_mt)
+static bool should_claim_block(unsigned int order, int start_mt)
 {
 	/*
 	 * Leaving this order check is intended, although there is
 	 * relaxed order check in next check. The reason is that
-	 * we can actually steal whole pageblock if this condition met,
+	 * we can actually claim the whole pageblock if this condition met,
 	 * but, below check doesn't guarantee it and that is just heuristic
 	 * so could be changed anytime.
 	 */
@@ -1855,7 +1855,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 	 * reclaimable pages that are closest to the request size. After a
 	 * while, memory compaction may occur to form large contiguous pages,
 	 * and the next movable allocation may not need to steal. Unmovable and
-	 * reclaimable allocations need to actually steal pages.
+	 * reclaimable allocations need to actually claim the whole block.
	 */
 	if (order >= pageblock_order / 2 ||
 	    start_mt == MIGRATE_RECLAIMABLE ||
@@ -1948,7 +1948,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
 		set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
 
-	/* We are not allowed to try stealing from the whole block */
+	/* No point in claiming the whole block */
 	if (!whole_block)
 		goto single_page;
 
@@ -1995,12 +1995,14 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 
 /*
  * Check whether there is a suitable fallback freepage with requested order.
- * If only_stealable is true, this function returns fallback_mt only if
- * we can steal other freepages all together. This would help to reduce
+ * Sets *claim_block to instruct the caller whether it should convert a whole
+ * pageblock to the returned migratetype.
+ * If only_claim is true, this function returns fallback_mt only if
+ * we would do this whole-block claiming. This would help to reduce
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal)
+			int migratetype, bool only_claim, bool *claim_block)
 {
 	int i;
 	int fallback_mt;
@@ -2008,19 +2010,16 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	if (area->nr_free == 0)
 		return -1;
 
-	*can_steal = false;
+	*claim_block = false;
 	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
 		fallback_mt = fallbacks[migratetype][i];
 		if (free_area_empty(area, fallback_mt))
 			continue;
 
-		if (can_steal_fallback(order, migratetype))
-			*can_steal = true;
+		if (should_claim_block(order, migratetype))
+			*claim_block = true;
 
-		if (!only_stealable)
-			return fallback_mt;
-
-		if (*can_steal)
+		if (*claim_block || !only_claim)
 			return fallback_mt;
 	}
 
@@ -2190,7 +2189,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	int min_order = order;
 	struct page *page;
 	int fallback_mt;
-	bool can_steal;
+	bool claim_block;
 
 	/*
 	 * Do not steal pages from freelists belonging to other pageblocks
@@ -2209,19 +2208,19 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 			--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+				start_migratetype, false, &claim_block);
 		if (fallback_mt == -1)
 			continue;
 
 		/*
-		 * We cannot steal all free pages from the pageblock and the
-		 * requested migratetype is movable. In that case it's better to
-		 * steal and split the smallest available page instead of the
-		 * largest available page, because even if the next movable
-		 * allocation falls back into a different pageblock than this
-		 * one, it won't cause permanent fragmentation.
+		 * We are not gonna claim the pageblock and the requested
+		 * migratetype is movable. In that case it's better to steal and
+		 * split the smallest available page instead of the largest
+		 * available page, because even if the next movable allocation
+		 * falls back into a different pageblock than this one, it won't
+		 * cause permanent fragmentation.
		 */
-		if (!can_steal && start_migratetype == MIGRATE_MOVABLE
+		if (!claim_block && start_migratetype == MIGRATE_MOVABLE
					&& current_order > order)
 			goto find_smallest;
 
@@ -2234,7 +2233,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+				start_migratetype, false, &claim_block);
 		if (fallback_mt != -1)
 			break;
 	}
@@ -2250,7 +2249,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	/* take off list, maybe claim block, expand remainder */
 	page = steal_suitable_fallback(zone, page, current_order, order,
-			start_migratetype, alloc_flags, can_steal);
+			start_migratetype, alloc_flags, claim_block);
 
 	trace_mm_page_alloc_extfrag(page, order, current_order,
 		start_migratetype, fallback_mt);

From patchwork Mon Feb 24 12:37:29 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13987984
Date: Mon, 24 Feb 2025 12:37:29 +0000
In-Reply-To: <20250224-clarify-steal-v2-0-be24da656764@google.com>
References: <20250224-clarify-steal-v2-0-be24da656764@google.com>
Message-ID: <20250224-clarify-steal-v2-2-be24da656764@google.com>
Subject: [PATCH v2 2/2] mm/page_alloc: Clarify should_claim_block() commentary
From: Brendan Jackman
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Michal Hocko, Johannes Weiner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman, Yosry Ahmed
There's lots of text here but it's a little hard to follow; this is an
attempt to break it up and align its structure more closely with the
code.
Reword the top-level function comment to just explain what question the
function answers from the point of view of the caller. Break up the
internal logic into different sections that can have their own
commentary describing why that part of the rationale is present.

Note the page_group_by_mobility_disabled logic is not explained in the
commentary, that is outside the scope of this patch...

Signed-off-by: Brendan Jackman
---
 mm/page_alloc.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 50d6c503474fa4c1d21b5bf5dbfd3eb0eef2c415..547cdba789d8f3f04c5aab04ba7e74cb54c1261b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1826,16 +1826,9 @@ static void change_pageblock_range(struct page *pageblock_page,
 }
 
 /*
- * When we are falling back to another migratetype during allocation, try to
- * claim entire blocks to satisfy further allocations, instead of polluting
- * multiple pageblocks.
- *
- * If we are stealing a relatively large buddy page, it is likely there will be
- * more free pages in the pageblock, so try to claim the whole block. For
- * reclaimable and unmovable allocations, we claim the whole block regardless of
- * page size, as fragmentation caused by those allocations polluting movable
- * pageblocks is worse than movable allocations stealing from unmovable and
- * reclaimable pageblocks.
+ * When we are falling back to another migratetype during allocation, should we
+ * try to claim an entire block to satisfy further allocations, instead of
+ * polluting multiple pageblocks?
  */
 static bool should_claim_block(unsigned int order, int start_mt)
 {
@@ -1849,6 +1842,26 @@ static bool should_claim_block(unsigned int order, int start_mt)
 	if (order >= pageblock_order)
 		return true;
 
+	/*
+	 * Above a certain threshold, always try to claim, as it's likely there
+	 * will be more free pages in the pageblock.
+	 */
+	if (order >= pageblock_order / 2)
+		return true;
+
+	/*
+	 * Unmovable/reclaimable allocations would cause permanent
+	 * fragmentations if they fell back to allocating from a movable block
+	 * (polluting it), so we try to claim the whole block regardless of the
+	 * allocation size. Later movable allocations can always steal from this
+	 * block, which is less problematic.
+	 */
+	if (start_mt == MIGRATE_RECLAIMABLE || start_mt == MIGRATE_UNMOVABLE)
+		return true;
+
+	if (page_group_by_mobility_disabled)
+		return true;
+
 	/*
 	 * Movable pages won't cause permanent fragmentation, so when you alloc
 	 * small pages, you just need to temporarily steal unmovable or
@@ -1857,12 +1870,6 @@ static bool should_claim_block(unsigned int order, int start_mt)
 	 * and the next movable allocation may not need to steal. Unmovable and
 	 * reclaimable allocations need to actually claim the whole block.
	 */
-	if (order >= pageblock_order / 2 ||
-	    start_mt == MIGRATE_RECLAIMABLE ||
-	    start_mt == MIGRATE_UNMOVABLE ||
-	    page_group_by_mobility_disabled)
-		return true;
-
 	return false;
 }