From patchwork Fri Feb 28 09:52:17 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13996060
Date: Fri, 28 Feb 2025 09:52:17 +0000
In-Reply-To: <20250228-clarify-steal-v4-0-cb2ef1a4e610@google.com>
References: <20250228-clarify-steal-v4-0-cb2ef1a4e610@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250228-clarify-steal-v4-1-cb2ef1a4e610@google.com>
Subject: [PATCH v4 1/2] mm/page_alloc: Clarify terminology in migratetype fallback code
From: Brendan Jackman
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Michal Hocko, Johannes Weiner,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman,
 Yosry Ahmed

This code is rather confusing because:

1. "Steal" is sometimes used to refer to the general concept of
   allocating from a block of a fallback migratetype
   (steal_suitable_fallback()) but sometimes it refers specifically to
   converting a whole block's migratetype (can_steal_fallback()).

2. can_steal_fallback() sounds as though it's answering the question
   "am I functionally permitted to allocate from that other type" but
   in fact it is encoding a heuristic preference.

3. The same piece of data has different names in different places:
   can_steal vs whole_block. This reinforces point 2 because it looks
   like the different names reflect a shift in intent from "am I
   allowed to steal" to "do I want to steal", but no such shift exists.

Fix 1. by avoiding the term "steal" in ambiguous contexts. Start using
the term "claim" to refer to the special case of stealing the entire
block.

Fix 2. by using "should" instead of "can", and also rename its
parameters and add some commentary to make it more explicit what they
mean.

Fix 3. by adopting the new "claim" terminology universally for this
set of variables.

Reviewed-by: Vlastimil Babka
Signed-off-by: Brendan Jackman
---
 mm/compaction.c |  4 ++--
 mm/internal.h   |  2 +-
 mm/page_alloc.c | 72 ++++++++++++++++++++++++++++-----------------------------
 3 files changed, 39 insertions(+), 39 deletions(-)
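
For quick reference, the renames this patch makes (summarised from the
diff below):

    can_steal_fallback()  ->  should_try_claim_block()
    try_to_steal_block()  ->  try_to_claim_block()
    only_stealable        ->  only_claim (spelled claim_only in the
                              mm/internal.h prototype)
    can_steal             ->  claim_block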

diff --git a/mm/compaction.c b/mm/compaction.c
index 0992106d4ea751f7f1f8ebf7c75cd433d676cbe0..550ce50218075509ccb5f9485fd84f5d1f3d23a7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2333,7 +2333,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 	ret = COMPACT_NO_SUITABLE_PAGE;
 	for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
 		struct free_area *area = &cc->zone->free_area[order];
-		bool can_steal;
+		bool claim_block;
 		/* Job done if page is free of the right migratetype */
 		if (!free_area_empty(area, migratetype))
@@ -2350,7 +2350,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * other migratetype buddy lists.
 		 */
 		if (find_suitable_fallback(area, order, migratetype,
-					true, &can_steal) != -1)
+					true, &claim_block) != -1)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index b07550db2bfd1d152fa90f91b3687b0fa1a9f653..aa30282a774ae26349944a75da854ae6a3da2a98 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -863,7 +863,7 @@ static inline void init_cma_pageblock(struct page *page)
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal);
+			int migratetype, bool claim_only, bool *claim_block);
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d8e274c8b1d500d263a17ef36fe190f60b88196..441c9d9cc5f8edbae1f4169207f5de9f32586f34 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1942,22 +1942,22 @@ static inline bool boost_watermark(struct zone *zone)
 /*
  * When we are falling back to another migratetype during allocation, try to
- * steal extra free pages from the same pageblocks to satisfy further
- * allocations, instead of polluting multiple pageblocks.
+ * claim entire blocks to satisfy further allocations, instead of polluting
+ * multiple pageblocks.
  *
- * If we are stealing a relatively large buddy page, it is likely there will
- * be more free pages in the pageblock, so try to steal them all. For
- * reclaimable and unmovable allocations, we steal regardless of page size,
- * as fragmentation caused by those allocations polluting movable pageblocks
- * is worse than movable allocations stealing from unmovable and reclaimable
- * pageblocks.
+ * If we are stealing a relatively large buddy page, it is likely there will be
+ * more free pages in the pageblock, so try to claim the whole block. For
+ * reclaimable and unmovable allocations, we try to claim the whole block
+ * regardless of page size, as fragmentation caused by those allocations
+ * polluting movable pageblocks is worse than movable allocations stealing from
+ * unmovable and reclaimable pageblocks.
  */
-static bool can_steal_fallback(unsigned int order, int start_mt)
+static bool should_try_claim_block(unsigned int order, int start_mt)
 {
 	/*
 	 * Leaving this order check is intended, although there is
 	 * relaxed order check in next check. The reason is that
-	 * we can actually steal whole pageblock if this condition met,
+	 * we can actually claim the whole pageblock if this condition met,
 	 * but, below check doesn't guarantee it and that is just heuristic
 	 * so could be changed anytime.
 	 */
@@ -1970,7 +1970,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 	 * reclaimable pages that are closest to the request size. After a
 	 * while, memory compaction may occur to form large contiguous pages,
 	 * and the next movable allocation may not need to steal. Unmovable and
-	 * reclaimable allocations need to actually steal pages.
+	 * reclaimable allocations need to actually claim the whole block.
 	 */
 	if (order >= pageblock_order / 2 ||
 	    start_mt == MIGRATE_RECLAIMABLE ||
@@ -1983,12 +1983,14 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 /*
  * Check whether there is a suitable fallback freepage with requested order.
- * If only_stealable is true, this function returns fallback_mt only if
- * we can steal other freepages all together. This would help to reduce
+ * Sets *claim_block to instruct the caller whether it should convert a whole
+ * pageblock to the returned migratetype.
+ * If only_claim is true, this function returns fallback_mt only if
+ * we would do this whole-block claiming. This would help to reduce
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal)
+			int migratetype, bool only_claim, bool *claim_block)
 {
 	int i;
 	int fallback_mt;
@@ -1996,19 +1998,16 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	if (area->nr_free == 0)
 		return -1;
-	*can_steal = false;
+	*claim_block = false;
 	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
 		fallback_mt = fallbacks[migratetype][i];
 		if (free_area_empty(area, fallback_mt))
 			continue;
-		if (can_steal_fallback(order, migratetype))
-			*can_steal = true;
+		if (should_try_claim_block(order, migratetype))
+			*claim_block = true;
-		if (!only_stealable)
-			return fallback_mt;
-
-		if (*can_steal)
+		if (*claim_block || !only_claim)
 			return fallback_mt;
 	}
@@ -2016,14 +2015,14 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 }
 /*
- * This function implements actual steal behaviour. If order is large enough, we
- * can claim the whole pageblock for the requested migratetype. If not, we check
- * the pageblock for constituent pages; if at least half of the pages are free
- * or compatible, we can still claim the whole block, so pages freed in the
- * future will be put on the correct free list.
+ * This function implements actual block claiming behaviour. If order is large
+ * enough, we can claim the whole pageblock for the requested migratetype. If
+ * not, we check the pageblock for constituent pages; if at least half of the
+ * pages are free or compatible, we can still claim the whole block, so pages
+ * freed in the future will be put on the correct free list.
  */
 static struct page *
-try_to_steal_block(struct zone *zone, struct page *page,
+try_to_claim_block(struct zone *zone, struct page *page,
 		int current_order, int order, int start_type,
 		int block_type, unsigned int alloc_flags)
 {
@@ -2091,11 +2090,12 @@ try_to_steal_block(struct zone *zone, struct page *page,
 /*
  * Try finding a free buddy page on the fallback list.
  *
- * This will attempt to steal a whole pageblock for the requested type
+ * This will attempt to claim a whole pageblock for the requested type
  * to ensure grouping of such requests in the future.
  *
- * If a whole block cannot be stolen, regress to __rmqueue_smallest()
- * logic to at least break up as little contiguity as possible.
+ * If a whole block cannot be claimed, steal an individual page, regressing to
+ * __rmqueue_smallest() logic to at least break up as little contiguity as
+ * possible.
  *
  * The use of signed ints for order and current_order is a deliberate
  * deviation from the rest of this file, to make the for loop
@@ -2112,7 +2112,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	int min_order = order;
 	struct page *page;
 	int fallback_mt;
-	bool can_steal;
+	bool claim_block;
 	/*
 	 * Do not steal pages from freelists belonging to other pageblocks
@@ -2131,15 +2131,15 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 			--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+				start_migratetype, false, &claim_block);
 		if (fallback_mt == -1)
 			continue;
-		if (!can_steal)
+		if (!claim_block)
 			break;
 		page = get_page_from_free_area(area, fallback_mt);
-		page = try_to_steal_block(zone, page, current_order, order,
+		page = try_to_claim_block(zone, page, current_order, order,
 				start_migratetype, fallback_mt, alloc_flags);
 		if (page)
@@ -2149,11 +2149,11 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	if (alloc_flags & ALLOC_NOFRAGMENT)
 		return NULL;
-	/* No luck stealing blocks. Find the smallest fallback page */
+	/* No luck claiming pageblock. Find the smallest fallback page */
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+				start_migratetype, false, &claim_block);
 		if (fallback_mt == -1)
 			continue;
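
To make the updated contract concrete, here is a minimal caller sketch
(illustrative only, not part of the patch; it assumes the same locals
as __rmqueue_fallback() above): find_suitable_fallback() returns a
usable fallback migratetype and sets *claim_block to say whether the
heuristic recommends converting the whole pageblock; the caller then
either claims the block via try_to_claim_block() or steals a single
page.

	bool claim_block;
	int fallback_mt;
	struct page *page;

	fallback_mt = find_suitable_fallback(area, current_order,
			start_migratetype, false, &claim_block);
	if (fallback_mt == -1)
		return NULL;	/* no fallback freepage at this order */

	if (claim_block) {
		/* Heuristic: convert the whole pageblock to start_migratetype. */
		page = get_page_from_free_area(area, fallback_mt);
		page = try_to_claim_block(zone, page, current_order, order,
				start_migratetype, fallback_mt, alloc_flags);
		if (page)
			return page;
	}
	/* Otherwise (or if claiming failed), steal only a single page. */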

From patchwork Fri Feb 28 09:52:18 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13996061
Date: Fri, 28 Feb 2025 09:52:18 +0000
In-Reply-To: <20250228-clarify-steal-v4-0-cb2ef1a4e610@google.com>
References: <20250228-clarify-steal-v4-0-cb2ef1a4e610@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250228-clarify-steal-v4-2-cb2ef1a4e610@google.com>
Subject: [PATCH v4 2/2] mm/page_alloc: Clarify should_claim_block() commentary
From: Brendan Jackman
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Michal Hocko, Johannes Weiner,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman,
 Yosry Ahmed

There's lots of text here but it's a little hard to follow. This is an
attempt to break it up and align its structure more closely with the
code.

Reword the top-level function comment to just explain what question the
function answers from the point of view of the caller.

Break up the internal logic into different sections that can have their
own commentary describing why that part of the rationale is present.

Note that the page_group_by_mobility_disabled logic is not explained in
the commentary; that is outside the scope of this patch...

Signed-off-by: Brendan Jackman
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 20 deletions(-)
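
After this patch, below the pre-existing order check at the top of the
function, should_try_claim_block() reduces to a sequence of
independently commented checks, applied in this order (summarised from
the diff below):

    1. order >= pageblock_order / 2                          -> claim
    2. start_mt is MIGRATE_RECLAIMABLE or MIGRATE_UNMOVABLE  -> claim
    3. page_group_by_mobility_disabled                       -> claim
    4. otherwise                                             -> do not claim;
       steal individual pages only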

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 441c9d9cc5f8edbae1f4169207f5de9f32586f34..17ba5d758aa539370947b6f894e5fce7de6c5c5e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1941,16 +1941,9 @@ static inline bool boost_watermark(struct zone *zone)
 }
 /*
- * When we are falling back to another migratetype during allocation, try to
- * claim entire blocks to satisfy further allocations, instead of polluting
- * multiple pageblocks.
- *
- * If we are stealing a relatively large buddy page, it is likely there will be
- * more free pages in the pageblock, so try to claim the whole block. For
- * reclaimable and unmovable allocations, we try to claim the whole block
- * regardless of page size, as fragmentation caused by those allocations
- * polluting movable pageblocks is worse than movable allocations stealing from
- * unmovable and reclaimable pageblocks.
+ * When we are falling back to another migratetype during allocation, should we
+ * try to claim an entire block to satisfy further allocations, instead of
+ * polluting multiple pageblocks?
  */
 static bool should_try_claim_block(unsigned int order, int start_mt)
 {
@@ -1965,19 +1958,32 @@ static bool should_try_claim_block(unsigned int order, int start_mt)
 		return true;
 	/*
-	 * Movable pages won't cause permanent fragmentation, so when you alloc
-	 * small pages, you just need to temporarily steal unmovable or
-	 * reclaimable pages that are closest to the request size. After a
-	 * while, memory compaction may occur to form large contiguous pages,
-	 * and the next movable allocation may not need to steal. Unmovable and
-	 * reclaimable allocations need to actually claim the whole block.
+	 * Above a certain threshold, always try to claim, as it's likely there
+	 * will be more free pages in the pageblock.
 	 */
-	if (order >= pageblock_order / 2 ||
-	    start_mt == MIGRATE_RECLAIMABLE ||
-	    start_mt == MIGRATE_UNMOVABLE ||
-	    page_group_by_mobility_disabled)
+	if (order >= pageblock_order / 2)
 		return true;
+	/*
+	 * Unmovable/reclaimable allocations would cause permanent
+	 * fragmentations if they fell back to allocating from a movable block
+	 * (polluting it), so we try to claim the whole block regardless of the
+	 * allocation size. Later movable allocations can always steal from this
+	 * block, which is less problematic.
+	 */
+	if (start_mt == MIGRATE_RECLAIMABLE || start_mt == MIGRATE_UNMOVABLE)
+		return true;
+
+	if (page_group_by_mobility_disabled)
+		return true;
+
+	/*
+	 * Movable pages won't cause permanent fragmentation, so when you alloc
+	 * small pages, we just need to temporarily steal unmovable or
+	 * reclaimable pages that are closest to the request size. After a
+	 * while, memory compaction may occur to form large contiguous pages,
+	 * and the next movable allocation may not need to steal.
+	 */
 	return false;
 }
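
As a worked example of the size threshold above (assuming 4 KiB base
pages and the common pageblock_order of 9, i.e. 2 MiB pageblocks; both
values are architecture- and config-dependent): pageblock_order / 2 == 4,
so a movable allocation of order >= 4 (64 KiB and larger) will try to
claim the whole pageblock when it falls back, while smaller movable
allocations only steal individual pages. Unmovable and reclaimable
allocations try to claim the block at any order.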