From patchwork Mon Feb 10 16:07:49 2025
X-Patchwork-Submitter: Bertrand Wlodarczyk <bertrand.wlodarczyk@intel.com>
X-Patchwork-Id: 13968151
From: Bertrand Wlodarczyk <bertrand.wlodarczyk@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org
Cc: mhocko@suse.com, dave.hansen@linux.intel.com, apw@canonical.com,
    joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com,
    Bertrand Wlodarczyk <bertrand.wlodarczyk@intel.com>, Tim Chen
Subject: [PATCH] vmscan, cleanup: add for_each_managed_zone_pgdat macro
Date: Mon, 10 Feb 2025 17:07:49 +0100
Message-ID: <20250210160818.686-1-bertrand.wlodarczyk@intel.com>
MIME-Version: 1.0

The macro is introduced to eliminate redundancy in the repeated iteration
over managed zones in the pgdat data structure, reducing the potential for
errors. This change doesn't introduce any functional modifications.

Because the pattern is concentrated in vmscan.c, the macro is placed
locally in that file.

Reviewed-by: Tim Chen
Signed-off-by: Bertrand Wlodarczyk <bertrand.wlodarczyk@intel.com>
---
The checkpatch.pl script reports an ERROR for this patch, "Macros should be
enclosed in a do-while loop", which appears to be a false positive. A
similar patch, ee99c71c59f89, raises the same error. The checkpatch.pl
maintainers have been added to the Cc list to review and investigate.
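For reviewers, a minimal sketch of how a call site reads and roughly how it
expands (illustrative only, not part of the patch; pgdat, highest_zoneidx,
i, zone and nr are stand-in locals). Because the macro is a loop header that
takes its body from the caller, a do { ... } while (0) wrapper would swallow
that body, which is why the checkpatch ERROR looks like a false positive:

	struct zone *zone;
	unsigned long nr = 0;
	int i;

	/* The caller-supplied body runs only for zones where managed_zone(zone) is true. */
	for_each_managed_zone_pgdat(zone, pgdat, i, highest_zoneidx) {
		nr += zone_reclaimable_pages(zone);
	}

	/* The call above expands roughly to:
	 *
	 *	for (i = 0, zone = pgdat->node_zones; i <= highest_zoneidx; i++, zone++)
	 *		if (!managed_zone(zone))
	 *			continue;
	 *		else {
	 *			nr += zone_reclaimable_pages(zone);
	 *		}
	 */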
---
 mm/vmscan.c | 83 +++++++++++++++++++++--------------------------------
 1 file changed, 32 insertions(+), 51 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index c767d71c43d7..2c77bd6f6f2f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -271,6 +271,25 @@ static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
 }
 #endif
 
+/* for_each_managed_zone_pgdat - helper macro to iterate over all managed zones in a pgdat up to
+ * and including the specified highidx
+ * @zone: The current zone in the iterator
+ * @pgdat: The pgdat which node_zones are being iterated
+ * @idx: The index variable
+ * @highidx: The index of the highest zone to return
+ *
+ * This macro iterates through all managed zones up to and including the specified highidx.
+ * The zone iterator enters an invalid state after macro call and must be reinitialized
+ * before it can be used again.
+ */
+#define for_each_managed_zone_pgdat(zone, pgdat, idx, highidx)	\
+	for ((idx) = 0, (zone) = (pgdat)->node_zones;		\
+	    (idx) <= (highidx);					\
+	    (idx)++, (zone)++)					\
+		if (!managed_zone(zone))			\
+			continue;				\
+		else
+
 static void set_task_reclaim_state(struct task_struct *task,
 				   struct reclaim_state *rs)
 {
@@ -396,13 +415,9 @@ static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
 {
 	unsigned long size = 0;
 	int zid;
+	struct zone *zone;
 
-	for (zid = 0; zid <= zone_idx; zid++) {
-		struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];
-
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, lruvec_pgdat(lruvec), zid, zone_idx) {
 		if (!mem_cgroup_disabled())
 			size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
 		else
@@ -495,7 +510,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
 {
 	int reclaimable = 0, write_pending = 0;
 	int i;
-
+	struct zone *zone;
 	/*
 	 * If kswapd is disabled, reschedule if necessary but do not
 	 * throttle as the system is likely near OOM.
@@ -508,12 +523,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
 	 * throttle as throttling will occur when the folios cycle
 	 * towards the end of the LRU if still under writeback.
 	 */
-	for (i = 0; i < MAX_NR_ZONES; i++) {
-		struct zone *zone = pgdat->node_zones + i;
-
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, i, MAX_NR_ZONES - 1) {
 		reclaimable += zone_reclaimable_pages(zone);
 		write_pending += zone_page_state_snapshot(zone,
 						  NR_ZONE_WRITE_PENDING);
@@ -2372,17 +2382,13 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 		unsigned long total_high_wmark = 0;
 		unsigned long free, anon;
 		int z;
+		struct zone *zone;
 
 		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
 		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
			   node_page_state(pgdat, NR_INACTIVE_FILE);
 
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-
-			if (!managed_zone(zone))
-				continue;
-
+		for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
 			total_high_wmark += high_wmark_pages(zone);
 		}
 
@@ -5843,6 +5849,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
 	unsigned long pages_for_compaction;
 	unsigned long inactive_lru_pages;
 	int z;
+	struct zone *zone;
 
 	/* If not in reclaim/compaction mode, stop */
 	if (!in_reclaim_compaction(sc))
@@ -5862,11 +5869,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
 		return false;
 
 	/* If compaction would go ahead or the allocation would succeed, stop */
-	for (z = 0; z <= sc->reclaim_idx; z++) {
-		struct zone *zone = &pgdat->node_zones[z];
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
 		/* Allocation can already succeed, nothing to do */
 		if (zone_watermark_ok(zone, sc->order, min_wmark_pages(zone),
 				      sc->reclaim_idx, 0))
@@ -6393,11 +6396,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
 	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
 		return true;
 
-	for (i = 0; i <= ZONE_NORMAL; i++) {
-		zone = &pgdat->node_zones[i];
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
 		if (!zone_reclaimable_pages(zone))
 			continue;
 
@@ -6702,12 +6701,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 	 * Check watermarks bottom-up as lower zones are more likely to
 	 * meet watermarks.
 	 */
-	for (i = 0; i <= highest_zoneidx; i++) {
-		zone = pgdat->node_zones + i;
-
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, i, highest_zoneidx) {
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
 			mark = promo_wmark_pages(zone);
 		else
@@ -6792,11 +6786,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
 
 	/* Reclaim a number of pages proportional to the number of zones */
 	sc->nr_to_reclaim = 0;
-	for (z = 0; z <= sc->reclaim_idx; z++) {
-		zone = pgdat->node_zones + z;
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
 		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
 	}
 
@@ -6827,12 +6817,7 @@ update_reclaim_active(pg_data_t *pgdat, int highest_zoneidx, bool active)
 	int i;
 	struct zone *zone;
 
-	for (i = 0; i <= highest_zoneidx; i++) {
-		zone = pgdat->node_zones + i;
-
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, i, highest_zoneidx) {
 		if (active)
 			set_bit(ZONE_RECLAIM_ACTIVE, &zone->flags);
 		else
@@ -6893,11 +6878,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 	 * stall or direct reclaim until kswapd is finished.
 	 */
 	nr_boost_reclaim = 0;
-	for (i = 0; i <= highest_zoneidx; i++) {
-		zone = pgdat->node_zones + i;
-		if (!managed_zone(zone))
-			continue;
-
+	for_each_managed_zone_pgdat(zone, pgdat, i, highest_zoneidx) {
 		nr_boost_reclaim += zone->watermark_boost;
 		zone_boosts[i] = zone->watermark_boost;
 	}
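A closing note on the macro's shape (again an illustrative sketch with
stand-in locals such as pgdat and nr, not part of the patch): the trailing
"if (!managed_zone(zone)) continue; else" makes the caller's statement bind
to the else branch, so unmanaged zones are still skipped even when the body
is a single unbraced statement:

	struct zone *zone;
	unsigned long nr = 0;
	int i;

	/* Unbraced single-statement body: it attaches to the macro's else,
	 * so it executes only for managed zones up to and including ZONE_NORMAL.
	 */
	for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL)
		nr += zone_reclaimable_pages(zone);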