From patchwork Mon Apr 7 23:42:02 2025
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 14041993
From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
 yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
 shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
 chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
 huang.ying.caritas@gmail.com, ryan.roberts@arm.com, viro@zeniv.linux.org.uk,
 baohua@kernel.org, osalvador@suse.de, lorenzo.stoakes@oracle.com,
 christophe.leroy@csgroup.eu, pavel@kernel.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-pm@vger.kernel.org
Subject: [RFC PATCH 01/14] swapfile: rearrange functions
Date: Mon, 7 Apr 2025 16:42:02 -0700
Message-ID: <20250407234223.1059191-2-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250407234223.1059191-1-nphamcs@gmail.com>
References: <20250407234223.1059191-1-nphamcs@gmail.com>
MIME-Version: 1.0
Rearrange some functions in preparation for the rest of the series. No
functional change intended.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 mm/swapfile.c | 230 +++++++++++++++++++++++++-------------------
 1 file changed, 115 insertions(+), 115 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index df7c4e8b089c..27cf985e08ac 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -124,11 +124,6 @@ static struct swap_info_struct *swap_type_to_swap_info(int type)
 	return READ_ONCE(swap_info[type]); /* rcu_dereference() */
 }
 
-static inline unsigned char swap_count(unsigned char ent)
-{
-	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
-}
-
 /*
  * Use the second highest bit of inuse_pages counter as the indicator
  * if one swap device is on the available plist, so the atomic can
@@ -161,6 +156,11 @@ static long swap_usage_in_pages(struct swap_info_struct *si)
 /* Reclaim directly, bypass the slot cache and don't touch device lock */
 #define TTRS_DIRECT		0x8
 
+static inline unsigned char swap_count(unsigned char ent)
+{
+	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
+}
+
 static bool swap_is_has_cache(struct swap_info_struct *si,
 			      unsigned long offset, int nr_pages)
 {
@@ -1326,46 +1326,6 @@ static struct swap_info_struct *_swap_info_get(swp_entry_t entry)
 	return NULL;
 }
 
-static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
-					      unsigned long offset,
-					      unsigned char usage)
-{
-	unsigned char count;
-	unsigned char has_cache;
-
-	count = si->swap_map[offset];
-
-	has_cache = count & SWAP_HAS_CACHE;
-	count &= ~SWAP_HAS_CACHE;
-
-	if (usage == SWAP_HAS_CACHE) {
-		VM_BUG_ON(!has_cache);
-		has_cache = 0;
-	} else if (count == SWAP_MAP_SHMEM) {
-		/*
-		 * Or we could insist on shmem.c using a special
-		 * swap_shmem_free() and free_shmem_swap_and_cache()...
-		 */
-		count = 0;
-	} else if ((count & ~COUNT_CONTINUED) <= SWAP_MAP_MAX) {
-		if (count == COUNT_CONTINUED) {
-			if (swap_count_continued(si, offset, count))
-				count = SWAP_MAP_MAX | COUNT_CONTINUED;
-			else
-				count = SWAP_MAP_MAX;
-		} else
-			count--;
-	}
-
-	usage = count | has_cache;
-	if (usage)
-		WRITE_ONCE(si->swap_map[offset], usage);
-	else
-		WRITE_ONCE(si->swap_map[offset], SWAP_HAS_CACHE);
-
-	return usage;
-}
-
 /*
  * When we get a swap entry, if there aren't some other ways to
  * prevent swapoff, such as the folio in swap cache is locked, RCU
@@ -1432,6 +1392,46 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	return NULL;
 }
 
+static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
+					      unsigned long offset,
+					      unsigned char usage)
+{
+	unsigned char count;
+	unsigned char has_cache;
+
+	count = si->swap_map[offset];
+
+	has_cache = count & SWAP_HAS_CACHE;
+	count &= ~SWAP_HAS_CACHE;
+
+	if (usage == SWAP_HAS_CACHE) {
+		VM_BUG_ON(!has_cache);
+		has_cache = 0;
+	} else if (count == SWAP_MAP_SHMEM) {
+		/*
+		 * Or we could insist on shmem.c using a special
+		 * swap_shmem_free() and free_shmem_swap_and_cache()...
+		 */
+		count = 0;
+	} else if ((count & ~COUNT_CONTINUED) <= SWAP_MAP_MAX) {
+		if (count == COUNT_CONTINUED) {
+			if (swap_count_continued(si, offset, count))
+				count = SWAP_MAP_MAX | COUNT_CONTINUED;
+			else
+				count = SWAP_MAP_MAX;
+		} else
+			count--;
+	}
+
+	usage = count | has_cache;
+	if (usage)
+		WRITE_ONCE(si->swap_map[offset], usage);
+	else
+		WRITE_ONCE(si->swap_map[offset], SWAP_HAS_CACHE);
+
+	return usage;
+}
+
 static unsigned char __swap_entry_free(struct swap_info_struct *si,
 				       swp_entry_t entry)
 {
@@ -1585,25 +1585,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 	unlock_cluster(ci);
 }
 
-void swapcache_free_entries(swp_entry_t *entries, int n)
-{
-	int i;
-	struct swap_cluster_info *ci;
-	struct swap_info_struct *si = NULL;
-
-	if (n <= 0)
-		return;
-
-	for (i = 0; i < n; ++i) {
-		si = _swap_info_get(entries[i]);
-		if (si) {
-			ci = lock_cluster(si, swp_offset(entries[i]));
-			swap_entry_range_free(si, ci, entries[i], 1);
-			unlock_cluster(ci);
-		}
-	}
-}
-
 int __swap_count(swp_entry_t entry)
 {
 	struct swap_info_struct *si = swp_swap_info(entry);
@@ -1717,57 +1698,6 @@ static bool folio_swapped(struct folio *folio)
 	return swap_page_trans_huge_swapped(si, entry, folio_order(folio));
 }
 
-static bool folio_swapcache_freeable(struct folio *folio)
-{
-	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-
-	if (!folio_test_swapcache(folio))
-		return false;
-	if (folio_test_writeback(folio))
-		return false;
-
-	/*
-	 * Once hibernation has begun to create its image of memory,
-	 * there's a danger that one of the calls to folio_free_swap()
-	 * - most probably a call from __try_to_reclaim_swap() while
-	 * hibernation is allocating its own swap pages for the image,
-	 * but conceivably even a call from memory reclaim - will free
-	 * the swap from a folio which has already been recorded in the
-	 * image as a clean swapcache folio, and then reuse its swap for
-	 * another page of the image. On waking from hibernation, the
-	 * original folio might be freed under memory pressure, then
-	 * later read back in from swap, now with the wrong data.
-	 *
-	 * Hibernation suspends storage while it is writing the image
-	 * to disk so check that here.
-	 */
-	if (pm_suspended_storage())
-		return false;
-
-	return true;
-}
-
-/**
- * folio_free_swap() - Free the swap space used for this folio.
- * @folio: The folio to remove.
- *
- * If swap is getting full, or if there are no more mappings of this folio,
- * then call folio_free_swap to free its swap space.
- *
- * Return: true if we were able to release the swap space.
- */
-bool folio_free_swap(struct folio *folio)
-{
-	if (!folio_swapcache_freeable(folio))
-		return false;
-	if (folio_swapped(folio))
-		return false;
-
-	delete_from_swap_cache(folio);
-	folio_set_dirty(folio);
-	return true;
-}
-
 /**
  * free_swap_and_cache_nr() - Release reference on range of swap entries and
  *			     reclaim their cache if no more references remain.
@@ -1842,6 +1772,76 @@ void free_swap_and_cache_nr(swp_entry_t entry, int nr)
 	put_swap_device(si);
 }
 
+void swapcache_free_entries(swp_entry_t *entries, int n)
+{
+	int i;
+	struct swap_cluster_info *ci;
+	struct swap_info_struct *si = NULL;
+
+	if (n <= 0)
+		return;
+
+	for (i = 0; i < n; ++i) {
+		si = _swap_info_get(entries[i]);
+		if (si) {
+			ci = lock_cluster(si, swp_offset(entries[i]));
+			swap_entry_range_free(si, ci, entries[i], 1);
+			unlock_cluster(ci);
+		}
+	}
+}
+
+static bool folio_swapcache_freeable(struct folio *folio)
+{
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
+	if (!folio_test_swapcache(folio))
+		return false;
+	if (folio_test_writeback(folio))
+		return false;
+
+	/*
+	 * Once hibernation has begun to create its image of memory,
+	 * there's a danger that one of the calls to folio_free_swap()
+	 * - most probably a call from __try_to_reclaim_swap() while
+	 * hibernation is allocating its own swap pages for the image,
+	 * but conceivably even a call from memory reclaim - will free
+	 * the swap from a folio which has already been recorded in the
+	 * image as a clean swapcache folio, and then reuse its swap for
+	 * another page of the image. On waking from hibernation, the
+	 * original folio might be freed under memory pressure, then
+	 * later read back in from swap, now with the wrong data.
+	 *
+	 * Hibernation suspends storage while it is writing the image
+	 * to disk so check that here.
+	 */
+	if (pm_suspended_storage())
+		return false;
+
+	return true;
+}
+
+/**
+ * folio_free_swap() - Free the swap space used for this folio.
+ * @folio: The folio to remove.
+ *
+ * If swap is getting full, or if there are no more mappings of this folio,
+ * then call folio_free_swap to free its swap space.
+ *
+ * Return: true if we were able to release the swap space.
+ */
+bool folio_free_swap(struct folio *folio)
+{
+	if (!folio_swapcache_freeable(folio))
+		return false;
+	if (folio_swapped(folio))
+		return false;
+
+	delete_from_swap_cache(folio);
+	folio_set_dirty(folio);
+	return true;
+}
+
 #ifdef CONFIG_HIBERNATION
 
 swp_entry_t get_swap_page_of_type(int type)