From patchwork Tue May 21 13:03:15 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13669396
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song,
 Matthew Wilcox, David Hildenbrand,
 Lance Yang, Vishal Moola, Kefeng Wang
Subject: [PATCH v2] mm: refactor folio_undo_large_rmappable()
Date: Tue, 21 May 2024 21:03:15 +0800
Message-ID: <20240521130315.46072-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.41.0

Folios of order <= 1 are never on the deferred split list. The order
check was added to folio_undo_large_rmappable() by commit 8897277acfef
("mm: support order-1 folios in the page cache"), but the small-folio
(order 0) check is repeated on every call of
folio_undo_large_rmappable(), so keep only the folio_order() check
inside the function.

In addition, move all the checks into the header file to save a
function call for folios that are not large-rmappable or whose
deferred_list is empty.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Vishal Moola (Oracle)
---
v2:
- update changelog, per Lance and Vishal

 mm/huge_memory.c | 13 +------------
 mm/internal.h    | 17 ++++++++++++++++-
 mm/memcontrol.c  |  3 +--
 mm/page_alloc.c  |  3 +--
 mm/swap.c        |  8 ++------
 mm/vmscan.c      |  8 ++------
 6 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 317de2afd371..f9dbdc878136 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3182,22 +3182,11 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
-void folio_undo_large_rmappable(struct folio *folio)
+void __folio_undo_large_rmappable(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	if (folio_order(folio) <= 1)
-		return;
-
-	/*
-	 * At this point, there is no one trying to add the folio to
-	 * deferred_list. If folio is not in deferred_list, it's safe
-	 * to check without acquiring the split_queue_lock.
-	 */
-	if (data_race(list_empty(&folio->_deferred_list)))
-		return;
-
 	ds_queue = get_deferred_split_queue(folio);
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
diff --git a/mm/internal.h b/mm/internal.h
index b2c75b12014e..447171d171ce 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -605,7 +605,22 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 #endif
 }
 
-void folio_undo_large_rmappable(struct folio *folio);
+void __folio_undo_large_rmappable(struct folio *folio);
+static inline void folio_undo_large_rmappable(struct folio *folio)
+{
+	if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+		return;
+
+	/*
+	 * At this point, there is no one trying to add the folio to
+	 * deferred_list. If folio is not in deferred_list, it's safe
+	 * to check without acquiring the split_queue_lock.
+	 */
+	if (data_race(list_empty(&folio->_deferred_list)))
+		return;
+
+	__folio_undo_large_rmappable(folio);
+}
 
 static inline struct folio *page_rmappable_folio(struct page *page)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f85925da5687..1b80d2660c6e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7873,8 +7873,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	 * In addition, the old folio is about to be freed after migration, so
 	 * removing from the split queue a bit earlier seems reasonable.
 	 */
-	if (folio_test_large(old) && folio_test_large_rmappable(old))
-		folio_undo_large_rmappable(old);
+	folio_undo_large_rmappable(old);
 
 	old->memcg_data = 0;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd584aace6bf..b1e3eb5787de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2645,8 +2645,7 @@ void free_unref_folios(struct folio_batch *folios)
 		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = folio_order(folio);
 
-		if (order > 0 && folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (!free_pages_prepare(&folio->page, order))
 			continue;
 		/*
diff --git a/mm/swap.c b/mm/swap.c
index 67786cb77130..dc205bdfbbd4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -123,8 +123,7 @@ void __folio_put(struct folio *folio)
 	}
 
 	page_cache_release(folio);
-	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
-		folio_undo_large_rmappable(folio);
+	folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
 	free_unref_page(&folio->page, folio_order(folio));
 }
@@ -1002,10 +1001,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			free_huge_folio(folio);
 			continue;
 		}
-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
-
+		folio_undo_large_rmappable(folio);
 		__page_cache_release(folio, &lruvec, &flags);
 
 		if (j != i)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6981a71c8ef0..615d2422d0e4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1454,9 +1454,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 */
 		nr_reclaimed += nr_pages;
 
-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1863,9 +1861,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 		if (unlikely(folio_put_testzero(folio))) {
 			__folio_clear_lru_flags(folio);
 
-			if (folio_test_large(folio) &&
-			    folio_test_large_rmappable(folio))
-				folio_undo_large_rmappable(folio);
+			folio_undo_large_rmappable(folio);
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);
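
A note for readers following the refactor: the patch is an instance of the
usual kernel split between a static inline fast path in a header (cheap
checks only) and an out-of-line "__"-prefixed slow path that takes the lock.
Below is a minimal standalone C sketch of that shape. It is not kernel code:
struct folio_stub, undo_large_rmappable() and __undo_large_rmappable() are
simplified stand-ins for the real folio/deferred-split machinery; only the
order, large-rmappable and empty-list checks mirror the patch.

/*
 * Standalone sketch (not kernel code): a static inline wrapper performs the
 * cheap checks, and only the rare case falls through to the out-of-line
 * slow path that would take the split-queue lock.
 */
#include <stdbool.h>
#include <stdio.h>

struct folio_stub {
	unsigned int order;		/* 0 == single page */
	bool large_rmappable;
	bool on_deferred_list;
};

/* Out-of-line slow path: stands in for __folio_undo_large_rmappable(). */
static void __undo_large_rmappable(struct folio_stub *folio)
{
	printf("slow path: lock split queue, unlink order-%u folio\n",
	       folio->order);
	folio->on_deferred_list = false;
}

/* Inline fast path: mirrors the checks the patch moves into mm/internal.h. */
static inline void undo_large_rmappable(struct folio_stub *folio)
{
	if (folio->order <= 1 || !folio->large_rmappable)
		return;			/* never on the deferred list */
	if (!folio->on_deferred_list)
		return;			/* nothing queued, skip the call */
	__undo_large_rmappable(folio);
}

int main(void)
{
	struct folio_stub small = { .order = 0 };
	struct folio_stub queued = { .order = 4, .large_rmappable = true,
				     .on_deferred_list = true };

	undo_large_rmappable(&small);	/* inline checks bail out early */
	undo_large_rmappable(&queued);	/* reaches the out-of-line slow path */
	return 0;
}

The point of the split is that the common cases (order <= 1 folios, or large
folios that were never added to the deferred list) now cost only a few inline
tests instead of a call into huge_memory.c, while the locking stays out of
line.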