From patchwork Wed Mar 27 14:45:36 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13606753
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Barry Song
Subject: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
Date: Wed, 27 Mar 2024 14:45:36 +0000
Message-Id: <20240327144537.4165578-6-ryan.roberts@arm.com>
In-Reply-To: <20240327144537.4165578-1-ryan.roberts@arm.com>
References: <20240327144537.4165578-1-ryan.roberts@arm.com>
Now that swap supports storing all mTHP sizes, avoid splitting large
folios before swap-out. This benefits performance of the swap-out path
by eliding split_folio_to_list(), which is expensive, and also sets us
up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it, since we
want to avoid the extra IO overhead and storage of writing out pages
unnecessarily.
Reviewed-by: David Hildenbrand
Reviewed-by: Barry Song
Signed-off-by: Ryan Roberts
---
 mm/vmscan.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00adaf1cb2c3..293120fe54f3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split partially mapped folios right
+					 * away. We can free the unmapped pages
+					 * without IO.
 					 */
-					if (!folio_entire_mapcount(folio) &&
+					if (data_race(!list_empty(
+						&folio->_deferred_list)) &&
 					    split_folio_to_list(folio, folio_list))
 						goto activate_locked;