From patchwork Sun Aug 25 23:25:39 2024
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13776937
Date: Sun, 25 Aug 2024 16:25:39 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Baolin Wang, "Kirill A. Shutemov", Hugh Dickins,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH] mm: shmem: extend shmem_unused_huge_shrink() to all sizes

Although shmem_get_folio_gfp() is correctly putting inodes on the
shrinklist according to
the folio size, shmem_unused_huge_shrink() was still dealing with that
shrinklist in terms of HPAGE_PMD_SIZE.

Generalize that; and to handle the mixture of sizes more sensibly, have
shmem_alloc_and_add_folio() give it a number of pages to be freed
(approximate: no need to minimize that with an exact calculation)
instead of a number of inodes to split.

Signed-off-by: Hugh Dickins
Reviewed-by: David Hildenbrand
---
This patch would most naturally go into mm-unstable as 10/9 over
Baolin's "support large folio swap-out and swap-in for shmem" series.

 mm/shmem.c | 45 ++++++++++++++++++++-------------------------
 1 file changed, 20 insertions(+), 25 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4dd0570962fa..4c9921c234b7 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -636,15 +636,14 @@ static const char *shmem_format_huge(int huge)
 #endif
 
 static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
-		struct shrink_control *sc, unsigned long nr_to_split)
+		struct shrink_control *sc, unsigned long nr_to_free)
 {
 	LIST_HEAD(list), *pos, *next;
-	LIST_HEAD(to_remove);
 	struct inode *inode;
 	struct shmem_inode_info *info;
 	struct folio *folio;
 	unsigned long batch = sc ? sc->nr_to_scan : 128;
-	int split = 0;
+	unsigned long split = 0, freed = 0;
 
 	if (list_empty(&sbinfo->shrinklist))
 		return SHRINK_STOP;
@@ -662,13 +661,6 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 			goto next;
 		}
 
-		/* Check if there's anything to gain */
-		if (round_up(inode->i_size, PAGE_SIZE) ==
-				round_up(inode->i_size, HPAGE_PMD_SIZE)) {
-			list_move(&info->shrinklist, &to_remove);
-			goto next;
-		}
-
 		list_move(&info->shrinklist, &list);
 next:
 		sbinfo->shrinklist_len--;
@@ -677,34 +669,36 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 	}
 	spin_unlock(&sbinfo->shrinklist_lock);
 
-	list_for_each_safe(pos, next, &to_remove) {
-		info = list_entry(pos, struct shmem_inode_info, shrinklist);
-		inode = &info->vfs_inode;
-		list_del_init(&info->shrinklist);
-		iput(inode);
-	}
-
 	list_for_each_safe(pos, next, &list) {
+		pgoff_t next, end;
+		loff_t i_size;
 		int ret;
-		pgoff_t index;
 
 		info = list_entry(pos, struct shmem_inode_info, shrinklist);
 		inode = &info->vfs_inode;
 
-		if (nr_to_split && split >= nr_to_split)
+		if (nr_to_free && freed >= nr_to_free)
 			goto move_back;
 
-		index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT;
-		folio = filemap_get_folio(inode->i_mapping, index);
-		if (IS_ERR(folio))
+		i_size = i_size_read(inode);
+		folio = filemap_get_entry(inode->i_mapping, i_size / PAGE_SIZE);
+		if (!folio || xa_is_value(folio))
 			goto drop;
 
-		/* No huge page at the end of the file: nothing to split */
+		/* No large page at the end of the file: nothing to split */
 		if (!folio_test_large(folio)) {
 			folio_put(folio);
 			goto drop;
 		}
 
+		/* Check if there is anything to gain from splitting */
+		next = folio_next_index(folio);
+		end = shmem_fallocend(inode, DIV_ROUND_UP(i_size, PAGE_SIZE));
+		if (end <= folio->index || end >= next) {
+			folio_put(folio);
+			goto drop;
+		}
+
 		/*
 		 * Move the inode on the list back to shrinklist if we failed
 		 * to lock the page at this time.
@@ -725,6 +719,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 		if (ret)
 			goto move_back;
 
+		freed += next - end;
 		split++;
 drop:
 		list_del_init(&info->shrinklist);
@@ -769,7 +764,7 @@ static long shmem_unused_huge_count(struct super_block *sb,
 #define shmem_huge SHMEM_HUGE_DENY
 
 static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
-		struct shrink_control *sc, unsigned long nr_to_split)
+		struct shrink_control *sc, unsigned long nr_to_free)
 {
 	return 0;
 }
@@ -1852,7 +1847,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 			 * Try to reclaim some space by splitting a few
 			 * large folios beyond i_size on the filesystem.
 			 */
-			shmem_unused_huge_shrink(sbinfo, NULL, 2);
+			shmem_unused_huge_shrink(sbinfo, NULL, pages);
 			/*
 			 * And do a shmem_recalc_inode() to account for freed pages:
 			 * except our folio is there in cache, so not quite balanced.