From patchwork Mon Aug 12 07:42:07 2024
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13760274
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
	chrisl@kernel.org, ying.huang@intel.com, 21cnbao@gmail.com,
	ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com,
	ioworker0@gmail.com, da.gomez@samsung.com, p.raghav@samsung.com,
	baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v5 6/9] mm: shmem: support large folio allocation for shmem_replace_folio()
Date: Mon, 12 Aug 2024 15:42:07 +0800
X-Mailer: git-send-email 2.39.3

To support large folio swapin for shmem in the following patches, add
large folio allocation for the new replacement folio in
shmem_replace_folio(). Moreover, large folios occupy N consecutive
entries in the swap cache instead of a single multi-index entry like
the page cache, so each of those consecutive entries must be replaced
individually rather than going through shmem_replace_entry(). The
statistics and the folio reference count are likewise updated by the
number of pages in the folio.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 54 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f6bab42180ea..d94f02ad7bd1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1889,28 +1889,24 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index)
 {
-	struct folio *old, *new;
-	struct address_space *swap_mapping;
-	swp_entry_t entry;
-	pgoff_t swap_index;
-	int error;
-
-	old = *foliop;
-	entry = old->swap;
-	swap_index = swap_cache_index(entry);
-	swap_mapping = swap_address_space(entry);
+	struct folio *new, *old = *foliop;
+	swp_entry_t entry = old->swap;
+	struct address_space *swap_mapping = swap_address_space(entry);
+	pgoff_t swap_index = swap_cache_index(entry);
+	XA_STATE(xas, &swap_mapping->i_pages, swap_index);
+	int nr_pages = folio_nr_pages(old);
+	int error = 0, i;
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	VM_BUG_ON_FOLIO(folio_test_large(old), old);
-	new = shmem_alloc_folio(gfp, 0, info, index);
+	new = shmem_alloc_folio(gfp, folio_order(old), info, index);
 	if (!new)
 		return -ENOMEM;
 
-	folio_get(new);
+	folio_ref_add(new, nr_pages);
 	folio_copy(new, old);
 	flush_dcache_folio(new);
 
@@ -1920,18 +1916,25 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	new->swap = entry;
 	folio_set_swapcache(new);
 
-	/*
-	 * Our caller will very soon move newpage out of swapcache, but it's
-	 * a nice clean interface for us to replace oldpage by newpage there.
-	 */
+	/* Swap cache still stores N entries instead of a high-order entry */
 	xa_lock_irq(&swap_mapping->i_pages);
-	error = shmem_replace_entry(swap_mapping, swap_index, old, new);
+	for (i = 0; i < nr_pages; i++) {
+		void *item = xas_load(&xas);
+
+		if (item != old) {
+			error = -ENOENT;
+			break;
+		}
+
+		xas_store(&xas, new);
+		xas_next(&xas);
+	}
 	if (!error) {
 		mem_cgroup_replace_folio(old, new);
-		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
-		__lruvec_stat_mod_folio(new, NR_SHMEM, 1);
-		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
-		__lruvec_stat_mod_folio(old, NR_SHMEM, -1);
+		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
+		__lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
+		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
+		__lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
 	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
@@ -1951,7 +1954,12 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	old->private = NULL;
 
 	folio_unlock(old);
-	folio_put_refs(old, 2);
+	/*
+	 * The old folio is removed from the swap cache, so drop the
+	 * 'nr_pages' references, as well as the one temporary reference
+	 * taken by the swap cache.
+	 */
+	folio_put_refs(old, nr_pages + 1);
 
 	return error;
 }
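
For readers who want to see the bookkeeping outside the kernel, here is a
minimal userspace model of the idea in this patch: a large folio occupies
nr_pages consecutive swap-cache slots, each slot must be checked and
replaced individually, and the reference counts move by nr_pages (plus the
one temporary swap-cache reference on the old folio). All names in the
sketch (struct folio_model, swap_cache, replace_folio_slots) are
hypothetical stand-ins, not kernel APIs, and it deliberately ignores the
xa_lock_irq() locking, cgroup/LRU updates, and the error recovery path.

/* Minimal model only; not kernel code. */
#include <stdio.h>
#include <stdlib.h>

#define SWAP_CACHE_SLOTS 16

struct folio_model {
	int refcount;
	int nr_pages;	/* 1 for a small folio, >1 for a large folio */
};

/* The swap cache modelled as a flat array of per-page slots. */
static struct folio_model *swap_cache[SWAP_CACHE_SLOTS];

static int replace_folio_slots(int first_slot, struct folio_model *old,
			       struct folio_model *new)
{
	int i;

	/* Like folio_ref_add(new, nr_pages): one reference per page. */
	new->refcount += old->nr_pages;

	for (i = 0; i < old->nr_pages; i++) {
		/* Every consecutive slot must still point at the old folio. */
		if (swap_cache[first_slot + i] != old)
			return -2;	/* stands in for -ENOENT */
		swap_cache[first_slot + i] = new;
	}

	/* Like folio_put_refs(old, nr_pages + 1): per-page refs plus the
	 * temporary swap-cache reference. (Simplified: done on success only.) */
	old->refcount -= old->nr_pages + 1;
	return 0;
}

int main(void)
{
	struct folio_model old = { .refcount = 4 + 1, .nr_pages = 4 };
	struct folio_model new = { .refcount = 1, .nr_pages = 4 };
	int i;

	/* The old large folio occupies 4 consecutive swap-cache slots. */
	for (i = 0; i < old.nr_pages; i++)
		swap_cache[8 + i] = &old;

	if (replace_folio_slots(8, &old, &new))
		fprintf(stderr, "slot no longer points at the old folio\n");
	else
		printf("replaced %d slots; old refcount %d, new refcount %d\n",
		       old.nr_pages, old.refcount, new.refcount);
	return 0;
}

The real patch walks the same range with an XA_STATE cursor
(xas_load()/xas_store()/xas_next()) under xa_lock_irq(), which avoids
re-looking-up the XArray for each of the nr_pages slots.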