From patchwork Thu Jul 4 11:24:58 2024
X-Patchwork-Submitter: Baolin Wang
X-Patchwork-Id: 13723638
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 chrisl@kernel.org, ying.huang@intel.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com,
 ioworker0@gmail.com, da.gomez@samsung.com, p.raghav@samsung.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 09/10] mm: shmem: split large entry if the swapin folio is
 not large
Date: Thu, 4 Jul 2024 19:24:58 +0800
Message-Id: <7d831561b1daa14234c409cb1677a367d3ce5e0b.1720079976.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3
MIME-Version: 1.0
Currently, the swap device can only swap in order-0 folios, even when a
large folio has been swapped out. This requires us to split the large
entry previously saved in the shmem pagecache in order to support
swapping in small folios.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index eb030827f7fb..b4468076f0e9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1990,6 +1990,82 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	swap_free_nr(swap, nr_pages);
 }
 
+static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
+				   swp_entry_t swap, int new_order, gfp_t gfp)
+{
+	struct address_space *mapping = inode->i_mapping;
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
+	void *alloced_shadow = NULL;
+	int alloced_order = 0, i;
+
+	for (;;) {
+		int order = -1, split_order = 0;
+		void *old = NULL;
+
+		xas_lock_irq(&xas);
+		old = xas_load(&xas);
+		if (!xa_is_value(old) || swp_to_radix_entry(swap) != old) {
+			xas_set_err(&xas, -EEXIST);
+			goto unlock;
+		}
+
+		if (order == -1)
+			order = xas_get_order(&xas);
+
+		/* Swap entry may have changed before we re-acquire the lock */
+		if (alloced_order &&
+		    (old != alloced_shadow || order != alloced_order)) {
+			xas_destroy(&xas);
+			alloced_order = 0;
+		}
+
+		/* Try to split large swap entry in pagecache */
+		if (order > 0 && order > new_order) {
+			if (!alloced_order) {
+				split_order = order;
+				goto unlock;
+			}
+			xas_split(&xas, old, order);
+
+			/*
+			 * Re-set the swap entry after splitting, and the swap
+			 * offset of the original large entry must be continuous.
+			 */
+			for (i = 0; i < 1 << order; i += (1 << new_order)) {
+				pgoff_t aligned_index = round_down(index, 1 << order);
+				swp_entry_t tmp;
+
+				tmp = swp_entry(swp_type(swap), swp_offset(swap) + i);
+				__xa_store(&mapping->i_pages, aligned_index + i,
+					   swp_to_radix_entry(tmp), 0);
+			}
+		}
+
+unlock:
+		xas_unlock_irq(&xas);
+
+		/* split needed, alloc here and retry. */
+		if (split_order) {
+			xas_split_alloc(&xas, old, split_order, gfp);
+			if (xas_error(&xas))
+				goto error;
+			alloced_shadow = old;
+			alloced_order = split_order;
+			xas_reset(&xas);
+			continue;
+		}
+
+		if (!xas_nomem(&xas, gfp))
+			break;
+	}
+
+error:
+	if (xas_error(&xas))
+		return xas_error(&xas);
+
+	return alloced_order;
+}
+
 /*
  * Swap in the folio pointed to by *foliop.
  * Caller has to make sure that *foliop contains a valid swapped folio.
@@ -2026,12 +2102,31 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
 	if (!folio) {
+		int split_order, offset;
+
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
+
+		/*
+		 * Now swap device can only swap in order 0 folio, then we
+		 * should split the large swap entry stored in the pagecache
+		 * if necessary.
+		 */
+		split_order = shmem_split_large_entry(inode, index, swap, 0, gfp);
+		if (split_order < 0) {
+			error = split_order;
+			goto failed;
+		}
+
+		if (split_order > 0) {
+			offset = index - round_down(index, 1 << split_order);
+			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+		}
+
 		/* Here we actually start the io */
 		folio = shmem_swapin_cluster(swap, gfp, info, index);
 		if (!folio) {