From patchwork Tue May 21 11:03:18 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13669308
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com,
	hrisl@kernel.org, p.raghav@samsung.com, da.gomez@samsung.com,
	wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com,
	ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [RFC PATCH 8/8] mm: shmem: support large folio swap out
Date: Tue, 21 May 2024 19:03:18 +0800
Message-Id: <1f50ac5f9dfa69c3c7cc57440eae2b1728178cca.1716285099.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3
Shmem will support large folio allocation [1] [2] for better performance.
However, memory reclaim still splits these precious large folios when
trying to swap out shmem, which may lead to memory fragmentation and
loses the benefit of large folios for shmem. Moreover, the swap code
already supports swapping out large folios without splitting them, so
this patch adds large folio swap-out support for shmem.

Note the i915_gem_shmem driver still needs its folios split when
swapping, so add a new flag 'split_large_folio' to writeback_control to
indicate that the large folio should be split.

[1] https://lore.kernel.org/all/cover.1715571279.git.baolin.wang@linux.alibaba.com/
[2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  1 +
 include/linux/writeback.h                 |  1 +
 mm/shmem.c                                |  3 +--
 mm/vmscan.c                               | 14 ++++++++++++--
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 38b72d86560f..968274be14ef 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -308,6 +308,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		.range_start = 0,
 		.range_end = LLONG_MAX,
 		.for_reclaim = 1,
+		.split_large_folio = 1,
 	};
 	unsigned long i;

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 112d806ddbe4..6f2599244ae0 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -63,6 +63,7 @@ struct writeback_control {
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned for_sync:1;		/* sync(2) WB_SYNC_ALL writeback */
 	unsigned unpinned_netfs_wb:1;	/* Cleared I_PINNING_NETFS_WB */
+	unsigned split_large_folio:1;	/* Split large folio for shmem writeback */

 	/*
 	 * When writeback IOs are bounced through async layers, only the
diff --git a/mm/shmem.c b/mm/shmem.c
index fdc71e14916c..6645169aa913 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -776,7 +776,6 @@ static int shmem_add_to_page_cache(struct folio *folio,
 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
-	VM_BUG_ON(expected && folio_test_large(folio));

 	folio_ref_add(folio, nr);
 	folio->mapping = mapping;
@@ -1460,7 +1459,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
 	 * and its shmem_writeback() needs them to be split when swapping.
 	 */
-	if (folio_test_large(folio)) {
+	if (wbc->split_large_folio && folio_test_large(folio)) {
 		/* Ensure the subpages are still dirty */
 		folio_test_set_dirty(folio);
 		if (split_huge_page(page) < 0)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bf11c0cbf12e..856286e84d62 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1260,8 +1260,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 				if (!total_swap_pages)
 					goto activate_locked;

-				/* Split shmem folio */
-				if (split_folio_to_list(folio, folio_list))
+				/*
+				 * Only split shmem folio when CONFIG_THP_SWAP
+				 * is not enabled.
+				 */
+				if (!IS_ENABLED(CONFIG_THP_SWAP) &&
+				    split_folio_to_list(folio, folio_list))
 					goto keep_locked;
 			}

@@ -1363,10 +1367,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 * starts and then write it out here.
 			 */
 			try_to_unmap_flush_dirty();
+try_pageout:
 			switch (pageout(folio, mapping, &plug)) {
 			case PAGE_KEEP:
 				goto keep_locked;
 			case PAGE_ACTIVATE:
+				if (shmem_mapping(mapping) && folio_test_large(folio) &&
+				    !split_folio_to_list(folio, folio_list)) {
+					nr_pages = 1;
+					goto try_pageout;
+				}
 				goto activate_locked;
 			case PAGE_SUCCESS:
 				stat->nr_pageout += nr_pages;