From patchwork Mon Feb 24 08:47:10 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13987596
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: baolin.wang@linux.alibaba.com
Cc: akpm@linux-foundation.org, alex_y_xu@yahoo.ca, baohua@kernel.org,
	da.gomez@samsung.com, david@redhat.com, hughd@google.com,
	ioworker0@gmail.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ryan.roberts@arm.com, ryncsn@gmail.com, wangkefeng.wang@huawei.com,
	willy@infradead.org, ziy@nvidia.com
Subject: [PATCH] mm: shmem: fix potential data corruption during shmem swapin
Date: Mon, 24 Feb 2025 16:47:10 +0800
Message-ID: <53e610af72302667475821e5b3c84c382da4efbc.1740386576.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <731904cf-d862-4c0e-ae5b-26444faff253@linux.alibaba.com>
References: <731904cf-d862-4c0e-ae5b-26444faff253@linux.alibaba.com>
MIME-Version: 1.0

Alex and Kairui reported issues (system hangs or data corruption) when
swapping out or swapping in large shmem folios. This is especially easy to
reproduce when the tmpfs is mounted with the 'huge=within_size' parameter,
and Kairui's reproducer makes the problem straightforward to replicate.

The root cause is that swap readahead may asynchronously swap in order-0
folios into the swap cache, while the shmem mapping can still hold large
swap entries. An order-0 folio is then inserted into the shmem mapping
without splitting the large swap entry, which overwrites the original large
swap entry and leads to data corruption.

To fix this, when getting a folio from the swap cache, split the large swap
entry stored in the shmem mapping if its order does not match the folio's
order.
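To illustrate the entry recalculation done after a successful split, here is a
minimal user-space sketch of the arithmetic only. It is not kernel code: the
names demo_swp_entry and round_down_u64, and the example numbers, are made-up
stand-ins for the kernel's swp_entry_t, swp_offset()/swp_entry() and
round_down() helpers.

  #include <stdio.h>
  #include <stdint.h>

  /* Illustrative stand-in for the kernel's swp_entry_t (type + offset). */
  struct demo_swp_entry {
  	unsigned int type;
  	uint64_t offset;
  };

  /* Round x down to a power-of-two alignment, like the kernel's round_down(). */
  static uint64_t round_down_u64(uint64_t x, uint64_t align)
  {
  	return x & ~(align - 1);
  }

  int main(void)
  {
  	/* Hypothetical large swap entry of order 4 (16 pages) whose swap area
  	 * offset starts at 1024; it covers mapping indices 2048..2063. */
  	struct demo_swp_entry swap = { .type = 1, .offset = 1024 };
  	uint64_t index = 2053;		/* faulting page index */
  	int split_order = 4;		/* order of the entry before the split */

  	/* After the split, each index owns an order-0 entry; its offset is the
  	 * old base offset plus the index's position inside the old large entry. */
  	uint64_t off_in_entry = index - round_down_u64(index, 1ULL << split_order);
  	struct demo_swp_entry new_swap = {
  		.type = swap.type,
  		.offset = swap.offset + off_in_entry,
  	};

  	printf("index %llu -> swap offset %llu (base %llu + %llu)\n",
  	       (unsigned long long)index,
  	       (unsigned long long)new_swap.offset,
  	       (unsigned long long)swap.offset,
  	       (unsigned long long)off_in_entry);
  	return 0;
  }

For index 2053 this prints a swap offset of 1029 (1024 + 5), mirroring the
"swap = swp_entry(swp_type(swap), swp_offset(swap) + offset)" step in the
patch below.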
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Reported-by: Alex Xu (Hello71)
Reported-by: Kairui Song
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4ea6109a8043..cebbac97a221 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2253,7 +2253,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct folio *folio = NULL;
 	bool skip_swapcache = false;
 	swp_entry_t swap;
-	int error, nr_pages;
+	int error, nr_pages, order, split_order;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
 	swap = radix_to_swp_entry(*foliop);
@@ -2272,10 +2272,9 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
+	order = xa_get_order(&mapping->i_pages, index);
 	if (!folio) {
-		int order = xa_get_order(&mapping->i_pages, index);
 		bool fallback_order0 = false;
-		int split_order;
 
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
@@ -2339,6 +2338,29 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			error = -ENOMEM;
 			goto failed;
 		}
+	} else if (order != folio_order(folio)) {
+		/*
+		 * Swap readahead may swap in order 0 folios into swapcache
+		 * asynchronously, while the shmem mapping can still store
+		 * large swap entries. In such cases, we should split the
+		 * large swap entry to prevent possible data corruption.
+		 */
+		split_order = shmem_split_large_entry(inode, index, swap, gfp);
+		if (split_order < 0) {
+			error = split_order;
+			goto failed;
+		}
+
+		/*
+		 * If the large swap entry has already been split, it is
+		 * necessary to recalculate the new swap entry based on
+		 * the old order alignment.
+		 */
+		if (split_order > 0) {
+			pgoff_t offset = index - round_down(index, 1 << split_order);
+
+			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+		}
 	}
 
 alloced:
@@ -2346,7 +2368,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	folio_lock(folio);
 	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
 	    folio->swap.val != swap.val ||
-	    !shmem_confirm_swap(mapping, index, swap)) {
+	    !shmem_confirm_swap(mapping, index, swap) ||
+	    xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
 		error = -EEXIST;
 		goto unlock;
 	}