From patchwork Tue Mar 5 15:13:49 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13582542
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, "Huang, Ying"
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH v1] mm: swap: Fix race between free_swap_and_cache() and swapoff()
Date: Tue, 5 Mar 2024 15:13:49 +0000
Message-Id: <20240305151349.3781428-1-ryan.roberts@arm.com>
MIME-Version: 1.0
There was previously a theoretical window in which swapoff() could run and tear down a swap_info_struct while a call to free_swap_and_cache() was still running in another thread. This could cause, amongst other bad possibilities, swap_page_trans_huge_swapped() (called by free_swap_and_cache()) to access the freed memory for swap_map.

This is a theoretical problem and I haven't been able to provoke it from a test case, but there has been agreement based on code review that it is possible (see link below).
Fix it by using get_swap_device()/put_swap_device(), which will stall swapoff(). There was an extra check in _swap_info_get() to confirm that the swap entry was valid. This wasn't present in get_swap_device(), so I've added it. I couldn't find any existing get_swap_device() call sites where this extra check would cause any false alarms.

Details of how to provoke one possible issue (thanks to David Hildenbrand for deriving this):

--8<-----
__swap_entry_free() might be the last user and result in
"count == SWAP_HAS_CACHE".

swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.

So the question is: could someone reclaim the folio and turn
si->inuse_pages==0, before we completed swap_page_trans_huge_swapped()?

Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages
are still referenced by swap entries.

Process 1 still references subpage 0 via swap entry.
Process 2 still references subpage 1 via swap entry.

Process 1 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE
[then, preempted in the hypervisor etc.]

Process 2 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE

Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
__try_to_reclaim_swap().

__try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
swap_entry_free()->swap_range_free()->
...
WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);

What stops swapoff from succeeding after process 2 reclaimed the swap
cache but before process 1 finished its call to
swap_page_trans_huge_swapped()?
--8<-----

Fixes: 7c00bafee87c ("mm/swap: free swap slots in batch")
Closes: https://lore.kernel.org/linux-mm/65a66eb9-41f8-4790-8db2-0c70ea15979f@redhat.com/
Cc: stable@vger.kernel.org
Signed-off-by: Ryan Roberts
---

Applies on top of v6.8-rc6 and mm-unstable (b38c34939fe4).
Thanks,
Ryan

 mm/swapfile.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

--
2.25.1

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2b3a2d85e350..f580e6abc674 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1281,7 +1281,9 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	smp_rmb();
 	offset = swp_offset(entry);
 	if (offset >= si->max)
-		goto put_out;
+		goto bad_offset;
+	if (data_race(!si->swap_map[swp_offset(entry)]))
+		goto bad_free;
 
 	return si;
 bad_nofile:
@@ -1289,9 +1291,14 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 out:
 	return NULL;
 put_out:
-	pr_err("%s: %s%08lx\n", __func__, Bad_offset, entry.val);
 	percpu_ref_put(&si->users);
 	return NULL;
+bad_offset:
+	pr_err("%s: %s%08lx\n", __func__, Bad_offset, entry.val);
+	goto put_out;
+bad_free:
+	pr_err("%s: %s%08lx\n", __func__, Unused_offset, entry.val);
+	goto put_out;
 }
 
 static unsigned char __swap_entry_free(struct swap_info_struct *p,
@@ -1609,13 +1616,14 @@ int free_swap_and_cache(swp_entry_t entry)
 	if (non_swap_entry(entry))
 		return 1;
 
-	p = _swap_info_get(entry);
+	p = get_swap_device(entry);
 	if (p) {
 		count = __swap_entry_free(p, entry);
 		if (count == SWAP_HAS_CACHE &&
 		    !swap_page_trans_huge_swapped(p, entry))
 			__try_to_reclaim_swap(p, swp_offset(entry),
 					      TTRS_UNMAPPED | TTRS_FULL);
+		put_swap_device(p);
 	}
 	return p != NULL;
 }