From patchwork Mon May 29 06:13:52 2023
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 13258141
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox,
    Michal Hocko, Minchan Kim, Tim Chen, Yang Shi, Yu Zhao, Chris Li,
    Yosry Ahmed
Subject: [PATCH -V3 2/5] swap, __read_swap_cache_async(): enlarge get/put_swap_device protection range
Date: Mon, 29 May 2023 14:13:52 +0800
Message-Id: <20230529061355.125791-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230529061355.125791-1-ying.huang@intel.com>
References: <20230529061355.125791-1-ying.huang@intel.com>
MIME-Version: 1.0

This makes the function a little easier to understand, because we no
longer need to consider a concurrent swapoff in the middle of the
function.  It also makes it possible to remove the get/put_swap_device()
calls in some functions called by __read_swap_cache_async().
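In case it helps review, the scope change can be summarized with a
much-simplified sketch of the before/after control flow.  This is not
the code in mm/swap_state.c: the functions sketch_before()/sketch_after()
are illustrative only, and readahead, allocation and error handling are
elided; only the lifetime of the swap device reference is shown.

/* Before: the reference only covered the swap cache lookup. */
struct page *sketch_before(swp_entry_t entry)
{
	struct swap_info_struct *si = get_swap_device(entry);
	struct folio *folio;

	if (!si)
		return NULL;
	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
	put_swap_device(si);	/* swapoff can proceed from here on */
	if (IS_ERR(folio))
		return NULL;	/* the rest of the loop ran without the reference */
	return folio_file_page(folio, swp_offset(entry));
}

/* After: one reference pins the device for the whole function. */
struct page *sketch_after(swp_entry_t entry)
{
	struct swap_info_struct *si = get_swap_device(entry);
	struct folio *folio;
	struct page *page = NULL;

	if (!si)
		return NULL;
	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
	if (!IS_ERR(folio))
		page = folio_file_page(folio, swp_offset(entry));
	/* ... readahead/allocation would run here, still under si ... */
	put_swap_device(si);	/* dropped on every exit path */
	return page;
}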
Signed-off-by: "Huang, Ying" Cc: David Hildenbrand Cc: Hugh Dickins Cc: Johannes Weiner Cc: Matthew Wilcox Cc: Michal Hocko Cc: Minchan Kim Cc: Tim Chen Cc: Yang Shi Cc: Yu Zhao Cc: Chris Li Cc: Yosry Ahmed Reviewed-by: David Hildenbrand --- mm/swap_state.c | 31 +++++++++++++++++++++---------- 1 file changed, 21 insertions(+), 10 deletions(-) diff --git a/mm/swap_state.c b/mm/swap_state.c index b76a65ac28b3..a8450b4a110c 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -417,9 +417,13 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, { struct swap_info_struct *si; struct folio *folio; + struct page *page; void *shadow = NULL; *new_page_allocated = false; + si = get_swap_device(entry); + if (!si) + return NULL; for (;;) { int err; @@ -428,14 +432,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, * called after swap_cache_get_folio() failed, re-calling * that would confuse statistics. */ - si = get_swap_device(entry); - if (!si) - return NULL; folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry)); - put_swap_device(si); - if (!IS_ERR(folio)) - return folio_file_page(folio, swp_offset(entry)); + if (!IS_ERR(folio)) { + page = folio_file_page(folio, swp_offset(entry)); + goto got_page; + } /* * Just skip read ahead for unused swap slot. @@ -446,7 +448,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, * else swap_off will be aborted if we return NULL. */ if (!__swp_swapcount(entry) && swap_slot_cache_enabled) - return NULL; + goto fail_put_swap; /* * Get a new page to read into from swap. Allocate it now, @@ -455,7 +457,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, */ folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false); if (!folio) - return NULL; + goto fail_put_swap; /* * Swap entry may have been freed since our caller observed it. @@ -466,7 +468,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, folio_put(folio); if (err != -EEXIST) - return NULL; + goto fail_put_swap; /* * We might race against __delete_from_swap_cache(), and @@ -500,12 +502,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, /* Caller will initiate read into locked folio */ folio_add_lru(folio); *new_page_allocated = true; - return &folio->page; + page = &folio->page; +got_page: + put_swap_device(si); + return page; fail_unlock: put_swap_folio(folio, entry); folio_unlock(folio); folio_put(folio); +fail_put_swap: + put_swap_device(si); return NULL; } @@ -514,6 +521,10 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, * and reading the disk if it is not already cached. * A failure return means that either the page allocation failed or that * the swap entry is no longer in use. + * + * get/put_swap_device() aren't needed to call this function, because + * __read_swap_cache_async() call them and swap_readpage() holds the + * swap cache folio lock. */ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct vm_area_struct *vma,