From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Linus Torvalds
Cc: "Matthew Wilcox (Oracle)", David Hildenbrand, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH 1/3] mm: Remove last argument of reuse_swap_page()
Date: Mon, 20 Dec 2021 20:59:41 +0000
Message-Id: <20211220205943.456187-1-willy@infradead.org>

None of the callers care about the total_map_swapcount() any more.
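
For illustration, all of the call sites converted below already pass NULL as the
second argument, so the conversion is purely mechanical; e.g. the check in
do_huge_pmd_wp_page() goes from

	if (reuse_swap_page(page, NULL)) {

to

	if (reuse_swap_page(page)) {

with no change in behaviour.
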
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: William Kucharski
---
 include/linux/swap.h | 6 +++---
 mm/huge_memory.c     | 2 +-
 mm/khugepaged.c      | 2 +-
 mm/memory.c          | 2 +-
 mm/swapfile.c        | 8 +-------
 5 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index d1ea44b31f19..bdccbf1efa61 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -514,7 +514,7 @@ extern int __swp_swapcount(swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
 extern struct swap_info_struct *page_swap_info(struct page *);
 extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
-extern bool reuse_swap_page(struct page *, int *);
+extern bool reuse_swap_page(struct page *);
 extern int try_to_free_swap(struct page *);
 struct backing_dev_info;
 extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
@@ -680,8 +680,8 @@ static inline int swp_swapcount(swp_entry_t entry)
 	return 0;
 }
 
-#define reuse_swap_page(page, total_map_swapcount) \
-	(page_trans_huge_mapcount(page, total_map_swapcount) == 1)
+#define reuse_swap_page(page) \
+	(page_trans_huge_mapcount(page, NULL) == 1)
 
 static inline int try_to_free_swap(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..b61fbe95c856 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1322,7 +1322,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	 * We can only reuse the page if nobody else maps the huge page or it's
 	 * part.
 	 */
-	if (reuse_swap_page(page, NULL)) {
+	if (reuse_swap_page(page)) {
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e99101162f1a..11794bdf513a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -681,7 +681,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			goto out;
 		}
 		if (!pte_write(pteval) && PageSwapCache(page) &&
-		    !reuse_swap_page(page, NULL)) {
+		    !reuse_swap_page(page)) {
 			/*
 			 * Page is in the swap cache and cannot be re-used.
 			 * It cannot be collapsed into a THP.
diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dc..dd85fd07cb24 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3626,7 +3626,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
 	pte = mk_pte(page, vma->vm_page_prot);
-	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
+	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 		vmf->flags &= ~FAULT_FLAG_WRITE;
 		ret |= VM_FAULT_WRITE;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index e59e08ef46e1..a4f48189300a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1668,12 +1668,8 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
  * to it. And as a side-effect, free up its swap: because the old content
  * on disk will never be read, and seeking back there to write new content
  * later would only waste time away from clustering.
- *
- * NOTE: total_map_swapcount should not be relied upon by the caller if
- * reuse_swap_page() returns false, but it may be always overwritten
- * (see the other implementation for CONFIG_SWAP=n).
  */
-bool reuse_swap_page(struct page *page, int *total_map_swapcount)
+bool reuse_swap_page(struct page *page)
 {
 	int count, total_mapcount, total_swapcount;
 
@@ -1682,8 +1678,6 @@ bool reuse_swap_page(struct page *page, int *total_map_swapcount)
 		return false;
 	count = page_trans_huge_map_swapcount(page, &total_mapcount,
 					      &total_swapcount);
-	if (total_map_swapcount)
-		*total_map_swapcount = total_mapcount + total_swapcount;
 	if (count == 1 && PageSwapCache(page) &&
 	    (likely(!PageTransCompound(page)) ||
 	     /* The remaining swap count will be freed soon */