From patchwork Tue Jun  4 11:48:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13685093
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Helge Deller, Daniel Vetter, David Hildenbrand, Matthew Wilcox,
    Jonathan Corbet, Kefeng Wang
Subject: [PATCH 4/4] mm: remove page_mkclean()
Date: Tue, 4 Jun 2024 19:48:22 +0800
Message-ID: <20240604114822.2089819-5-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240604114822.2089819-1-wangkefeng.wang@huawei.com>
References: <20240604114822.2089819-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0
There are no more users of page_mkclean(); remove it and update the
documentation and comments accordingly.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/core-api/pin_user_pages.rst | 8 ++++----
 drivers/video/fbdev/core/fb_defio.c       | 4 ++--
 include/linux/mm.h                        | 2 +-
 include/linux/rmap.h                      | 4 ----
 mm/gup.c                                  | 2 +-
 mm/mremap.c                               | 2 +-
 6 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 532ba8e8381b..4e997b0b8487 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -132,7 +132,7 @@ CASE 1: Direct IO (DIO)
 -----------------------
 There are GUP references to pages that are serving as DIO
 buffers. These buffers are needed for a relatively short time (so they
-are not "long term"). No special synchronization with page_mkclean() or
+are not "long term"). No special synchronization with folio_mkclean() or
 munmap() is provided. Therefore, flags to set at the call site are: ::
 
     FOLL_PIN
@@ -144,7 +144,7 @@ CASE 2: RDMA
 ------------
 There are GUP references to pages that are serving as DMA
 buffers. These buffers are needed for a long time ("long term"). No special
-synchronization with page_mkclean() or munmap() is provided. Therefore, flags
+synchronization with folio_mkclean() or munmap() is provided. Therefore, flags
 to set at the call site are: ::
 
     FOLL_PIN | FOLL_LONGTERM
@@ -170,7 +170,7 @@ callback, simply remove the range from the device's page tables.
 
 Either way, as long as the driver unpins the pages upon mmu notifier callback,
 then there is proper synchronization with both filesystem and mm
-(page_mkclean(), munmap(), etc). Therefore, neither flag needs to be set.
+(folio_mkclean(), munmap(), etc). Therefore, neither flag needs to be set.
 
 CASE 4: Pinning for struct page manipulation only
 -------------------------------------------------
@@ -200,7 +200,7 @@ folio_maybe_dma_pinned(): the whole point of pinning
 ===================================================
 
 The whole point of marking folios as "DMA-pinned" or "gup-pinned" is to be able
-to query, "is this folio DMA-pinned?" That allows code such as page_mkclean()
+to query, "is this folio DMA-pinned?" That allows code such as folio_mkclean()
 (and file system writeback code in general) to make informed decisions about
 what to do when a folio cannot be unmapped due to such pins.
 
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index c9c8e294b7e7..d38998714215 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -113,7 +113,7 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
 		printk(KERN_ERR "no mapping available\n");
 
 	BUG_ON(!page->mapping);
-	page->index = vmf->pgoff; /* for page_mkclean() */
+	page->index = vmf->pgoff; /* for folio_mkclean() */
 
 	vmf->page = page;
 	return 0;
@@ -161,7 +161,7 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
 
 	/*
 	 * We want the page to remain locked from ->page_mkwrite until
-	 * the PTE is marked dirty to avoid page_mkclean() being called
+	 * the PTE is marked dirty to avoid folio_mkclean() being called
 	 * before the PTE is updated, which would leave the page ignored
 	 * by defio.
 	 * Do this by locking the page here and informing the caller
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d42497e25d43..cc21b2f0cdf8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1610,7 +1610,7 @@ static inline void put_page(struct page *page)
 * issue.
 *
 * Locking: the lockless algorithm described in folio_try_get_rcu()
- * provides safe operation for get_user_pages(), page_mkclean() and
+ * provides safe operation for get_user_pages(), folio_mkclean() and
 * other calls that race to set up page table entries.
 */
#define GUP_PIN_COUNTING_BIAS (1U << 10)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0a5d8c7f0690..216ddd369e8f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -793,8 +793,4 @@ static inline int folio_mkclean(struct folio *folio)
 }
 #endif /* CONFIG_MMU */
 
-static inline int page_mkclean(struct page *page)
-{
-	return folio_mkclean(page_folio(page));
-}
 #endif /* _LINUX_RMAP_H */
diff --git a/mm/gup.c b/mm/gup.c
index e17466fd62bb..83e279731d1b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -393,7 +393,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		 * 1) This code sees the page as already dirty, so it
 		 * skips the call to set_page_dirty(). That could happen
 		 * because clear_page_dirty_for_io() called
-		 * page_mkclean(), followed by set_page_dirty().
+		 * folio_mkclean(), followed by set_page_dirty().
 		 * However, now the page is going to get written back,
 		 * which meets the original intention of setting it
 		 * dirty, so all is well: clear_page_dirty_for_io() goes
diff --git a/mm/mremap.c b/mm/mremap.c
index 5f96bc5ee918..e7ae140fc640 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -198,7 +198,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
	 * PTE.
	 *
	 * NOTE! Both old and new PTL matter: the old one
-	 * for racing with page_mkclean(), the new one to
+	 * for racing with folio_mkclean(), the new one to
	 * make sure the physical page stays valid until
	 * the TLB entry for the old mapping has been
	 * flushed.