From patchwork Thu Jan 11 11:12:38 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13517216
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, Kefeng Wang
Subject: [PATCH v2 7/8] mm: convert mm_counter() to take a folio
Date: Thu, 11 Jan 2024 19:12:38 +0800
Message-ID: <20240111111239.2453282-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20240111111239.2453282-1-wangkefeng.wang@huawei.com>
References: <20240111111239.2453282-1-wangkefeng.wang@huawei.com>

Now that all mm_counter() callers have a folio, convert mm_counter() to
take a folio.
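
For illustration, a minimal before/after sketch of a typical call site
(abridged from the arch/s390 hunk below; the surrounding mm and entry
context is omitted here):

	/* Before: mm_counter() took a struct page, so folio users passed &folio->page. */
	dec_mm_counter(mm, mm_counter(&folio->page));

	/* After: pass the folio directly; mm_counter() now tests folio_test_anon(). */
	dec_mm_counter(mm, mm_counter(folio));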
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/s390/mm/pgtable.c | 2 +-
 include/linux/mm.h     | 6 +++---
 mm/memory.c            | 10 +++++-----
 mm/rmap.c              | 8 ++++----
 mm/userfaultfd.c       | 2 +-
 5 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index e8fc5c55968e..4c92b08e3c59 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -723,7 +723,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 	else if (is_migration_entry(entry)) {
 		struct folio *folio = pfn_swap_entry_to_folio(entry);
 
-		dec_mm_counter(mm, mm_counter(&folio->page));
+		dec_mm_counter(mm, mm_counter(folio));
 	}
 	free_swap_and_cache(entry);
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..22e597b36b38 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2603,11 +2603,11 @@ static inline int mm_counter_file(struct page *page)
 	return MM_FILEPAGES;
 }
 
-static inline int mm_counter(struct page *page)
+static inline int mm_counter(struct folio *folio)
 {
-	if (PageAnon(page))
+	if (folio_test_anon(folio))
 		return MM_ANONPAGES;
-	return mm_counter_file(page);
+	return mm_counter_file(&folio->page);
 }
 
 static inline unsigned long get_mm_rss(struct mm_struct *mm)
diff --git a/mm/memory.c b/mm/memory.c
index afba8b156457..2f858263e5a2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -808,7 +808,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		}
 	} else if (is_migration_entry(entry)) {
 		folio = pfn_swap_entry_to_folio(entry);
-		rss[mm_counter(&folio->page)]++;
+		rss[mm_counter(folio)]++;
 
 		if (!is_readable_migration_entry(entry) &&
 		    is_cow_mapping(vm_flags)) {
@@ -840,7 +840,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 * keep things as they are.
 		 */
 		folio_get(folio);
-		rss[mm_counter(page)]++;
+		rss[mm_counter(folio)]++;
 		/* Cannot fail as these pages cannot get pinned. */
 		folio_try_dup_anon_rmap_pte(folio, page, src_vma);
 
@@ -1476,7 +1476,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				if (pte_young(ptent) && likely(vma_has_recency(vma)))
 					folio_mark_accessed(folio);
 			}
-			rss[mm_counter(page)]--;
+			rss[mm_counter(folio)]--;
 			if (!delay_rmap) {
 				folio_remove_rmap_pte(folio, page, vma);
 				if (unlikely(page_mapcount(page) < 0))
@@ -1504,7 +1504,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			 * see zap_install_uffd_wp_if_needed().
 			 */
 			WARN_ON_ONCE(!vma_is_anonymous(vma));
-			rss[mm_counter(page)]--;
+			rss[mm_counter(folio)]--;
 			if (is_device_private_entry(entry))
 				folio_remove_rmap_pte(folio, page, vma);
 			folio_put(folio);
@@ -1519,7 +1519,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			folio = pfn_swap_entry_to_folio(entry);
 			if (!should_zap_folio(details, folio))
 				continue;
-			rss[mm_counter(&folio->page)]--;
+			rss[mm_counter(folio)]--;
 		} else if (pte_marker_entry_uffd_wp(entry)) {
 			/*
 			 * For anon: always drop the marker; for file: only
diff --git a/mm/rmap.c b/mm/rmap.c
index f5d43edad529..4648cf1d8178 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1780,7 +1780,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -1795,7 +1795,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter(folio));
 		} else if (folio_test_anon(folio)) {
 			swp_entry_t entry = page_swap_entry(subpage);
 			pte_t swp_pte;
@@ -2181,7 +2181,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -2196,7 +2196,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter(folio));
 		} else {
 			swp_entry_t entry;
 			pte_t swp_pte;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 216ab4c8621f..662ab304cca3 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -124,7 +124,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	 * Must happen after rmap, as mm_counter() checks mapping (via
 	 * PageAnon()), which is set by __page_set_anon_rmap().
 	 */
-	inc_mm_counter(dst_mm, mm_counter(page));
+	inc_mm_counter(dst_mm, mm_counter(folio));
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);