From patchwork Thu Apr 25 03:51:08 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13642855
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Ryan Roberts, David Hildenbrand, Kefeng Wang
Subject: [PATCH v2] mm: add more readable thp_vma_allowable_order_foo()
Date: Thu, 25 Apr 2024 11:51:08 +0800
Message-ID: <20240425035108.3063-1-wangkefeng.wang@huawei.com>

There are too many bool arguments in thp_vma_allowable_orders(), so add
some more readable wrappers:

  thp_vma_allowable_orders_smaps() is used in smaps
  thp_vma_allowable_order[s]_pf() is used in the page fault path
  thp_vma_allowable_order_khugepaged() is used in the khugepaged scan
  and in madvise_collapse()

Reviewed-by: Ryan Roberts
Signed-off-by: Kefeng Wang
---
v2:
- use the new thp_vma_allowable_order_khugepaged() naming, suggested by
  Ryan/David
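Not part of the patch itself: below is a minimal standalone sketch of
what the renaming buys at call sites. It is a userspace toy with stubbed
types (vm_area_struct left opaque, PMD_ORDER hard-coded to 9 as on
x86-64 with 4K pages, and the underlying predicate stubbed to allow
every requested order); the authoritative macro definitions are in the
huge_mm.h hunk below.

	#include <stdbool.h>
	#include <stdio.h>

	#define BIT(n)		(1UL << (n))
	#define PMD_ORDER	9	/* assumption: x86-64 with 4K pages */

	struct vm_area_struct;	/* opaque stand-in for the kernel type */

	/* Stub for the real six-argument predicate: allow every order. */
	static unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
						      unsigned long vm_flags,
						      bool smaps, bool in_pf,
						      bool enforce_sysfs,
						      unsigned long orders)
	{
		return orders;
	}

	/* The page-fault wrappers, exactly as the patch defines them. */
	#define thp_vma_allowable_orders_pf(vma, vm_flags, orders) \
		(!!thp_vma_allowable_orders(vma, vm_flags, false, true, true, orders))

	#define thp_vma_allowable_order_pf(vma, vm_flags, order) \
		(!!thp_vma_allowable_orders_pf(vma, vm_flags, BIT(order)))

	int main(void)
	{
		/* Before: the reader must decode three anonymous bools... */
		bool old_way = !!thp_vma_allowable_orders(NULL, 0, false, true,
							  true, BIT(PMD_ORDER));
		/* ...after: the use case is in the name, nothing to misorder. */
		bool new_way = thp_vma_allowable_order_pf(NULL, 0, PMD_ORDER);

		printf("old=%d new=%d\n", old_way, new_way);
		return 0;
	}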
 fs/proc/task_mmu.c      |  3 +--
 include/linux/huge_mm.h | 14 ++++++++++++--
 mm/khugepaged.c         | 24 ++++++++++++------------
 mm/memory.c             |  8 ++++----
 4 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f4259b7edfde..e95ec49bf190 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -871,8 +871,7 @@ static int show_smap(struct seq_file *m, void *v)
 	__show_smap(m, &mss, false);
 
 	seq_printf(m, "THPeligible: %8u\n",
-		   !!thp_vma_allowable_orders(vma, vma->vm_flags, true, false,
-					      true, THP_ORDERS_ALL));
+		   thp_vma_allowable_orders_smaps(vma, vma->vm_flags));
 
 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 56c7ea73090b..87409e87c241 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -83,8 +83,18 @@ extern struct kobj_attribute shmem_enabled_attr;
  */
 #define THP_ORDERS_ALL	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
 
-#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
-	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
+#define thp_vma_allowable_orders_smaps(vma, vm_flags) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, true, false, true, THP_ORDERS_ALL))
+
+#define thp_vma_allowable_orders_pf(vma, vm_flags, orders) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, false, true, true, orders))
+
+#define thp_vma_allowable_order_pf(vma, vm_flags, order) \
+	(!!thp_vma_allowable_orders_pf(vma, vm_flags, BIT(order)))
+
+#define thp_vma_allowable_order_khugepaged(vma, vm_flags, enforce_sysfs, order) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, false, false, enforce_sysfs, BIT(order)))
+
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 #define HPAGE_PMD_SHIFT PMD_SHIFT
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2f73d2aa9ae8..006c8c9a5b68 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -453,8 +453,8 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
 	    hugepage_flags_enabled()) {
-		if (thp_vma_allowable_order(vma, vm_flags, false, false, true,
-					    PMD_ORDER))
+		if (thp_vma_allowable_order_khugepaged(vma, vm_flags, true,
+						       PMD_ORDER))
 			__khugepaged_enter(vma->vm_mm);
 	}
 }
@@ -909,15 +909,15 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 
 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-				     cc->is_khugepaged, PMD_ORDER))
+	if (!thp_vma_allowable_order_khugepaged(vma, vma->vm_flags,
+						cc->is_khugepaged, PMD_ORDER))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
 	 * remapped to file after khugepaged reaquired the mmap_lock.
 	 *
-	 * thp_vma_allowable_order may return true for qualified file
-	 * vmas.
+	 * thp_vma_allowable_order_khugepaged may return true for
+	 * qualified file vmas.
 	 */
 	if (expect_anon && (!(*vmap)->anon_vma || !vma_is_anonymous(*vmap)))
 		return SCAN_PAGE_ANON;
@@ -1493,8 +1493,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	 * and map it by a PMD, regardless of sysfs THP settings. As such, let's
 	 * analogously elide sysfs THP settings here.
 	 */
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_order_khugepaged(vma, vma->vm_flags, false,
+						PMD_ORDER))
 		return SCAN_VMA_CHECK;
 
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2355,8 +2355,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-					     true, PMD_ORDER)) {
+		if (!thp_vma_allowable_order_khugepaged(vma, vma->vm_flags, true,
+							PMD_ORDER)) {
 skip:
 			progress++;
 			continue;
@@ -2693,8 +2693,8 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	*prev = vma;
 
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_order_khugepaged(vma, vma->vm_flags, false,
+						PMD_ORDER))
 		return -EINVAL;
 
 	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 09ed76e5b8c0..a1255fb2c709 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4329,8 +4329,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	 * for this vma. Then filter out the orders that can't be allocated over
 	 * the faulting address and still be fully contained in the vma.
 	 */
-	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
-					  BIT(PMD_ORDER) - 1);
+	orders = thp_vma_allowable_orders_pf(vma, vma->vm_flags,
+					     BIT(PMD_ORDER) - 1);
 	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
 
 	if (!orders)
@@ -5433,7 +5433,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
 retry_pud:
 	if (pud_none(*vmf.pud) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PUD_ORDER)) {
+	    thp_vma_allowable_order_pf(vma, vm_flags, PUD_ORDER)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -5467,7 +5467,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		goto retry_pud;
 
 	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PMD_ORDER)) {
+	    thp_vma_allowable_order_pf(vma, vm_flags, PMD_ORDER)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
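
For quick reference (restating the huge_mm.h hunk above, no new
semantics), the arguments each wrapper pins down are:

  wrapper                                smaps  in_pf  enforce_sysfs  orders
  thp_vma_allowable_orders_smaps()       true   false  true           THP_ORDERS_ALL
  thp_vma_allowable_orders_pf()          false  true   true           caller's
  thp_vma_allowable_order_pf()           false  true   true           BIT(order)
  thp_vma_allowable_order_khugepaged()   false  false  caller's       BIT(order)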