From patchwork Wed Apr 24 14:07:15 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13641885
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
CC: Ryan Roberts <ryan.roberts@arm.com>, linux-mm@kvack.org,
	David Hildenbrand <david@redhat.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH] mm: add more readable thp_vma_allowable_order_foo()
Date: Wed, 24 Apr 2024 22:07:15 +0800
Message-ID: <20240424140715.5838-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.41.0

There are too many bool arguments in thp_vma_allowable_orders(), so add
some more readable thp_vma_allowable_order_foo() wrappers:

  - thp_vma_allowable_orders_insmaps() is used in smaps
  - thp_vma_allowable_order[s]_inpf() are used in the page fault path
  - thp_vma_allowable_pmd_order_inhuge() is used in the khugepaged scan
    and in madvise

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
 fs/proc/task_mmu.c      |  3 +--
 include/linux/huge_mm.h | 14 ++++++++++++--
 mm/khugepaged.c         | 20 ++++++++------------
 mm/memory.c             |  8 ++++----
 4 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f4259b7edfde..1136aa97f143 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -871,8 +871,7 @@ static int show_smap(struct seq_file *m, void *v)
 	__show_smap(m, &mss, false);
 
 	seq_printf(m, "THPeligible:    %8u\n",
-		   !!thp_vma_allowable_orders(vma, vma->vm_flags, true, false,
-					      true, THP_ORDERS_ALL));
+		   thp_vma_allowable_orders_insmaps(vma, vma->vm_flags));
 
 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 56c7ea73090b..345cf394480b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -83,8 +83,18 @@ extern struct kobj_attribute shmem_enabled_attr;
  */
 #define THP_ORDERS_ALL	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
 
-#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
-	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
+#define thp_vma_allowable_orders_insmaps(vma, vm_flags) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, true, false, true, THP_ORDERS_ALL))
+
+#define thp_vma_allowable_orders_inpf(vma, vm_flags, orders) \
+	(thp_vma_allowable_orders(vma, vm_flags, false, true, true, orders))
+
+#define thp_vma_allowable_order_inpf(vma, vm_flags, order) \
+	(!!thp_vma_allowable_orders_inpf(vma, vm_flags, BIT(order)))
+
+#define thp_vma_allowable_pmd_order_inhuge(vma, vm_flags, enforce_sysfs) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, false, false, enforce_sysfs, BIT(PMD_ORDER)))
+
 
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 #define HPAGE_PMD_SHIFT PMD_SHIFT
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2f73d2aa9ae8..5a27dccfda02 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -453,8 +453,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
 	    hugepage_flags_enabled()) {
-		if (thp_vma_allowable_order(vma, vm_flags, false, false, true,
-					    PMD_ORDER))
+		if (thp_vma_allowable_pmd_order_inhuge(vma, vm_flags, true))
 			__khugepaged_enter(vma->vm_mm);
 	}
 }
@@ -909,15 +908,15 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 
 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-				     cc->is_khugepaged, PMD_ORDER))
+	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags,
+						cc->is_khugepaged))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
 	 * remapped to file after khugepaged reaquired the mmap_lock.
 	 *
-	 * thp_vma_allowable_order may return true for qualified file
-	 * vmas.
+	 * thp_vma_allowable_pmd_order_inhuge may return true for
+	 * qualified file vmas.
 	 */
 	if (expect_anon && (!(*vmap)->anon_vma || !vma_is_anonymous(*vmap)))
 		return SCAN_PAGE_ANON;
@@ -1493,8 +1492,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	 * and map it by a PMD, regardless of sysfs THP settings. As such, let's
 	 * analogously elide sysfs THP settings here.
 	 */
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, false))
 		return SCAN_VMA_CHECK;
 
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2355,8 +2353,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-					     true, PMD_ORDER)) {
+		if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, true)) {
 skip:
 			progress++;
 			continue;
@@ -2693,8 +2690,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	*prev = vma;
 
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, false))
 		return -EINVAL;
 
 	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 09ed76e5b8c0..8507bfda461a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4329,8 +4329,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	 * for this vma. Then filter out the orders that can't be allocated over
 	 * the faulting address and still be fully contained in the vma.
 	 */
-	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
-					  BIT(PMD_ORDER) - 1);
+	orders = thp_vma_allowable_orders_inpf(vma, vma->vm_flags,
+					       BIT(PMD_ORDER) - 1);
 	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
 
 	if (!orders)
@@ -5433,7 +5433,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
 retry_pud:
 	if (pud_none(*vmf.pud) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PUD_ORDER)) {
+	    thp_vma_allowable_order_inpf(vma, vm_flags, PUD_ORDER)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -5467,7 +5467,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 			goto retry_pud;
 
 	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PMD_ORDER)) {
+	    thp_vma_allowable_order_inpf(vma, vm_flags, PMD_ORDER)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
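
To make the bool-to-wrapper mapping concrete, here is a minimal,
self-contained userspace sketch, not kernel code: the stub
thp_vma_allowable_orders(), the dummy struct vm_area_struct, and the
PMD_ORDER/THP_ORDERS_ALL values below are illustrative stand-ins, not
the real definitions from include/linux/huge_mm.h. It exercises the
wrappers as defined in the huge_mm.h hunk above:

#include <stdbool.h>
#include <stdio.h>

#define BIT(n)		(1UL << (n))
#define PMD_ORDER	9			/* illustrative; arch-dependent in the kernel */
#define THP_ORDERS_ALL	(BIT(PMD_ORDER + 1) - 1)	/* stand-in order mask */

struct vm_area_struct { unsigned long vm_flags; };	/* dummy stand-in */

/*
 * Stub: the real implementation filters the requested orders by sysfs
 * settings, VMA flags, alignment, etc.  Here every order is allowed.
 */
static unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
					      unsigned long vm_flags,
					      bool smaps, bool in_pf,
					      bool enforce_sysfs,
					      unsigned long orders)
{
	(void)vma; (void)vm_flags; (void)smaps; (void)in_pf; (void)enforce_sysfs;
	return orders;
}

/* The wrappers from the patch: each context gets a name, the bools go away. */
#define thp_vma_allowable_orders_insmaps(vma, vm_flags) \
	(!!thp_vma_allowable_orders(vma, vm_flags, true, false, true, THP_ORDERS_ALL))
#define thp_vma_allowable_orders_inpf(vma, vm_flags, orders) \
	(thp_vma_allowable_orders(vma, vm_flags, false, true, true, orders))
#define thp_vma_allowable_order_inpf(vma, vm_flags, order) \
	(!!thp_vma_allowable_orders_inpf(vma, vm_flags, BIT(order)))
#define thp_vma_allowable_pmd_order_inhuge(vma, vm_flags, enforce_sysfs) \
	(!!thp_vma_allowable_orders(vma, vm_flags, false, false, enforce_sysfs, BIT(PMD_ORDER)))

int main(void)
{
	struct vm_area_struct vma = { 0 };

	/* smaps: a single THPeligible boolean over all orders */
	printf("insmaps: %d\n",
	       thp_vma_allowable_orders_insmaps(&vma, vma.vm_flags));
	/* page fault: either a full order bitmap or one order check */
	printf("inpf orders: %#lx\n",
	       thp_vma_allowable_orders_inpf(&vma, vma.vm_flags, BIT(PMD_ORDER) - 1));
	printf("inpf PMD: %d\n",
	       thp_vma_allowable_order_inpf(&vma, vma.vm_flags, PMD_ORDER));
	/* khugepaged/madvise: PMD order only, sysfs enforcement selectable */
	printf("inhuge PMD: %d\n",
	       thp_vma_allowable_pmd_order_inhuge(&vma, vma.vm_flags, true));
	return 0;
}

The point of the change is visible at each call site: a reader now sees
the context (smaps, page fault, khugepaged/madvise) in the helper name
instead of a row of anonymous true/false arguments whose meaning has to
be looked up at the definition of thp_vma_allowable_orders().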