From patchwork Wed Nov 10 08:40:44 2021
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12611593
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, kirill.shutemov@linux.intel.com,
    mika.penttila@nextfour.com, david@redhat.com, jgg@nvidia.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    songmuchun@bytedance.com, zhouchengming@bytedance.com,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v3 02/15] mm: introduce is_huge_pmd() helper
Date: Wed, 10 Nov 2021 16:40:44 +0800
Message-Id: <20211110084057.27676-3-zhengqi.arch@bytedance.com>
In-Reply-To: <20211110084057.27676-1-zhengqi.arch@bytedance.com>
References: <20211110084057.27676-1-zhengqi.arch@bytedance.com>

The check

	is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)

is currently open-coded in several places to determine whether *pmd is a
huge pmd.  Introduce the is_huge_pmd() helper to deduplicate these checks.
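For readers skimming the change, this is the helper being added and a
typical converted call site, quoted from the hunks below; the trailing
"goto next" line is surrounding context outside the hunk, shown only for
illustration:

	static inline int is_huge_pmd(pmd_t pmd)
	{
		return is_swap_pmd(pmd) || pmd_trans_huge(pmd) || pmd_devmap(pmd);
	}

	/* e.g. in zap_pmd_range(): */
	if (is_huge_pmd(*pmd)) {
		if (next - addr != HPAGE_PMD_SIZE)
			__split_huge_pmd(vma, pmd, addr, false, NULL);
		else if (zap_huge_pmd(tlb, vma, pmd, addr))
			goto next;	/* unchanged tail of the branch */
	}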
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 include/linux/huge_mm.h | 10 +++++++---
 mm/huge_memory.c        |  3 +--
 mm/memory.c             |  5 ++---
 mm/mprotect.c           |  2 +-
 mm/mremap.c             |  3 +--
 5 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f280f33ff223..b37a89180846 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -199,8 +199,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd)	\
-					|| pmd_devmap(*____pmd))	\
+		if (is_huge_pmd(*____pmd))				\
 			__split_huge_pmd(__vma, __pmd, __address,	\
 						false, NULL);		\
 	}  while (0)
@@ -232,11 +231,16 @@ static inline int is_swap_pmd(pmd_t pmd)
 	return !pmd_none(pmd) && !pmd_present(pmd);
 }
 
+static inline int is_huge_pmd(pmd_t pmd)
+{
+	return is_swap_pmd(pmd) || pmd_trans_huge(pmd) || pmd_devmap(pmd);
+}
+
 /* mmap_lock must be held on entry */
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
 {
-	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+	if (is_huge_pmd(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma);
 	else
 		return NULL;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..e76ee2e1e423 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1832,8 +1832,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 {
 	spinlock_t *ptl;
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) ||
-			pmd_devmap(*pmd)))
+	if (likely(is_huge_pmd(*pmd)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
diff --git a/mm/memory.c b/mm/memory.c
index 855486fff526..b00cd60fc368 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1146,8 +1146,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pmd = pmd_offset(src_pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
-			|| pmd_devmap(*src_pmd)) {
+		if (is_huge_pmd(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
@@ -1441,7 +1440,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
+		if (is_huge_pmd(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			else if (zap_huge_pmd(tlb, vma, pmd, addr))
diff --git a/mm/mprotect.c b/mm/mprotect.c
index e552f5e0ccbd..2d5064a4631c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -257,7 +257,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			mmu_notifier_invalidate_range_start(&range);
 		}
 
-		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
+		if (is_huge_pmd(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
diff --git a/mm/mremap.c b/mm/mremap.c
index 002eec83e91e..c6e9da09dd0a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -532,8 +532,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
 		if (!new_pmd)
 			break;
-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
-		    pmd_devmap(*old_pmd)) {
+		if (is_huge_pmd(*old_pmd)) {
 			if (extent == HPAGE_PMD_SIZE &&
 			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
 					   old_pmd, new_pmd, need_rmap_locks))