From patchwork Wed Nov 10 10:54:28 2021
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 12611779
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, jgg@nvidia.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com,
    zhouchengming@bytedance.com, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v3 15/15] mm/pte_ref: use mmu_gather to free PTE page table pages
Date: Wed, 10 Nov 2021 18:54:28 +0800
Message-Id: <20211110105428.32458-16-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20211110105428.32458-1-zhengqi.arch@bytedance.com>
References: <20211110105428.32458-1-zhengqi.arch@bytedance.com>

In unmap_region() and other paths, we can reuse @tlb to free PTE page
table pages, which reduces the number of TLB flushes.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
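Not part of the patch, but to illustrate the intended calling pattern,
here is a minimal sketch. The caller below is hypothetical; only
__pte_put() and free_user_pte_table() come from this series:

	/*
	 * Hypothetical zap-style caller that already owns an mmu_gather,
	 * e.g. one set up by tlb_gather_mmu() on the unmap_region() path.
	 */
	static void example_put_pte_table(struct mmu_gather *tlb,
					  pmd_t *pmd, unsigned long addr)
	{
		/*
		 * Drop the reference taken before walking this PTE page
		 * table. If pte_refcount hits zero, free_user_pte_table()
		 * now calls pte_free_tlb(tlb, ...), so the page is freed
		 * by the gather's batched TLB flush instead of paying an
		 * immediate flush_tlb_range() plus a call_rcu() per table.
		 */
		__pte_put(tlb, tlb->mm, pmd, addr);
	}
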
 Documentation/vm/pte_ref.rst | 58 +++++++++++++++++++++++---------------------
 arch/x86/Kconfig             |  2 +-
 include/linux/pte_ref.h      | 34 ++++++++++++++++++++------
 mm/madvise.c                 |  4 +--
 mm/memory.c                  |  4 +--
 mm/mmu_gather.c              | 40 +++++++++++++-----------------
 mm/pte_ref.c                 | 13 +++++++---
 7 files changed, 90 insertions(+), 65 deletions(-)

diff --git a/Documentation/vm/pte_ref.rst b/Documentation/vm/pte_ref.rst
index c5323a263464..d304c0bfaae1 100644
--- a/Documentation/vm/pte_ref.rst
+++ b/Documentation/vm/pte_ref.rst
@@ -183,30 +183,34 @@ GUP as an example::
 4. Helpers
 ==========
 
-+---------------------+-------------------------------------------------+
-| pte_ref_init        | Initialize the pte_refcount and pmd             |
-+---------------------+-------------------------------------------------+
-| pte_to_pmd          | Get the corresponding pmd                       |
-+---------------------+-------------------------------------------------+
-| pte_update_pmd      | Update the corresponding pmd                    |
-+---------------------+-------------------------------------------------+
-| pte_get             | Increment a pte_refcount                        |
-+---------------------+-------------------------------------------------+
-| pte_get_many        | Add a value to a pte_refcount                   |
-+---------------------+-------------------------------------------------+
-| pte_get_unless_zero | Increment a pte_refcount unless it is 0         |
-+---------------------+-------------------------------------------------+
-| pte_try_get         | Try to increment a pte_refcount                 |
-+---------------------+-------------------------------------------------+
-| pte_tryget_map      | Try to increment a pte_refcount before          |
-|                     | pte_offset_map()                                |
-+---------------------+-------------------------------------------------+
-| pte_tryget_map_lock | Try to increment a pte_refcount before          |
-|                     | pte_offset_map_lock()                           |
-+---------------------+-------------------------------------------------+
-| pte_put             | Decrement a pte_refcount                        |
-+---------------------+-------------------------------------------------+
-| pte_put_many        | Sub a value to a pte_refcount                   |
-+---------------------+-------------------------------------------------+
-| pte_put_vmf         | Decrement a pte_refcount in the page fault path |
-+---------------------+-------------------------------------------------+
++---------------------+------------------------------------------------------+
+| pte_ref_init        | Initialize the pte_refcount and pmd                  |
++---------------------+------------------------------------------------------+
+| pte_to_pmd          | Get the corresponding pmd                            |
++---------------------+------------------------------------------------------+
+| pte_update_pmd      | Update the corresponding pmd                         |
++---------------------+------------------------------------------------------+
+| pte_get             | Increment a pte_refcount                             |
++---------------------+------------------------------------------------------+
+| pte_get_many        | Add a value to a pte_refcount                        |
++---------------------+------------------------------------------------------+
+| pte_get_unless_zero | Increment a pte_refcount unless it is 0              |
++---------------------+------------------------------------------------------+
+| pte_try_get         | Try to increment a pte_refcount                      |
++---------------------+------------------------------------------------------+
+| pte_tryget_map      | Try to increment a pte_refcount before               |
+|                     | pte_offset_map()                                     |
++---------------------+------------------------------------------------------+
+| pte_tryget_map_lock | Try to increment a pte_refcount before               |
+|                     | pte_offset_map_lock()                                |
++---------------------+------------------------------------------------------+
+| __pte_put           | Decrement a pte_refcount                             |
++---------------------+------------------------------------------------------+
+| __pte_put_many      | Sub a value to a pte_refcount                        |
++---------------------+------------------------------------------------------+
+| pte_put             | Decrement a pte_refcount (without tlb parameter)     |
++---------------------+------------------------------------------------------+
+| pte_put_many        | Sub a value to a pte_refcount (without tlb parameter)|
++---------------------+------------------------------------------------------+
+| pte_put_vmf         | Decrement a pte_refcount in the page fault path      |
++---------------------+------------------------------------------------------+
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ca5bfe83ec61..69ea13437947 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -233,7 +233,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT || FREE_USER_PTE
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
diff --git a/include/linux/pte_ref.h b/include/linux/pte_ref.h
index 8a26eaba83ef..dc3923bb38f6 100644
--- a/include/linux/pte_ref.h
+++ b/include/linux/pte_ref.h
@@ -22,7 +22,8 @@ enum pte_tryget_type pte_try_get(pmd_t *pmd);
 bool pte_get_unless_zero(pmd_t *pmd);
 
 #ifdef CONFIG_FREE_USER_PTE
-void free_user_pte_table(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr);
+void free_user_pte_table(struct mmu_gather *tlb, struct mm_struct *mm,
+			 pmd_t *pmd, unsigned long addr);
 
 static inline void pte_ref_init(pgtable_t pte, pmd_t *pmd, int count)
 {
@@ -48,14 +49,21 @@ static inline void pte_get_many(pmd_t *pmd, unsigned int nr)
 	atomic_add(nr, &pte->pte_refcount);
 }
 
-static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long addr, unsigned int nr)
+static inline void __pte_put_many(struct mmu_gather *tlb, struct mm_struct *mm,
+				  pmd_t *pmd, unsigned long addr,
+				  unsigned int nr)
 {
 	pgtable_t pte = pmd_pgtable(*pmd);
 
 	VM_BUG_ON(!PageTable(pte));
 	if (atomic_sub_and_test(nr, &pte->pte_refcount))
-		free_user_pte_table(mm, pmd, addr & PMD_MASK);
+		free_user_pte_table(tlb, mm, pmd, addr & PMD_MASK);
+}
+
+static inline void __pte_put(struct mmu_gather *tlb, struct mm_struct *mm,
+			     pmd_t *pmd, unsigned long addr)
+{
+	__pte_put_many(tlb, mm, pmd, addr, 1);
 }
 #else
 static inline void pte_ref_init(pgtable_t pte, pmd_t *pmd, int count)
@@ -75,8 +83,14 @@ static inline void pte_get_many(pmd_t *pmd, unsigned int nr)
 {
 }
 
-static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long addr, unsigned int nr)
+static inline void __pte_put_many(struct mmu_gather *tlb, struct mm_struct *mm,
+				  pmd_t *pmd, unsigned long addr,
+				  unsigned int nr)
+{
+}
+
+static inline void __pte_put(struct mmu_gather *tlb, struct mm_struct *mm,
+			     pmd_t *pmd, unsigned long addr)
 {
 }
 #endif /* CONFIG_FREE_USER_PTE */
@@ -110,6 +124,12 @@ static inline pte_t *pte_tryget_map_lock(struct mm_struct *mm, pmd_t *pmd,
 	return pte_offset_map_lock(mm, pmd, address, ptlp);
 }
 
+static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, unsigned int nr)
+{
+	__pte_put_many(NULL, mm, pmd, addr, nr);
+}
+
 /*
  * pte_put - Decrement refcount for the PTE page table.
  * @mm: the mm_struct of the target address space.
@@ -120,7 +140,7 @@ static inline pte_t *pte_tryget_map_lock(struct mm_struct *mm, pmd_t *pmd,
  */
 static inline void pte_put(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 {
-	pte_put_many(mm, pmd, addr, 1);
+	__pte_put(NULL, mm, pmd, addr);
 }
 
 #endif
diff --git a/mm/madvise.c b/mm/madvise.c
index 5cf2832abb98..b51254305bb2 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -477,7 +477,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
-	pte_put(vma->vm_mm, pmd, start);
+	__pte_put(tlb, vma->vm_mm, pmd, start);
 	if (pageout)
 		reclaim_pages(&page_list);
 	cond_resched();
@@ -710,7 +710,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		__pte_put_many(tlb, mm, pmd, start, nr_put);
 	cond_resched();
 next:
 	return 0;
diff --git a/mm/memory.c b/mm/memory.c
index 4d1ede78d1b0..1bdae3b0f877 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1469,7 +1469,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	}
 
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		__pte_put_many(tlb, mm, pmd, start, nr_put);
 
 	return addr;
 }
@@ -1515,7 +1515,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 		if (pte_try_get(pmd))
 			goto next;
 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
-		pte_put(tlb->mm, pmd, addr);
+		__pte_put(tlb, tlb->mm, pmd, addr);
 next:
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 1b9837419bf9..1bd9fa889421 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -134,42 +134,42 @@ static void __tlb_remove_table_free(struct mmu_table_batch *batch)
  *
  */
 
-static void tlb_remove_table_smp_sync(void *arg)
+static void tlb_remove_table_rcu(struct rcu_head *head)
 {
-	/* Simply deliver the interrupt */
+	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
 }
 
-static void tlb_remove_table_sync_one(void)
+static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
-	/*
-	 * This isn't an RCU grace period and hence the page-tables cannot be
-	 * assumed to be actually RCU-freed.
-	 *
-	 * It is however sufficient for software page-table walkers that rely on
-	 * IRQ disabling.
-	 */
-	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
+	call_rcu(&batch->rcu, tlb_remove_table_rcu);
 }
 
-static void tlb_remove_table_rcu(struct rcu_head *head)
+static void tlb_remove_table_one_rcu(struct rcu_head *head)
 {
-	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
+	struct page *page = container_of(head, struct page, rcu_head);
+
+	__tlb_remove_table(page);
 }
 
-static void tlb_remove_table_free(struct mmu_table_batch *batch)
+static void tlb_remove_table_one(void *table)
 {
-	call_rcu(&batch->rcu, tlb_remove_table_rcu);
+	pgtable_t page = (pgtable_t)table;
+
+	call_rcu(&page->rcu_head, tlb_remove_table_one_rcu);
 }
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);
 }
 
+static void tlb_remove_table_one(void *table)
+{
+	__tlb_remove_table(table);
+}
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 /*
@@ -187,12 +187,6 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
-static void tlb_remove_table_one(void *table)
-{
-	tlb_remove_table_sync_one();
-	__tlb_remove_table(table);
-}
-
 static void tlb_table_flush(struct mmu_gather *tlb)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
diff --git a/mm/pte_ref.c b/mm/pte_ref.c
index 728e61cea25e..f9650ad23c7c 100644
--- a/mm/pte_ref.c
+++ b/mm/pte_ref.c
@@ -8,6 +8,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 
 #ifdef CONFIG_FREE_USER_PTE
@@ -117,7 +119,8 @@ static void pte_free_rcu(struct rcu_head *rcu)
 	__free_page(page);
 }
 
-void free_user_pte_table(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
+void free_user_pte_table(struct mmu_gather *tlb, struct mm_struct *mm,
+			 pmd_t *pmd, unsigned long addr)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
 	spinlock_t *ptl;
@@ -125,10 +128,14 @@ void free_user_pte_table(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 
 	ptl = pmd_lock(mm, pmd);
 	pmdval = pmdp_huge_get_and_clear(mm, addr, pmd);
-	flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+	if (!tlb)
+		flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+	else
+		pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
 	spin_unlock(ptl);
 
 	pte_free_debug(pmdval);
 	mm_dec_nr_ptes(mm);
-	call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
+	if (!tlb)
+		call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
 }