From patchwork Wed Aug 30 09:50:08 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13370144
From: Ryan Roberts
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
	Peter Zijlstra, Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
	"Matthew Wilcox (Oracle)", David Hildenbrand, Yu Zhao,
Shutemov" , Yin Fengwei , Yang Shi , "Huang, Ying" , Zi Yan Cc: Ryan Roberts , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 2/5] mm/mmu_gather: generalize mmu_gather rmap removal mechanism Date: Wed, 30 Aug 2023 10:50:08 +0100 Message-Id: <20230830095011.1228673-3-ryan.roberts@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230830095011.1228673-1-ryan.roberts@arm.com> References: <20230830095011.1228673-1-ryan.roberts@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 35DE2C0019 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: zzozroyjtr8r7hie4txui6ahxosukyfh X-HE-Tag: 1693389035-178369 X-HE-Meta: U2FsdGVkX1/Xk/uQJZdbkQcGfN6FppejiFvGYw6RiBfpRfeZEicj9LRMtnR+qevGBFO7u0nIti6aMd57eym8+/VbGT6yEPNXd9uY+dvAZOHKXCl8+cVPgu+GznrFX927bD/+HsJafgxyyfR+eVO8YpirpbKVYlzLz8uwhCaW5vrAy30xWmVRLM363GLIvCZjUI4S66gVnZDweJxVYy58/8OMs0Cm7E9MQswpdUvufmEJkhNgRS12n52JI+D1pSG8CbE3PL6uNt6UPk9Il51yGXSrDRjmpXV7u8CyI1MxatT/jwoJLN44XRTIOkoyPnhTvjBaCSL86ixdfAYZFCNp470AJW4v/AYFky5Nvq26DJnTD37i8VRWn6dsFgxboatmk4aQRswg5ip6jLcJvoXx3Wdef9L6DjHyMQBa4rlkRHH1YiG2dJ/+8yMM2+Tg2mYBaVEppUkAuDzoN1Q1QqBcLyF1INij6hwEkGoX2Lr4E4NlexmTDGQ2s8xX29Kqbq9JRzSE2W63mKDzXs/z3Qd9XF83CoSrKRLbpsDg34tqW59Q231uC6Qm1WM1M8Pr1nZkH6/MEPf42a32RbgFXSpRvXmdPaCDBKvYXYvjvYpLMH7nCb6aANC0/plaRJouy5bFI3Xn1z0URQvBJR9ZKUcHxH3bbeFfL2qMs+hFLmrL7QSEglyISNPChXuLBK877AxHBUOBTL2Ol6AkeC5LshDuPBRH4qUn88e7/NAAremHAyQ/hOHWXnZ4oDoaOv1gLQlHjGWRAYmc5bT0gQs3c2RYG81wfXJDU3W+MmPdsXEi4fXZK0HSv8SenLMeTY58bZ4g4tJU8Y/nJ9QRBa5AqCnyK4CIlOHv/e3KOVo47Po64G1hJ/lK/Qxyc64gabWs827wD5Cz6YYpSU4vC6n/5fnrXFXQaBon74MW64zixL87tdRHVcL+FD0p8nSQVn64Bm0IW3tn7bEBpDspCt+yTYh LcMt9XfO AHskV9I0hqxTI9t6zobHCIVpfLciwL3bKlm8O2Sf0XuA13Bk3uloBwWzQe5BaKfN+YIwKlrjQqKjP1Rkulj8ASNi5ONXLXJyXi4aLHi0zjkfieW3aNBOvauq8UmX35OjBz8CvSS47SMPPpi9pdJc1GCjgSAhllvtgEtdBNaX+LorgmvP8iHq91+SKO5kdtAWbi6wY0z6ZoLm6t58= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Commit 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB has been flushed") added a mechanism whereby pages added to the mmu_gather buffer could indicate whether they should also be removed from the rmap. Then a call to the new tlb_flush_rmaps() API would iterate though the buffer and remove each flagged page from the rmap. This mechanism was intended for use with !PageAnon(page) pages only. Let's generalize this rmap removal mechanism so that any type of page can be removed from the rmap. This is done as preparation for batching rmap removals with folio_remove_rmap_range(), whereby we will pass a contiguous range of pages belonging to the same folio to be removed in one shot for a performance improvement. The mmu_gather now maintains a "pointer" that points to batch and index within that batch of the next page in the queue that is yet to be removed from the rmap. tlb_discard_rmaps() resets this "pointer" to the first empty location in the queue. Whenever tlb_flush_rmaps() is called, every page from "pointer" to the end of the queue is removed from the rmap. Once the mmu is flushed (tlb_flush_mmu()/tlb_finish_mmu()) any pending rmap removals are discarded. This pointer mechanism ensures that tlb_flush_rmaps() only has to walk the part of the queue for which rmap removal is pending, avoids the (potentially large) early portion of the queue for which rmap removal has already been performed but for which tlb invalidation/page freeing is still pending. 
tlb_flush_rmaps() must always be called under the same PTL as was used
to clear the corresponding PTEs. So in practice rmap removal will be
done in a batch for each PTE table, while the tlbi/freeing can continue
to be done in much bigger batches outside the PTL. See this example
flow:

    tlb_gather_mmu()
    for each pte table {
        with ptl held {
            for each pte {
                tlb_remove_tlb_entry()
                __tlb_remove_page()
            }
            if (any removed pages require rmap after tlbi)
                tlb_flush_mmu_tlbonly()
            tlb_flush_rmaps()
        }
        if (full)
            tlb_flush_mmu()
    }
    tlb_finish_mmu()

So this more general mechanism is no longer just for delaying rmap
removal until after tlbi, but can be used that way when required.

Note that s390 does not gather pages, but does immediate tlbi and page
freeing. In this case we continue to do the rmap removals page-by-page,
without gathering the pages in the mmu_gather.

Signed-off-by: Ryan Roberts
---
 include/asm-generic/tlb.h | 34 ++++++++++++------------
 mm/memory.c               | 24 ++++++++++-------
 mm/mmu_gather.c           | 54 +++++++++++++++++++++++----------------
 3 files changed, 65 insertions(+), 47 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..f339d68cf44f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -266,25 +266,30 @@ extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
 
 #ifdef CONFIG_SMP
 /*
- * This both sets 'delayed_rmap', and returns true. It would be an inline
- * function, except we define it before the 'struct mmu_gather'.
+ * For configurations that support batching the rmap removal, the removal is
+ * triggered by calling tlb_flush_rmaps(), which must be called after the pte(s)
+ * are cleared and the page has been added to the mmu_gather, and before the ptl
+ * lock that was held for clearing the pte is released.
  */
-#define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
+#define tlb_batch_rmap(tlb) (true)
 extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
+extern void tlb_discard_rmaps(struct mmu_gather *tlb);
 #endif
 
 #endif
 
 /*
- * We have a no-op version of the rmap removal that doesn't
- * delay anything. That is used on S390, which flushes remote
- * TLBs synchronously, and on UP, which doesn't have any
- * remote TLBs to flush and is not preemptible due to this
- * all happening under the page table lock.
+ * We have a no-op version of the rmap removal that doesn't do anything. That is
+ * used on S390, which flushes remote TLBs synchronously, and on UP, which
+ * doesn't have any remote TLBs to flush and is not preemptible due to this all
+ * happening under the page table lock. Here, the caller must manage each rmap
+ * removal separately.
  */
-#ifndef tlb_delay_rmap
-#define tlb_delay_rmap(tlb) (false)
-static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
+#ifndef tlb_batch_rmap
+#define tlb_batch_rmap(tlb) (false)
+static inline void tlb_flush_rmaps(struct mmu_gather *tlb,
+				   struct vm_area_struct *vma) { }
+static inline void tlb_discard_rmaps(struct mmu_gather *tlb) { }
 #endif
 
 /*
@@ -317,11 +322,6 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
-	/*
-	 * Do we have pending delayed rmap removals?
-	 */
-	unsigned int		delayed_rmap : 1;
-
 	/*
 	 * at which levels have we cleared entries?
 	 */
@@ -343,6 +343,8 @@ struct mmu_gather {
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
+	struct mmu_gather_batch	*rmap_pend;
+	unsigned int		rmap_pend_first;
 
 #ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	unsigned int page_size;
diff --git a/mm/memory.c b/mm/memory.c
index 12647d139a13..823c8a6813d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1405,6 +1405,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	swp_entry_t entry;
 
 	tlb_change_page_size(tlb, PAGE_SIZE);
+	tlb_discard_rmaps(tlb);
 	init_rss_vec(rss);
 	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
@@ -1423,7 +1424,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;
 
 		if (pte_present(ptent)) {
-			unsigned int delay_rmap;
+			unsigned int batch_rmap;
 
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
@@ -1438,12 +1439,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				continue;
 			}
 
-			delay_rmap = 0;
+			batch_rmap = tlb_batch_rmap(tlb);
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
 					set_page_dirty(page);
-					if (tlb_delay_rmap(tlb)) {
-						delay_rmap = 1;
+					if (batch_rmap) {
+						/*
+						 * Ensure tlb flush happens
+						 * before rmap remove.
+						 */
 						force_flush = 1;
 					}
 				}
@@ -1451,12 +1455,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				mark_page_accessed(page);
 			}
 			rss[mm_counter(page)]--;
-			if (!delay_rmap) {
+			if (!batch_rmap) {
 				page_remove_rmap(page, vma, false);
 				if (unlikely(page_mapcount(page) < 0))
 					print_bad_pte(vma, addr, ptent, page);
 			}
-			if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
+			if (unlikely(__tlb_remove_page(tlb, page, 0))) {
 				force_flush = 1;
 				addr += PAGE_SIZE;
 				break;
@@ -1517,10 +1521,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_leave_lazy_mmu_mode();
 
 	/* Do the actual TLB flush before dropping ptl */
-	if (force_flush) {
+	if (force_flush)
 		tlb_flush_mmu_tlbonly(tlb);
-		tlb_flush_rmaps(tlb, vma);
-	}
+
+	/* Rmap removal must always happen before dropping ptl */
+	tlb_flush_rmaps(tlb, vma);
+
 	pte_unmap_unlock(start_pte, ptl);
 
 	/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 4f559f4ddd21..fb34151c0da9 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -19,10 +19,6 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	/* Limit batching if we have delayed rmaps pending */
-	if (tlb->delayed_rmap && tlb->active != &tlb->local)
-		return false;
-
 	batch = tlb->active;
 	if (batch->next) {
 		tlb->active = batch->next;
@@ -48,37 +44,49 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 }
 
 #ifdef CONFIG_SMP
-static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
+static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
+				 unsigned int first,
+				 struct vm_area_struct *vma)
 {
-	for (int i = 0; i < batch->nr; i++) {
+	for (int i = first; i < batch->nr; i++) {
 		struct encoded_page *enc = batch->encoded_pages[i];
+		struct page *page = encoded_page_ptr(enc);
 
-		if (encoded_page_flags(enc)) {
-			struct page *page = encoded_page_ptr(enc);
-			page_remove_rmap(page, vma, false);
-		}
+		page_remove_rmap(page, vma, false);
 	}
 }
 
 /**
- * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
+ * tlb_flush_rmaps - do pending rmap removals
  * @tlb: the current mmu_gather
  * @vma: The memory area from which the pages are being removed.
  *
- * Note that because of how tlb_next_batch() above works, we will
- * never start multiple new batches with pending delayed rmaps, so
- * we only need to walk through the current active batch and the
- * original local one.
+ * Removes rmap from all pages added via (e.g.) __tlb_remove_page_size() since
+ * the last call to tlb_discard_rmaps() or tlb_flush_rmaps(). All of those pages
+ * must have been mapped by vma. Must be called after the pte(s) are cleared,
+ * and before the ptl lock that was held for clearing the pte is released. Pages
+ * are accounted using the order-0 folio (or base page) scheme.
  */
 void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	if (!tlb->delayed_rmap)
-		return;
+	struct mmu_gather_batch *batch = tlb->rmap_pend;
 
-	tlb_flush_rmap_batch(&tlb->local, vma);
-	if (tlb->active != &tlb->local)
-		tlb_flush_rmap_batch(tlb->active, vma);
-	tlb->delayed_rmap = 0;
+	tlb_flush_rmap_batch(batch, tlb->rmap_pend_first, vma);
+
+	for (batch = batch->next; batch && batch->nr; batch = batch->next)
+		tlb_flush_rmap_batch(batch, 0, vma);
+
+	tlb_discard_rmaps(tlb);
+}
+
+/**
+ * tlb_discard_rmaps - discard any pending rmap removals
+ * @tlb: the current mmu_gather
+ */
+void tlb_discard_rmaps(struct mmu_gather *tlb)
+{
+	tlb->rmap_pend = tlb->active;
+	tlb->rmap_pend_first = tlb->active->nr;
 }
 #endif
 
@@ -103,6 +111,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 		} while (batch->nr);
 	}
 	tlb->active = &tlb->local;
+	tlb_discard_rmaps(tlb);
 }
 
 static void tlb_batch_list_free(struct mmu_gather *tlb)
@@ -313,8 +322,9 @@ static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
 	tlb->active     = &tlb->local;
 	tlb->batch_count = 0;
+	tlb->rmap_pend	= &tlb->local;
+	tlb->rmap_pend_first = 0;
 #endif
-	tlb->delayed_rmap = 0;
 
 	tlb_table_init(tlb);
 #ifdef CONFIG_MMU_GATHER_PAGE_SIZE