From patchwork Mon Jan 29 14:32:17 2024
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Matthew Wilcox,
	Ryan Roberts, Catalin Marinas, Will Deacon, "Aneesh Kumar K.V",
	Nick Piggin, Peter Zijlstra, Michael Ellerman, Christophe Leroy,
	"Naveen N. Rao", Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
	linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org
Subject: [PATCH v1 5/9] mm/mmu_gather: pass "delay_rmap" instead of encoded page to __tlb_remove_page_size()
Date: Mon, 29 Jan 2024 15:32:17 +0100
Message-ID: <20240129143221.263763-6-david@redhat.com>
In-Reply-To: <20240129143221.263763-1-david@redhat.com>
References: <20240129143221.263763-1-david@redhat.com>
MIME-Version: 1.0

We have two bits available in the encoded page pointer to store
additional information. Currently, we use one bit to request delay of
the rmap removal until after a TLB flush.

We want to make use of the remaining bit internally for batching of
multiple pages of the same folio, specifying that the next encoded page
pointer in an array is actually "nr_pages".

So pass page + delay_rmap flag instead of an encoded page, to handle
the encoding internally.
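As background, an illustrative sketch of the encoding idea (the mask and
helper bodies below are assumptions for illustration, not the exact
mm_types.h definitions): struct page pointers are aligned, so their low
bits are always zero and can carry per-entry flags.

/*
 * Illustrative sketch only: the low bits of an aligned struct page
 * pointer are free to carry flags such as "delay rmap removal".
 * The mask value and helper bodies are assumptions, not the real
 * kernel definitions.
 */
#define ENCODED_PAGE_FLAG_BITS	3ul		/* two low bits available */

static inline struct encoded_page *encode_page(struct page *page,
					       unsigned long flags)
{
	/* Stash the flags in the (otherwise zero) low bits of the pointer. */
	return (struct encoded_page *)(flags | (unsigned long)page);
}

static inline unsigned long encoded_page_flags(struct encoded_page *page)
{
	return (unsigned long)page & ENCODED_PAGE_FLAG_BITS;
}

static inline struct page *encoded_page_ptr(struct encoded_page *page)
{
	/* Mask the flag bits off again to recover the plain pointer. */
	return (struct page *)((unsigned long)page & ~ENCODED_PAGE_FLAG_BITS);
}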
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Ryan Roberts
---
 arch/s390/include/asm/tlb.h | 13 ++++++-------
 include/asm-generic/tlb.h  | 12 ++++++------
 mm/mmu_gather.c            |  7 ++++---
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index d1455a601adc..48df896d5b79 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -25,8 +25,7 @@ void __tlb_remove_table(void *_table);
 static inline void tlb_flush(struct mmu_gather *tlb);
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-		struct encoded_page *page,
-		int page_size);
+		struct page *page, bool delay_rmap, int page_size);
 
 #define tlb_flush tlb_flush
 #define pte_free_tlb pte_free_tlb
@@ -42,14 +41,14 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
  * tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
  * has already been freed, so just do free_page_and_swap_cache.
  *
- * s390 doesn't delay rmap removal, so there is nothing encoded in
- * the page pointer.
+ * s390 doesn't delay rmap removal.
  */
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-		struct encoded_page *page,
-		int page_size)
+		struct page *page, bool delay_rmap, int page_size)
 {
-	free_page_and_swap_cache(encoded_page_ptr(page));
+	VM_WARN_ON_ONCE(delay_rmap);
+
+	free_page_and_swap_cache(page);
 	return false;
 }
 
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..2eb7b0d4f5d2 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -260,9 +260,8 @@ struct mmu_gather_batch {
  */
 #define MAX_GATHER_BATCH_COUNT	(10000UL/MAX_GATHER_BATCH)
 
-extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
-				   struct encoded_page *page,
-				   int page_size);
+extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
+				   bool delay_rmap, int page_size);
 
 #ifdef CONFIG_SMP
 /*
@@ -462,13 +461,14 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 static inline void tlb_remove_page_size(struct mmu_gather *tlb,
 					struct page *page, int page_size)
 {
-	if (__tlb_remove_page_size(tlb, encode_page(page, 0), page_size))
+	if (__tlb_remove_page_size(tlb, page, false, page_size))
 		tlb_flush_mmu(tlb);
 }
 
-static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page, unsigned int flags)
+static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb,
+		struct page *page, bool delay_rmap)
 {
-	return __tlb_remove_page_size(tlb, encode_page(page, flags), PAGE_SIZE);
+	return __tlb_remove_page_size(tlb, page, delay_rmap, PAGE_SIZE);
 }
 
 /* tlb_remove_page
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 604ddf08affe..ac733d81b112 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -116,7 +116,8 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
 	tlb->local.next = NULL;
 }
 
-bool __tlb_remove_page_size(struct mmu_gather *tlb, struct encoded_page *page, int page_size)
+bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
+		bool delay_rmap, int page_size)
 {
 	struct mmu_gather_batch *batch;
 
@@ -131,13 +132,13 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct encoded_page *page, i
 	 * Add the page and check if we are full. If so
 	 * force a flush.
	 */
-	batch->encoded_pages[batch->nr++] = page;
+	batch->encoded_pages[batch->nr++] = encode_page(page, delay_rmap);
 	if (batch->nr == batch->max) {
 		if (!tlb_next_batch(tlb))
 			return true;
 		batch = tlb->active;
 	}
-	VM_BUG_ON_PAGE(batch->nr > batch->max, encoded_page_ptr(page));
+	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
 
 	return false;
 }
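
As a usage illustration (not part of this patch; the helper below is
hypothetical), a caller now passes the delay_rmap decision as a plain
bool and lets the mmu_gather code do the encoding internally:

/*
 * Hypothetical caller sketch: the delay_rmap decision is handed over
 * as a bool; __tlb_remove_page() encodes it into the batch entry
 * internally (see the mm/mmu_gather.c hunk above).
 */
static inline void example_remove_page(struct mmu_gather *tlb,
				       struct page *page, bool delay_rmap)
{
	if (__tlb_remove_page(tlb, page, delay_rmap)) {
		/*
		 * Batch is full. Real callers with delay_rmap set would
		 * remove the pending rmaps (tlb_flush_rmaps()) before
		 * flushing; this sketch simply flushes.
		 */
		tlb_flush_mmu(tlb);
	}
}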