From patchwork Fri Jul 8 07:18:05 2022
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 12910663
Message-ID: <20220708071834.084532973@infradead.org>
Date: Fri, 08 Jul 2022 09:18:05 +0200
From: Peter Zijlstra
To: Jann Horn, Linus Torvalds, Will Deacon
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, peterz@infradead.org,
    Dave Airlie, Daniel Vetter, Andrew Morton, Guo Ren, David Miller
Subject: [PATCH 3/4] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
References: <20220708071802.751003711@infradead.org>
Now that architectures are no longer allowed to override tlb_{start,end}_vma(),
rearrange the code so that there is only one implementation of each of these
functions. This makes it much simpler to figure out what they actually do.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 include/asm-generic/tlb.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -346,8 +346,8 @@ static inline void __tlb_reset_range(str

 #ifdef CONFIG_MMU_GATHER_NO_RANGE

-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
 #endif

 /*
@@ -367,17 +367,10 @@ static inline void tlb_flush(struct mmu_
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }

-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
 #else /* CONFIG_MMU_GATHER_NO_RANGE */

 #ifndef tlb_flush

-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -498,7 +491,6 @@ static inline unsigned long tlb_get_unma
  * case where we're doing a full MM flush. When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-#ifndef tlb_start_vma
 static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -509,9 +501,7 @@ static inline void tlb_start_vma(struct
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 #endif
 }
-#endif

-#ifndef tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
@@ -525,7 +515,6 @@ static inline void tlb_end_vma(struct mm
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
 }
-#endif

 /*
  * tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,