From patchwork Tue Dec 19 17:50:45 2023
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13498799
From: Jisheng Zhang <jszhang@kernel.org>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin, Peter Zijlstra
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/4] riscv: enable MMU_GATHER_RCU_TABLE_FREE for SMP && MMU
Date: Wed, 20 Dec 2023 01:50:45 +0800
Message-Id: <20231219175046.2496-4-jszhang@kernel.org>
In-Reply-To: <20231219175046.2496-1-jszhang@kernel.org>
References: <20231219175046.2496-1-jszhang@kernel.org>

In order to implement fast gup we need to ensure that the page table
walker is protected from page table pages being freed from under it.

The riscv situation is more complicated than on other architectures:
some riscv platforms use IPIs to perform TLB shootdown (for example,
platforms which support AIA; riscv_ipi_for_rfence is usually true on
these platforms), while others rely on the SBI to perform TLB shootdown
(riscv_ipi_for_rfence is usually false on these platforms). To keep
software page table walkers safe in the SBI case, switch to RCU-based
table free (MMU_GATHER_RCU_TABLE_FREE). See the comment below 'ifdef
CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h for more
details.

This patch enables MMU_GATHER_RCU_TABLE_FREE and then uses:

 * tlb_remove_page_ptdesc() on platforms which use IPIs to perform TLB
   shootdown;
 * tlb_remove_ptdesc() on platforms which use the SBI to perform TLB
   shootdown.

In both cases disabling interrupts blocks the free and protects the
fast gup page walker.
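To make the "disabling interrupts blocks the free" argument concrete, here is
an illustrative, kernel-style sketch (not part of this patch) of the protocol
a lockless walker such as fast gup relies on; walk_pte_lockless() is a
hypothetical helper used only for illustration:

static bool walk_pte_lockless(pte_t *ptep)
{
	unsigned long flags;
	pte_t pte;

	/*
	 * Disabling interrupts pins the page table page on both kinds of
	 * platforms:
	 *  - IPI-based rfence: the CPU freeing the table first waits for its
	 *    TLB shootdown IPI to be handled everywhere, and this CPU cannot
	 *    service that IPI until interrupts are re-enabled;
	 *  - SBI-based rfence: the table is freed via tlb_remove_ptdesc(),
	 *    i.e. only after an RCU grace period, and an IRQ-disabled section
	 *    prevents that grace period from completing.
	 */
	local_irq_save(flags);
	pte = ptep_get(ptep);
	/* ... safely dereference the page the pte points at here ... */
	local_irq_restore(flags);

	return pte_present(pte);
}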
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/pgalloc.h | 23 ++++++++++++++++++-----
 arch/riscv/include/asm/tlb.h     | 18 ++++++++++++++++++
 3 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 24c1799e2ec4..d3555173d9f4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -147,6 +147,7 @@ config RISCV
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
 	select LOCK_MM_AND_FIND_VMA
+	select MMU_GATHER_RCU_TABLE_FREE if SMP && MMU
 	select MODULES_USE_ELF_RELA if MODULES
 	select MODULE_SECTIONS if MODULES
 	select OF
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 3c5e3bd15f46..deaf971253a2 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -102,7 +102,10 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 		struct ptdesc *ptdesc = virt_to_ptdesc(pud);
 
 		pagetable_pud_dtor(ptdesc);
-		tlb_remove_page_ptdesc(tlb, ptdesc);
+		if (riscv_use_ipi_for_rfence())
+			tlb_remove_page_ptdesc(tlb, ptdesc);
+		else
+			tlb_remove_ptdesc(tlb, ptdesc);
 	}
 }
 
@@ -136,8 +139,12 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
 				  unsigned long addr)
 {
-	if (pgtable_l5_enabled)
-		tlb_remove_page_ptdesc(tlb, virt_to_ptdesc(p4d));
+	if (pgtable_l5_enabled) {
+		if (riscv_use_ipi_for_rfence())
+			tlb_remove_page_ptdesc(tlb, virt_to_ptdesc(p4d));
+		else
+			tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d));
+	}
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
@@ -169,7 +176,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 
 	pagetable_pmd_dtor(ptdesc);
-	tlb_remove_page_ptdesc(tlb, ptdesc);
+	if (riscv_use_ipi_for_rfence())
+		tlb_remove_page_ptdesc(tlb, ptdesc);
+	else
+		tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 #endif /* __PAGETABLE_PMD_FOLDED */
@@ -180,7 +190,10 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 	struct ptdesc *ptdesc = page_ptdesc(pte);
 
 	pagetable_pte_dtor(ptdesc);
-	tlb_remove_page_ptdesc(tlb, ptdesc);
+	if (riscv_use_ipi_for_rfence())
+		tlb_remove_page_ptdesc(tlb, ptdesc);
+	else
+		tlb_remove_ptdesc(tlb, ptdesc);
 }
 #endif /* CONFIG_MMU */
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
index 1eb5682b2af6..a0b8b853503f 100644
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -10,6 +10,24 @@ struct mmu_gather;
 
 static void tlb_flush(struct mmu_gather *tlb);
 
+#ifdef CONFIG_MMU
+#include <linux/swap.h>
+
+/*
+ * While riscv platforms with riscv_ipi_for_rfence as true require an IPI to
+ * perform TLB shootdown, some platforms with riscv_ipi_for_rfence as false use
+ * SBI to perform TLB shootdown. To keep software pagetable walkers safe in this
+ * case we switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the
+ * comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in
+ * include/asm-generic/tlb.h for more details.
+ */
+static inline void __tlb_remove_table(void *table)
+{
+	free_page_and_swap_cache(table);
+}
+
+#endif /* CONFIG_MMU */
+
 #define tlb_flush tlb_flush
 #include <asm-generic/tlb.h>
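
For reference, a minimal sketch of how the generic MMU_GATHER_RCU_TABLE_FREE
path ends up invoking the __tlb_remove_table() hook added above. This is a
simplified model rather than the real code (which lives in mm/mmu_gather.c);
apart from __tlb_remove_table(), call_rcu() and kfree(), the names below are
made up for illustration:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct table_free_batch {
	struct rcu_head	rcu;
	unsigned int	nr;
	void		*tables[8];
};

/*
 * RCU callback: runs only after a grace period, i.e. after every
 * IRQ-disabled walker that could still be using these tables has finished.
 */
static void table_free_batch_rcu(struct rcu_head *head)
{
	struct table_free_batch *batch =
		container_of(head, struct table_free_batch, rcu);
	unsigned int i;

	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);	/* arch hook from this patch */
	kfree(batch);
}

/*
 * Called from tlb_remove_ptdesc()-style paths: batch the table pointer and
 * defer the actual free until after a grace period.
 */
static void queue_table_free(struct table_free_batch *batch, void *table)
{
	batch->tables[batch->nr++] = table;
	if (batch->nr == ARRAY_SIZE(batch->tables))
		call_rcu(&batch->rcu, table_free_batch_rcu);
}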