From patchwork Sat Nov 26 17:34:52 2022
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13056539
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 6/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Sat, 26 Nov 2022 23:04:52 +0530
Message-Id: <20221126173453.306088-7-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221126173453.306088-1-apatel@ventanamicro.com>
References: <20221126173453.306088-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC)
that allows supervisor mode to directly inject IPIs without any
assistance from M-mode or HS-mode, then we can do remote TLB flushes
directly from supervisor mode instead of using SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor
mode IPIs whenever direct supervisor mode IPIs are supported by the
interrupt controller.
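To make the control flow easy to see outside the kernel context, below is a
minimal user-space sketch of the dispatch this series introduces. The flush
primitives are stubbed with printf, and riscv_use_ipi_for_rfence() /
on_each_cpu_mask() are reduced to a plain boolean and a direct call, so none
of these names map one-to-one onto the actual kernel code:

/* Standalone model of the IPI-vs-SBI flush dispatch (illustration only). */
#include <stdbool.h>
#include <stdio.h>

static bool use_ipi_for_rfence; /* stand-in for riscv_use_ipi_for_rfence() */

static void local_flush(unsigned long start, unsigned long size,
			unsigned long stride)
{
	/* Flush one page if the range fits in a single stride,
	 * otherwise flush the whole local TLB. */
	if (size <= stride)
		printf("sfence.vma for one page at 0x%lx\n", start);
	else
		printf("full local sfence.vma\n");
}

static void remote_flush(unsigned long start, unsigned long size,
			 unsigned long stride)
{
	if (use_ipi_for_rfence)
		/* In the kernel, every hart in the mm's cpumask would run
		 * this handler via a directly injected S-mode IPI. */
		local_flush(start, size, stride);
	else
		/* Otherwise fall back to an SBI RFENCE ecall, as before. */
		printf("SBI RFENCE: remote sfence.vma(0x%lx, 0x%lx)\n",
		       start, size);
}

int main(void)
{
	use_ipi_for_rfence = true;            /* e.g. AIA IMSIC present */
	remote_flush(0x1000, 0x1000, 0x1000); /* single-page flush */
	return 0;
}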
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+						start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif