From patchwork Tue Nov 1 14:33:59 2022
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13026977
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v10 6/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Tue, 1 Nov 2022 20:03:59 +0530
Message-Id: <20221101143400.690000-7-apatel@ventanamicro.com>
In-Reply-To: <20221101143400.690000-1-apatel@ventanamicro.com>
References: <20221101143400.690000-1-apatel@ventanamicro.com>
MIME-Version: 1.0

If we have a specialized interrupt controller (such as the AIA IMSIC)
that allows supervisor mode to inject IPIs directly, without any
assistance from M-mode or HS-mode, then we can do remote TLB flushes
from supervisor mode itself instead of using the SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor
mode IPIs whenever direct supervisor mode IPIs are supported by the
interrupt controller.
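For illustration, the IPI-vs-SBI dispatch added below follows the
pattern in this sketch. Note that demo_remote_fence() is a hypothetical
helper invented here purely for illustration; riscv_use_ipi_for_rfence()
is introduced by an earlier patch in this series, and on_each_cpu_mask()
is the generic kernel cross-call API:

	#include <linux/smp.h>
	#include <asm/sbi.h>

	/* Hypothetical sketch of the dispatch pattern, not part of this patch */
	static void demo_remote_fence(const struct cpumask *cmask,
				      smp_call_func_t ipi_func, void *info,
				      unsigned long start, unsigned long size)
	{
		if (riscv_use_ipi_for_rfence())
			/* S-mode can inject IPIs itself; wait=1 until all CPUs ran */
			on_each_cpu_mask(cmask, ipi_func, info, 1);
		else
			/* Otherwise fall back to the SBI RFENCE extension */
			sbi_remote_sfence_vma(cmask, start, size);
	}
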
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 			: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+						start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif
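
For reference, the local_flush_tlb_*() primitives used above are thin
wrappers around the sfence.vma instruction. Roughly, as they were
defined in arch/riscv around this time (a sketch for readers, not part
of this patch):

	static inline void local_flush_tlb_all(void)
	{
		__asm__ __volatile__ ("sfence.vma" : : : "memory");
	}

	static inline void local_flush_tlb_page(unsigned long addr)
	{
		__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
	}

	/* Flush all entries belonging to one address space (ASID) */
	static inline void local_flush_tlb_all_asid(unsigned long asid)
	{
		__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
	}

	/* Flush one page in one address space (ASID) */
	static inline void local_flush_tlb_page_asid(unsigned long addr,
						     unsigned long asid)
	{
		__asm__ __volatile__ ("sfence.vma %0, %1"
				: : "r" (addr), "r" (asid) : "memory");
	}

With these, the stride parameter means a single 2 MiB THP flush
(size == stride == PMD_SIZE) takes the one-page sfence.vma path in
local_flush_tlb_range() instead of flushing the entire local TLB.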