From patchwork Tue Nov 29 14:24:48 2022
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13058640
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v13 6/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Tue, 29 Nov 2022 19:54:48 +0530
Message-Id: <20221129142449.886518-7-apatel@ventanamicro.com>
In-Reply-To: <20221129142449.886518-1-apatel@ventanamicro.com>
References: <20221129142449.886518-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC) that
allows supervisor mode to inject IPIs directly, without any assistance from
M-mode or HS-mode, then we can use it to do remote TLB flushes from
supervisor mode instead of going through the SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor mode
IPIs whenever direct supervisor mode IPIs are supported by the interrupt
controller.
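The following is a minimal sketch (not part of the patch) of the dispatch
pattern the diff below implements for the ASID case. It assumes that
riscv_use_ipi_for_rfence(), introduced earlier in this series, reports
whether the interrupt controller supports direct S-mode IPIs; the helper
name remote_sfence_sketch() and the fixed PAGE_SIZE stride are
illustrative only:

static void remote_sfence_sketch(const struct cpumask *cmask,
				 unsigned long start, unsigned long size,
				 unsigned long asid)
{
	/* Pack the range so a single pointer can be handed to each CPU. */
	struct flush_tlb_range_data ftd = {
		.asid = asid,
		.start = start,
		.size = size,
		.stride = PAGE_SIZE,
	};

	if (riscv_use_ipi_for_rfence())
		/* IPI every hart in the mask; each runs the flush locally. */
		on_each_cpu_mask(cmask, __ipi_flush_tlb_range_asid, &ftd, 1);
	else
		/* No direct S-mode IPIs: ask the SBI firmware to do it. */
		sbi_remote_sfence_vma_asid(cmask, start, size, asid);
}

Passing the on-stack ftd is safe because on_each_cpu_mask() is called with
wait=1, so it does not return until every handler has finished using the
data.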
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+							   start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif