From patchwork Thu Dec 1 13:01:33 2022
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13061309
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v14 6/8] RISC-V: Use IPIs for remote TLB flush when possible
Date: Thu, 1 Dec 2022 18:31:33 +0530
Message-Id: <20221201130135.1115380-7-apatel@ventanamicro.com>
In-Reply-To: <20221201130135.1115380-1-apatel@ventanamicro.com>
References: <20221201130135.1115380-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC) that
allows supervisor mode to directly inject IPIs without any assistance from
M-mode or HS-mode, then we can use it to do remote TLB flushes directly
from supervisor mode instead of going through the SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor mode
IPIs whenever direct supervisor mode IPIs are supported by the interrupt
controller.
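For readers skimming the diff, the core of the change is a runtime dispatch:
if the irqchip can inject S-mode IPIs, run the flush locally on each CPU in
the mm's cpumask; otherwise fall back to an SBI ecall. Below is a minimal,
standalone C sketch of that dispatch, for illustration only: the kernel-side
hooks (riscv_use_ipi_for_rfence(), on_each_cpu_mask(), the SBI call) are
replaced with hypothetical printf-based stubs so the control flow can be
compiled and traced in userspace. Only the names mirror the real patch.

/*
 * Standalone sketch of the dispatch logic this patch introduces.
 * All three "backend" functions are stubs for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

struct flush_tlb_range_data {
	unsigned long asid;
	unsigned long start;
	unsigned long size;
	unsigned long stride;
};

/* Stub: the real check asks the irqchip driver whether S-mode
 * can send IPIs without M-mode/HS-mode assistance. */
static bool riscv_use_ipi_for_rfence(void)
{
	return true;
}

/* Stub: the kernel runs fn(arg) on every CPU in the mask and,
 * with wait=1, returns only after all CPUs have finished. */
static void on_each_cpu_mask_stub(void (*fn)(void *), void *arg)
{
	fn(arg);
}

/* Stub: the SBI RFENCE fallback (an ecall into M-mode/HS-mode). */
static void sbi_remote_sfence_vma_asid_stub(unsigned long start,
					    unsigned long size,
					    unsigned long asid)
{
	printf("SBI RFENCE: start=%#lx size=%#lx asid=%lu\n",
	       start, size, asid);
}

/* IPI callback: each CPU flushes its own TLB for the given range. */
static void __ipi_flush_tlb_range_asid(void *info)
{
	struct flush_tlb_range_data *d = info;

	printf("local flush: start=%#lx size=%#lx stride=%#lx asid=%lu\n",
	       d->start, d->size, d->stride, d->asid);
}

static void flush_tlb_range_asid(unsigned long start, unsigned long size,
				 unsigned long stride, unsigned long asid)
{
	struct flush_tlb_range_data ftd = {
		.asid = asid, .start = start, .size = size, .stride = stride,
	};

	if (riscv_use_ipi_for_rfence())
		on_each_cpu_mask_stub(__ipi_flush_tlb_range_asid, &ftd);
	else
		sbi_remote_sfence_vma_asid_stub(start, size, asid);
}

int main(void)
{
	flush_tlb_range_asid(0x10000, 0x1000, 0x1000, 1);
	return 0;
}

One design choice visible in the real diff: the per-range arguments travel
in a stack-allocated struct flush_tlb_range_data, which is safe only because
on_each_cpu_mask() is called with wait=1, so the structure cannot go out of
scope while remote CPUs still reference it.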
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+							   start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif