From patchwork Tue Mar 28 03:52:21 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13190446
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
	Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Palmer Dabbelt
Subject: [PATCH v18 5/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Tue, 28 Mar 2023 09:22:21 +0530
Message-Id: <20230328035223.1480939-6-apatel@ventanamicro.com>
In-Reply-To: <20230328035223.1480939-1-apatel@ventanamicro.com>
References: <20230328035223.1480939-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC)
that allows supervisor mode to directly inject IPIs without any
assistance from M-mode or HS-mode, then we can do remote TLB flushes
directly from supervisor mode instead of going through the SBI RFENCE
calls.

This patch extends the remote TLB flush functions to use supervisor-mode
IPIs whenever direct supervisor-mode IPIs are supported by the
interrupt controller.
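As a condensed sketch of the core dispatch pattern the diff below
implements (names as used in the patch; riscv_use_ipi_for_rfence() is
introduced earlier in this series):

	/* Pack the flush arguments for the per-CPU IPI handler. */
	struct flush_tlb_range_data ftd = {
		.asid = asid, .start = start, .size = size, .stride = stride,
	};

	if (riscv_use_ipi_for_rfence())
		/* S-mode can inject IPIs itself (e.g. via the AIA IMSIC):
		 * run the local flush handler on every CPU in the mask. */
		on_each_cpu_mask(cmask, __ipi_flush_tlb_range_asid, &ftd, 1);
	else
		/* Otherwise ask the SBI implementation (M-mode/HS-mode)
		 * to perform the remote fence on our behalf. */
		sbi_remote_sfence_vma_asid(cmask, start, size, asid);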
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
Acked-by: Palmer Dabbelt
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index ef701fa83f36..77be59aadc73 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+							   start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif
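For reference, the pre-existing local helper whose closing lines show up
as context at the top of the first hunk reads as follows at this
baseline of arch/riscv/mm/tlbflush.c (reproduced for clarity; it is not
modified by this diff):

	static inline void local_flush_tlb_page_asid(unsigned long addr,
			unsigned long asid)
	{
		/* Flush the TLB entry for one address in one ASID. */
		__asm__ __volatile__ ("sfence.vma %0, %1"
				:
				: "r" (addr), "r" (asid)
				: "memory");
	}

On the IPI path, each CPU in mm_cpumask() ends up executing such a local
sfence.vma itself; the new local_flush_tlb_range() and
local_flush_tlb_range_asid() helpers simply fall back to a full local
flush whenever the requested range spans more than one stride.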