From patchwork Fri Jul 19 16:09:01 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737403
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/13] RISC-V: KVM: Order the object files alphabetically
Date: Fri, 19 Jul 2024 21:39:01 +0530
Message-Id: <20240719160913.342027-2-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

Order the object files alphabetically in the Makefile so that inserting
new object files in the future is predictable.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/Makefile | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index c2cacfbc06a0..c1eac0d093de 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -9,27 +9,29 @@ include $(srctree)/virt/kvm/Makefile.kvm

 obj-$(CONFIG_KVM) += kvm.o

+# Ordered alphabetically
+kvm-y += aia.o
+kvm-y += aia_aplic.o
+kvm-y += aia_device.o
+kvm-y += aia_imsic.o
 kvm-y += main.o
-kvm-y += vm.o
-kvm-y += vmid.o
-kvm-y += tlb.o
 kvm-y += mmu.o
+kvm-y += tlb.o
 kvm-y += vcpu.o
 kvm-y += vcpu_exit.o
 kvm-y += vcpu_fp.o
-kvm-y += vcpu_vector.o
 kvm-y += vcpu_insn.o
 kvm-y += vcpu_onereg.o
-kvm-y += vcpu_switch.o
+kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
 kvm-y += vcpu_sbi.o
-kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
 kvm-y += vcpu_sbi_base.o
-kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_hsm.o
+kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_sbi_pmu.o
+kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_sta.o
+kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
+kvm-y += vcpu_switch.o
 kvm-y += vcpu_timer.o
-kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
-kvm-y += aia.o
-kvm-y += aia_device.o
-kvm-y += aia_aplic.o
-kvm-y += aia_imsic.o
+kvm-y += vcpu_vector.o
+kvm-y += vm.o
+kvm-y += vmid.o

From patchwork Fri Jul 19 16:09:02 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737404
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/13] RISC-V: KVM: Save/restore HSTATUS in C source
Date: Fri, 19 Jul 2024 21:39:02 +0530
Message-Id: <20240719160913.342027-3-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

We will be optimizing HSTATUS CSR accesses using the shared memory
set up through the SBI nested acceleration extension. To facilitate
this, first move the HSTATUS save/restore into
kvm_riscv_vcpu_enter_exit().
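
For context, the pattern this introduces is a pair of csr_swap() calls
bracketing the low-level world switch, as the diff below shows; a minimal
sketch of just that pattern (simplified from the patch, not the complete
function):

	/* csr_swap() writes the CSR and returns its previous value. */
	hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus);	/* install guest HSTATUS, save host value */
	__kvm_riscv_switch_to(&vcpu->arch);
	gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);	/* restore host HSTATUS, save guest value */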

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c        |  9 +++++++++
 arch/riscv/kvm/vcpu_switch.S | 36 +++++++++++++-----------------------
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 449e5bb948c2..93b1ce043482 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -720,9 +720,18 @@ static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *v
  */
 static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *gcntx = &vcpu->arch.guest_context;
+	struct kvm_cpu_context *hcntx = &vcpu->arch.host_context;
+
 	kvm_riscv_vcpu_swap_in_guest_state(vcpu);
 	guest_state_enter_irqoff();
+
+	hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus);
+
 	__kvm_riscv_switch_to(&vcpu->arch);
+
+	gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
+
 	vcpu->arch.last_exit_cpu = vcpu->cpu;
 	guest_state_exit_irqoff();
 	kvm_riscv_vcpu_swap_in_host_state(vcpu);
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index 0c26189aa01c..f83643c4fdb9 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -43,35 +43,30 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	/* Load Guest CSR values */
 	REG_L	t0, (KVM_ARCH_GUEST_SSTATUS)(a0)
-	REG_L	t1, (KVM_ARCH_GUEST_HSTATUS)(a0)
-	REG_L	t2, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
-	la	t4, .Lkvm_switch_return
-	REG_L	t5, (KVM_ARCH_GUEST_SEPC)(a0)
+	REG_L	t1, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
+	la	t3, .Lkvm_switch_return
+	REG_L	t4, (KVM_ARCH_GUEST_SEPC)(a0)

 	/* Save Host and Restore Guest SSTATUS */
 	csrrw	t0, CSR_SSTATUS, t0

-	/* Save Host and Restore Guest HSTATUS */
-	csrrw	t1, CSR_HSTATUS, t1
-
 	/* Save Host and Restore Guest SCOUNTEREN */
-	csrrw	t2, CSR_SCOUNTEREN, t2
+	csrrw	t1, CSR_SCOUNTEREN, t1

 	/* Save Host STVEC and change it to return path */
-	csrrw	t4, CSR_STVEC, t4
+	csrrw	t3, CSR_STVEC, t3

 	/* Save Host SSCRATCH and change it to struct kvm_vcpu_arch pointer */
-	csrrw	t3, CSR_SSCRATCH, a0
+	csrrw	t2, CSR_SSCRATCH, a0

 	/* Restore Guest SEPC */
-	csrw	CSR_SEPC, t5
+	csrw	CSR_SEPC, t4

 	/* Store Host CSR values */
 	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
-	REG_S	t1, (KVM_ARCH_HOST_HSTATUS)(a0)
-	REG_S	t2, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
-	REG_S	t3, (KVM_ARCH_HOST_SSCRATCH)(a0)
-	REG_S	t4, (KVM_ARCH_HOST_STVEC)(a0)
+	REG_S	t1, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
+	REG_S	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	REG_S	t3, (KVM_ARCH_HOST_STVEC)(a0)

 	/* Restore Guest GPRs (except A0) */
 	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
@@ -153,8 +148,7 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_L	t1, (KVM_ARCH_HOST_STVEC)(a0)
 	REG_L	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
 	REG_L	t3, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
-	REG_L	t4, (KVM_ARCH_HOST_HSTATUS)(a0)
-	REG_L	t5, (KVM_ARCH_HOST_SSTATUS)(a0)
+	REG_L	t4, (KVM_ARCH_HOST_SSTATUS)(a0)

 	/* Save Guest SEPC */
 	csrr	t0, CSR_SEPC
@@ -168,18 +162,14 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	/* Save Guest and Restore Host SCOUNTEREN */
 	csrrw	t3, CSR_SCOUNTEREN, t3

-	/* Save Guest and Restore Host HSTATUS */
-	csrrw	t4, CSR_HSTATUS, t4
-
 	/* Save Guest and Restore Host SSTATUS */
-	csrrw	t5, CSR_SSTATUS, t5
+	csrrw	t4, CSR_SSTATUS, t4

 	/* Store Guest CSR values */
 	REG_S	t0, (KVM_ARCH_GUEST_SEPC)(a0)
 	REG_S	t2, (KVM_ARCH_GUEST_A0)(a0)
 	REG_S	t3, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
-	REG_S	t4, (KVM_ARCH_GUEST_HSTATUS)(a0)
-	REG_S	t5, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	REG_S	t4, (KVM_ARCH_GUEST_SSTATUS)(a0)

 	/* Restore Host GPRs (except A0 and T0-T6) */
 	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)

From patchwork Fri Jul 19 16:09:03 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737405
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/13] RISC-V: KVM: Save/restore SCOUNTEREN in C source
Date: Fri, 19 Jul 2024 21:39:03 +0530
Message-Id: <20240719160913.342027-4-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

The SCOUNTEREN CSR need not be saved/restored in the low-level
__kvm_riscv_switch_to() function, so move the SCOUNTEREN CSR
save/restore to the kvm_riscv_vcpu_swap_in_guest_state() and
kvm_riscv_vcpu_swap_in_host_state() functions in C source. Also,
re-arrange the CSR save/restore and related GPR usage in the
low-level __kvm_riscv_switch_to() function.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c        |  2 ++
 arch/riscv/kvm/vcpu_switch.S | 52 +++++++++++++++---------------------
 2 files changed, 23 insertions(+), 31 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 93b1ce043482..957e1a5e081b 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -691,6 +691,7 @@ static __always_inline void kvm_riscv_vcpu_swap_in_guest_state(struct kvm_vcpu *
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 	struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;

+	vcpu->arch.host_scounteren = csr_swap(CSR_SCOUNTEREN, csr->scounteren);
 	vcpu->arch.host_senvcfg = csr_swap(CSR_SENVCFG, csr->senvcfg);
 	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) &&
 	    (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0))
@@ -704,6 +705,7 @@ static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *v
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 	struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;

+	csr->scounteren = csr_swap(CSR_SCOUNTEREN, vcpu->arch.host_scounteren);
 	csr->senvcfg = csr_swap(CSR_SENVCFG, vcpu->arch.host_senvcfg);
 	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) &&
 	    (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0))
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index f83643c4fdb9..3f8cbc21a644 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -43,30 +43,25 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	/* Load Guest CSR values */
 	REG_L	t0, (KVM_ARCH_GUEST_SSTATUS)(a0)
-	REG_L	t1, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
-	la	t3, .Lkvm_switch_return
-	REG_L	t4, (KVM_ARCH_GUEST_SEPC)(a0)
+	la	t1, .Lkvm_switch_return
+	REG_L	t2, (KVM_ARCH_GUEST_SEPC)(a0)

 	/* Save Host and Restore Guest SSTATUS */
 	csrrw	t0, CSR_SSTATUS, t0

-	/* Save Host and Restore Guest SCOUNTEREN */
-	csrrw	t1, CSR_SCOUNTEREN, t1
-
 	/* Save Host STVEC and change it to return path */
-	csrrw	t3, CSR_STVEC, t3
-
-	/* Save Host SSCRATCH and change it to struct kvm_vcpu_arch pointer */
-	csrrw	t2, CSR_SSCRATCH, a0
+	csrrw	t1, CSR_STVEC, t1

 	/* Restore Guest SEPC */
-	csrw	CSR_SEPC, t4
+	csrw	CSR_SEPC, t2
+
+	/* Save Host SSCRATCH and change it to struct kvm_vcpu_arch pointer */
+	csrrw	t3, CSR_SSCRATCH, a0

 	/* Store Host CSR values */
 	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
-	REG_S	t1, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
-	REG_S	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
-	REG_S	t3, (KVM_ARCH_HOST_STVEC)(a0)
+	REG_S	t1, (KVM_ARCH_HOST_STVEC)(a0)
+	REG_S	t3, (KVM_ARCH_HOST_SSCRATCH)(a0)

 	/* Restore Guest GPRs (except A0) */
 	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
@@ -145,31 +140,26 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_S	t6, (KVM_ARCH_GUEST_T6)(a0)

 	/* Load Host CSR values */
-	REG_L	t1, (KVM_ARCH_HOST_STVEC)(a0)
-	REG_L	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
-	REG_L	t3, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
-	REG_L	t4, (KVM_ARCH_HOST_SSTATUS)(a0)
-
-	/* Save Guest SEPC */
-	csrr	t0, CSR_SEPC
+	REG_L	t0, (KVM_ARCH_HOST_STVEC)(a0)
+	REG_L	t1, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	REG_L	t2, (KVM_ARCH_HOST_SSTATUS)(a0)

 	/* Save Guest A0 and Restore Host SSCRATCH */
-	csrrw	t2, CSR_SSCRATCH, t2
+	csrrw	t1, CSR_SSCRATCH, t1

-	/* Restore Host STVEC */
-	csrw	CSR_STVEC, t1
+	/* Save Guest SEPC */
+	csrr	t3, CSR_SEPC

-	/* Save Guest and Restore Host SCOUNTEREN */
-	csrrw	t3, CSR_SCOUNTEREN, t3
+	/* Restore Host STVEC */
+	csrw	CSR_STVEC, t0

 	/* Save Guest and Restore Host SSTATUS */
-	csrrw	t4, CSR_SSTATUS, t4
+	csrrw	t2, CSR_SSTATUS, t2

 	/* Store Guest CSR values */
-	REG_S	t0, (KVM_ARCH_GUEST_SEPC)(a0)
-	REG_S	t2, (KVM_ARCH_GUEST_A0)(a0)
-	REG_S	t3, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
-	REG_S	t4, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	REG_S	t1, (KVM_ARCH_GUEST_A0)(a0)
+	REG_S	t2, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	REG_S	t3, (KVM_ARCH_GUEST_SEPC)(a0)

 	/* Restore Host GPRs (except A0 and T0-T6) */
 	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)

From patchwork Fri Jul 19 16:09:04 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737406
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/13] RISC-V: KVM: Break down the __kvm_riscv_switch_to() into macros
Date: Fri, 19 Jul 2024 21:39:04 +0530
Message-Id: <20240719160913.342027-5-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

Break down the __kvm_riscv_switch_to() function into macros so that
these macros can later be re-used by the SBI NACL extension based
low-level switch function.
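
As background, a GNU assembler .macro body is expanded inline at every
invocation, which is what lets the same save/restore sequences back both
this switch function and a second, NACL-based switch function later in
the series. A tiny illustrative RV64 fragment (not kernel code) showing
that reuse:

	.macro	save_two_sregs_sketch base
	sd	s0, 0(\base)		/* store s0 at offset 0 of the context */
	sd	s1, 8(\base)		/* store s1 at offset 8 of the context */
	.endm

	switch_variant_a:
	save_two_sregs_sketch a0	/* expands the two stores here */
	ret

	switch_variant_b:
	save_two_sregs_sketch a1	/* and again here, with a different base */
	ret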

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu_switch.S | 52 +++++++++++++++++++++++++++---------
 1 file changed, 40 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index 3f8cbc21a644..9f13e5ce6a18 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -11,11 +11,7 @@
 #include
 #include

-	.text
-	.altmacro
-	.option norelax
-
-SYM_FUNC_START(__kvm_riscv_switch_to)
+.macro SAVE_HOST_GPRS
 	/* Save Host GPRs (except A0 and T0-T6) */
 	REG_S	ra, (KVM_ARCH_HOST_RA)(a0)
 	REG_S	sp, (KVM_ARCH_HOST_SP)(a0)
@@ -40,10 +36,12 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_S	s9, (KVM_ARCH_HOST_S9)(a0)
 	REG_S	s10, (KVM_ARCH_HOST_S10)(a0)
 	REG_S	s11, (KVM_ARCH_HOST_S11)(a0)
+.endm

+.macro SAVE_HOST_AND_RESTORE_GUEST_CSRS __resume_addr
 	/* Load Guest CSR values */
 	REG_L	t0, (KVM_ARCH_GUEST_SSTATUS)(a0)
-	la	t1, .Lkvm_switch_return
+	la	t1, \__resume_addr
 	REG_L	t2, (KVM_ARCH_GUEST_SEPC)(a0)

 	/* Save Host and Restore Guest SSTATUS */
@@ -62,7 +60,9 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
 	REG_S	t1, (KVM_ARCH_HOST_STVEC)(a0)
 	REG_S	t3, (KVM_ARCH_HOST_SSCRATCH)(a0)
+.endm

+.macro RESTORE_GUEST_GPRS
 	/* Restore Guest GPRs (except A0) */
 	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
 	REG_L	sp, (KVM_ARCH_GUEST_SP)(a0)
@@ -97,13 +97,9 @@ SYM_FUNC_START(__kvm_riscv_switch_to)

 	/* Restore Guest A0 */
 	REG_L	a0, (KVM_ARCH_GUEST_A0)(a0)
+.endm

-	/* Resume Guest */
-	sret
-
-	/* Back to Host */
-	.align 2
-.Lkvm_switch_return:
+.macro SAVE_GUEST_GPRS
 	/* Swap Guest A0 with SSCRATCH */
 	csrrw	a0, CSR_SSCRATCH, a0
@@ -138,7 +134,9 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_S	t4, (KVM_ARCH_GUEST_T4)(a0)
 	REG_S	t5, (KVM_ARCH_GUEST_T5)(a0)
 	REG_S	t6, (KVM_ARCH_GUEST_T6)(a0)
+.endm

+.macro SAVE_GUEST_AND_RESTORE_HOST_CSRS
 	/* Load Host CSR values */
 	REG_L	t0, (KVM_ARCH_HOST_STVEC)(a0)
 	REG_L	t1, (KVM_ARCH_HOST_SSCRATCH)(a0)
 	REG_L	t2, (KVM_ARCH_HOST_SSTATUS)(a0)
@@ -160,7 +158,9 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_S	t1, (KVM_ARCH_GUEST_A0)(a0)
 	REG_S	t2, (KVM_ARCH_GUEST_SSTATUS)(a0)
 	REG_S	t3, (KVM_ARCH_GUEST_SEPC)(a0)
+.endm

+.macro RESTORE_HOST_GPRS
 	/* Restore Host GPRs (except A0 and T0-T6) */
 	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)
 	REG_L	sp, (KVM_ARCH_HOST_SP)(a0)
@@ -185,6 +185,34 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	REG_L	s9, (KVM_ARCH_HOST_S9)(a0)
 	REG_L	s10, (KVM_ARCH_HOST_S10)(a0)
 	REG_L	s11, (KVM_ARCH_HOST_S11)(a0)
+.endm
+
+	.text
+	.altmacro
+	.option norelax
+
+	/*
+	 * Parameters:
+	 *  A0 <= Pointer to struct kvm_vcpu_arch
+	 */
+SYM_FUNC_START(__kvm_riscv_switch_to)
+	SAVE_HOST_GPRS
+
+	SAVE_HOST_AND_RESTORE_GUEST_CSRS .Lkvm_switch_return
+
+	RESTORE_GUEST_GPRS
+
+	/* Resume Guest using SRET */
+	sret
+
+	/* Back to Host */
+	.align 2
+.Lkvm_switch_return:
+	SAVE_GUEST_GPRS
+
+	SAVE_GUEST_AND_RESTORE_HOST_CSRS
+
+	RESTORE_HOST_GPRS

 	/* Return to C code */
 	ret

From patchwork Fri Jul 19 16:09:05 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737407
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/13] RISC-V: KVM: Replace aia_set_hvictl() with aia_hvictl_value()
Date: Fri, 19 Jul 2024 21:39:05 +0530
Message-Id: <20240719160913.342027-6-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

The aia_set_hvictl() function internally writes the HVICTL CSR, which
makes it difficult to optimize that CSR write using the SBI NACL
extension in kvm_riscv_vcpu_aia_update_hvip(). Replace aia_set_hvictl()
with a new aia_hvictl_value() helper which only computes the HVICTL
value.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/aia.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 2967d305c442..17ae4a7c0e94 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -51,7 +51,7 @@ static int aia_find_hgei(struct kvm_vcpu *owner)
 	return hgei;
 }

-static void aia_set_hvictl(bool ext_irq_pending)
+static inline unsigned long aia_hvictl_value(bool ext_irq_pending)
 {
 	unsigned long hvictl;

@@ -62,7 +62,7 @@ static void aia_set_hvictl(bool ext_irq_pending)
 		hvictl = (IRQ_S_EXT << HVICTL_IID_SHIFT) & HVICTL_IID;

 	hvictl |= ext_irq_pending;
-	csr_write(CSR_HVICTL, hvictl);
+	return hvictl;
 }

 #ifdef CONFIG_32BIT
@@ -130,7 +130,7 @@ void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
 #ifdef CONFIG_32BIT
 	csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph);
 #endif
-	aia_set_hvictl(!!(csr->hvip & BIT(IRQ_VS_EXT)));
+	csr_write(CSR_HVICTL, aia_hvictl_value(!!(csr->hvip & BIT(IRQ_VS_EXT))));
 }

 void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
@@ -536,7 +536,7 @@ void kvm_riscv_aia_enable(void)
 	if (!kvm_riscv_aia_available())
 		return;

-	aia_set_hvictl(false);
+	csr_write(CSR_HVICTL, aia_hvictl_value(false));
 	csr_write(CSR_HVIPRIO1, 0x0);
 	csr_write(CSR_HVIPRIO2, 0x0);
 #ifdef CONFIG_32BIT
@@ -572,7 +572,7 @@ void kvm_riscv_aia_disable(void)
 	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
 	disable_percpu_irq(hgei_parent_irq);

-	aia_set_hvictl(false);
+	csr_write(CSR_HVICTL, aia_hvictl_value(false));

 	raw_spin_lock_irqsave(&hgctrl->lock, flags);
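
For context, the point of having aia_hvictl_value() merely compute the
value is that the caller now owns the CSR write, so a later patch can
land that write through the NACL shared memory instead of a direct
csr_write(). A rough sketch of the idea (the nacl_* names here are
hypothetical placeholders, not this patch's API):

	unsigned long hvictl = aia_hvictl_value(ext_irq_pending);

	if (nacl_sync_csr_available())			/* hypothetical predicate */
		nacl_csr_write(CSR_HVICTL, hvictl);	/* hypothetical shared-memory path */
	else
		csr_write(CSR_HVICTL, hvictl);		/* direct CSR write, as done today */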

From patchwork Fri Jul 19 16:09:06 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737408
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/13] RISC-V: KVM: Don't setup SGEI for zero guest external interrupts
Date: Fri, 19 Jul 2024 21:39:06 +0530
Message-Id: <20240719160913.342027-7-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

There is no need to set up the SGEI local interrupt when there are zero
guest external interrupts (i.e. zero HW IMSIC guest files).

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/aia.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 17ae4a7c0e94..8ffae0330c89 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -499,6 +499,10 @@ static int aia_hgei_init(void)
 		hgctrl->free_bitmap = 0;
 	}

+	/* Skip SGEI interrupt setup for zero guest external interrupts */
+	if (!kvm_riscv_aia_nr_hgei)
+		goto skip_sgei_interrupt;
+
 	/* Find INTC irq domain */
 	domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
 					  DOMAIN_BUS_ANY);
@@ -522,11 +526,16 @@ static int aia_hgei_init(void)
 		return rc;
 	}

+skip_sgei_interrupt:
 	return 0;
 }

 static void aia_hgei_exit(void)
 {
+	/* Do nothing for zero guest external interrupts */
+	if (!kvm_riscv_aia_nr_hgei)
+		return;
+
 	/* Free per-CPU SGEI interrupt */
 	free_percpu_irq(hgei_parent_irq, &aia_hgei);
 }

From patchwork Fri Jul 19 16:09:07 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737409
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/13] RISC-V: Add defines for the SBI nested acceleration extension
Date: Fri, 19 Jul 2024 21:39:07 +0530
Message-Id: <20240719160913.342027-8-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

Add defines for the new SBI nested acceleration extension which was
ratified as part of the SBI v2.0 specification.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 120 +++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 1079e214fe85..7c9ec953c519 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -33,6 +33,7 @@ enum sbi_ext_id {
 	SBI_EXT_PMU = 0x504D55,
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_STA = 0x535441,
+	SBI_EXT_NACL = 0x4E41434C,

 	/* Experimentals extensions must lie within this range */
 	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
@@ -279,6 +280,125 @@ struct sbi_sta_struct {

 #define SBI_SHMEM_DISABLE	-1

+enum sbi_ext_nacl_fid {
+	SBI_EXT_NACL_PROBE_FEATURE = 0x0,
+	SBI_EXT_NACL_SET_SHMEM = 0x1,
+	SBI_EXT_NACL_SYNC_CSR = 0x2,
+	SBI_EXT_NACL_SYNC_HFENCE = 0x3,
+	SBI_EXT_NACL_SYNC_SRET = 0x4,
+};
+
+enum sbi_ext_nacl_feature {
+	SBI_NACL_FEAT_SYNC_CSR = 0x0,
+	SBI_NACL_FEAT_SYNC_HFENCE = 0x1,
+	SBI_NACL_FEAT_SYNC_SRET = 0x2,
+	SBI_NACL_FEAT_AUTOSWAP_CSR = 0x3,
+};
+
+#define SBI_NACL_SHMEM_ADDR_SHIFT	12
+#define SBI_NACL_SHMEM_SCRATCH_OFFSET	0x0000
+#define SBI_NACL_SHMEM_SCRATCH_SIZE	0x1000
+#define SBI_NACL_SHMEM_SRET_OFFSET	0x0000
+#define SBI_NACL_SHMEM_SRET_SIZE	0x0200
+#define SBI_NACL_SHMEM_AUTOSWAP_OFFSET	(SBI_NACL_SHMEM_SRET_OFFSET + \
+					 SBI_NACL_SHMEM_SRET_SIZE)
+#define SBI_NACL_SHMEM_AUTOSWAP_SIZE	0x0080
+#define SBI_NACL_SHMEM_UNUSED_OFFSET	(SBI_NACL_SHMEM_AUTOSWAP_OFFSET + \
+					 SBI_NACL_SHMEM_AUTOSWAP_SIZE)
+#define SBI_NACL_SHMEM_UNUSED_SIZE	0x0580
+#define SBI_NACL_SHMEM_HFENCE_OFFSET	(SBI_NACL_SHMEM_UNUSED_OFFSET + \
+					 SBI_NACL_SHMEM_UNUSED_SIZE)
+#define SBI_NACL_SHMEM_HFENCE_SIZE	0x0780
+#define SBI_NACL_SHMEM_DBITMAP_OFFSET	(SBI_NACL_SHMEM_HFENCE_OFFSET + \
+					 SBI_NACL_SHMEM_HFENCE_SIZE)
+#define SBI_NACL_SHMEM_DBITMAP_SIZE	0x0080
+#define SBI_NACL_SHMEM_CSR_OFFSET	(SBI_NACL_SHMEM_DBITMAP_OFFSET + \
+					 SBI_NACL_SHMEM_DBITMAP_SIZE)
+#define SBI_NACL_SHMEM_CSR_SIZE		((__riscv_xlen / 8) * 1024)
+#define SBI_NACL_SHMEM_SIZE		(SBI_NACL_SHMEM_CSR_OFFSET + \
+					 SBI_NACL_SHMEM_CSR_SIZE)
+
+#define SBI_NACL_SHMEM_CSR_INDEX(__csr_num) \
+	((((__csr_num) & 0xc00) >> 2) | ((__csr_num) & 0xff))
+
+#define SBI_NACL_SHMEM_HFENCE_ENTRY_SZ	((__riscv_xlen / 8) * 4)
+#define SBI_NACL_SHMEM_HFENCE_ENTRY_MAX	\
+	(SBI_NACL_SHMEM_HFENCE_SIZE / \
+	 SBI_NACL_SHMEM_HFENCE_ENTRY_SZ)
+#define SBI_NACL_SHMEM_HFENCE_ENTRY(__num) \
+	(SBI_NACL_SHMEM_HFENCE_OFFSET + \
+	 (__num) * SBI_NACL_SHMEM_HFENCE_ENTRY_SZ)
+#define SBI_NACL_SHMEM_HFENCE_ENTRY_CONFIG(__num) \
+	SBI_NACL_SHMEM_HFENCE_ENTRY(__num)
+#define SBI_NACL_SHMEM_HFENCE_ENTRY_PNUM(__num)\
+	(SBI_NACL_SHMEM_HFENCE_ENTRY(__num) + (__riscv_xlen / 8))
+#define SBI_NACL_SHMEM_HFENCE_ENTRY_PCOUNT(__num)\
+	(SBI_NACL_SHMEM_HFENCE_ENTRY(__num) + \
+	 ((__riscv_xlen / 8) * 3))
+
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS	1
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_SHIFT	\
+	(__riscv_xlen - SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS)
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_MASK	\
+	((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS) - 1)
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND	\
+	(SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_MASK << \
+	 SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_SHIFT)
+
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD1_BITS	3
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD1_SHIFT \
+	(SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_SHIFT - \
+	 SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD1_BITS)
+
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS	4
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_SHIFT	\
+	(SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD1_SHIFT - \
+	 SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS)
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_MASK	\
+	((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS) - 1)
+
+#define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA		0x0
+#define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_ALL	0x1
+#define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_VMID	0x2
+#define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_VMID_ALL	0x3
+#define SBI_NACL_SHMEM_HFENCE_TYPE_VVMA		0x4
+#define SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ALL	0x5
+#define SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ASID	0x6
+#define SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ASID_ALL	0x7
+
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD2_BITS	1
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD2_SHIFT \
+	(SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_SHIFT - \
+	 SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD2_BITS)
+
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS	7
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_SHIFT \
+	(SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD2_SHIFT - \
+	 SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS)
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_MASK	\
+	((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS) - 1)
+#define SBI_NACL_SHMEM_HFENCE_ORDER_BASE	12
+
+#if __riscv_xlen == 32
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS	9
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_BITS	7
+#else
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS	16
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_BITS	14
+#endif
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_SHIFT	\
+	SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_MASK	\
+	((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS) - 1)
+#define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_MASK	\
+	((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_BITS) - 1)
+
+#define SBI_NACL_SHMEM_AUTOSWAP_FLAG_HSTATUS	BIT(0)
+#define SBI_NACL_SHMEM_AUTOSWAP_HSTATUS		((__riscv_xlen / 8) * 1)
+
+#define SBI_NACL_SHMEM_SRET_X(__i)	((__riscv_xlen / 8) * (__i))
+#define SBI_NACL_SHMEM_SRET_X_LAST	31
+
 /* SBI spec version fields */
 #define SBI_SPEC_VERSION_DEFAULT	0x1
 #define SBI_SPEC_VERSION_MAJOR_SHIFT	24
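
As a reader's aid (derived from the offset defines above, not part of the
patch), for __riscv_xlen == 64 the shared memory regions work out to:

	SRET context:   offset 0x0000, size 0x0200
	AUTOSWAP area:  offset 0x0200, size 0x0080
	Unused hole:    offset 0x0280, size 0x0580
	HFENCE entries: offset 0x0800, size 0x0780
	Dirty bitmap:   offset 0x0F80, size 0x0080
	CSR array:      offset 0x1000, size (64 / 8) * 1024 = 0x2000
	Total:          SBI_NACL_SHMEM_SIZE = 0x3000 bytes

and, as an example of the CSR indexing macro, SBI_NACL_SHMEM_CSR_INDEX(0x680)
(the hgatp CSR number) evaluates to ((0x400 >> 2) | 0x80) = 0x180.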
0MYTAHkPZf/LJJqSKC1zzfzGhC/y6HJsIqT4aFwkuk0G/uC7fHmJGgTtewxavE/q3GruyfNcvEegw v19nNafy7JIY/GFomraOHo/hqM0EgusVVaJdtkpbAQugY5bnWC5Gu8bch4Kmy9E+8/UC616MKLXiJ MdgsFXV8dTo4lfVJmyZJJRSotKU7XHFqegQJ+4+hzWX/BBcIyS1IzQSGX227HwFmJc5TsYRAzo+Sf EfMr5XbUiDwF/1NP7i/w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBJ-00000003Baz-2frh; Fri, 19 Jul 2024 16:10:05 +0000 Received: from mail-pl1-x636.google.com ([2607:f8b0:4864:20::636]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqB7-00000003BQB-2bEG for linux-riscv@lists.infradead.org; Fri, 19 Jul 2024 16:09:59 +0000 Received: by mail-pl1-x636.google.com with SMTP id d9443c01a7336-1fb3b7d0d3aso12773935ad.2 for ; Fri, 19 Jul 2024 09:09:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; t=1721405393; x=1722010193; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=pNRjpl9AWOu5Y/06ZXovZLxYd73eZ2CcfqE7LV9a3jU=; b=gUxEjbP8BpXgGukOF/suXynrE076z7yLh2xnLc85GkoDICrlcLqHL+bfnLQPyOL3wP 8w8we7x4+5Qt4SA8KWieuyxQbQpXMnqpjR+7w2+RE7kGhUvkdlLfT06D09CEMCTuHSlD gT/qG7UUIVxApWrHiGbN5QIhcP4GejEkZ9fOxreRDEJ3fAB8f1DS+HOZquTJ6ch5d5kk jo5uXlC6fpTonvKnBPANd/1/eqH3D9NaUDTdhhYY179HY9lNi/w8FclMyiE15Nk9MR6S A5NJiH5G/EmjxuBCztd91W/uv/yFfdUYdGULm5uCTNnWUgRLdIjjznubTz0s/7C/6XGs vVtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721405393; x=1722010193; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pNRjpl9AWOu5Y/06ZXovZLxYd73eZ2CcfqE7LV9a3jU=; b=chF9hEGnfEuPQjxTzQro/OWVGiqJGf4XLB0yxL5lO99Ul27LfDTe3+0n0KtdWhslEY B9FbD/0ows613ODtDEoK6uHmwD2BtvKRwi2ompDLQYSGSIe7oaMoqvSS+8hsUykis+tO 1SaIDA400srm0/feQsHRn7UH+a1ur7kVGTPNuWA18BbP18k9cth0axsyrGnekue9puSY ZvS4QVcKh6egn008UwUBjrKLy/QW71SGrVR7wWXdigDjo5+EDg+XTyRj4WDwIvIbBGkc JX6jDW6VSBMod0wmCK24cWsplg56Ltf/L3BB2T4GiifaYQaEuQ/DB2q2Hf+PJPiEI0e3 geJQ== X-Forwarded-Encrypted: i=1; AJvYcCVlI2jfsvgL8by30VWH6tUA+yCxx+W4QvH2144ut997uGYv17YzxcaCZTU0km97E9u7lGGPGL+hCJtPu8nvGtzdEawPABins7CLfnZK9MyF X-Gm-Message-State: AOJu0YyCUlzOClJCbAv2UN1TTgeneeZa71JMJ67sU9N3nS+k4OD0hpo/ A7hJE3Mki0zE1HAPqe7ZVdQQuPQnxZZiwr+LWv9TGsYA5MzpbqmlJXwA1g2Z0n8= X-Google-Smtp-Source: AGHT+IGsSBffRZbxlhKIbUi3MquECwG/00bsx5PoG3YEzYyosi1BS/71TDciBJgkl9GHm7KPMmnNHw== X-Received: by 2002:a17:902:e547:b0:1fa:ab25:f625 with SMTP id d9443c01a7336-1fd74585690mr3006245ad.38.1721405392800; Fri, 19 Jul 2024 09:09:52 -0700 (PDT) Received: from anup-ubuntu-vm.localdomain ([223.185.135.236]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1fd6f28f518sm6632615ad.69.2024.07.19.09.09.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Jul 2024 09:09:52 -0700 (PDT) From: Anup Patel To: Palmer Dabbelt , Paul Walmsley Cc: Atish Patra , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 08/13] RISC-V: KVM: Add common nested acceleration support Date: Fri, 19 Jul 2024 21:39:08 +0530 Message-Id: <20240719160913.342027-9-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com> References: <20240719160913.342027-1-apatel@ventanamicro.com> MIME-Version: 
1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240719_090953_724834_E362B466 X-CRM114-Status: GOOD ( 24.29 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add a common nested acceleration support which will be shared by all parts of KVM RISC-V. This nested acceleration support detects and enables SBI NACL extension usage based on static keys which ensures minimum impact on the non-nested scenario. Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_nacl.h | 205 ++++++++++++++++++++++++++++++ arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/main.c | 53 +++++++- arch/riscv/kvm/nacl.c | 152 ++++++++++++++++++++++ 4 files changed, 409 insertions(+), 2 deletions(-) create mode 100644 arch/riscv/include/asm/kvm_nacl.h create mode 100644 arch/riscv/kvm/nacl.c diff --git a/arch/riscv/include/asm/kvm_nacl.h b/arch/riscv/include/asm/kvm_nacl.h new file mode 100644 index 000000000000..a704e8000a58 --- /dev/null +++ b/arch/riscv/include/asm/kvm_nacl.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2024 Ventana Micro Systems Inc. + */ + +#ifndef __KVM_NACL_H +#define __KVM_NACL_H + +#include +#include +#include +#include +#include + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_available); +#define kvm_riscv_nacl_available() \ + static_branch_unlikely(&kvm_riscv_nacl_available) + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_csr_available); +#define kvm_riscv_nacl_sync_csr_available() \ + static_branch_unlikely(&kvm_riscv_nacl_sync_csr_available) + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_hfence_available); +#define kvm_riscv_nacl_sync_hfence_available() \ + static_branch_unlikely(&kvm_riscv_nacl_sync_hfence_available) + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_sret_available); +#define kvm_riscv_nacl_sync_sret_available() \ + static_branch_unlikely(&kvm_riscv_nacl_sync_sret_available) + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_autoswap_csr_available); +#define kvm_riscv_nacl_autoswap_csr_available() \ + static_branch_unlikely(&kvm_riscv_nacl_autoswap_csr_available) + +struct kvm_riscv_nacl { + void *shmem; + phys_addr_t shmem_phys; +}; +DECLARE_PER_CPU(struct kvm_riscv_nacl, kvm_riscv_nacl); + +void __kvm_riscv_nacl_hfence(void *shmem, + unsigned long control, + unsigned long page_num, + unsigned long page_count); + +int kvm_riscv_nacl_enable(void); + +void kvm_riscv_nacl_disable(void); + +void kvm_riscv_nacl_exit(void); + +int kvm_riscv_nacl_init(void); + +#ifdef CONFIG_32BIT +#define lelong_to_cpu(__x) le32_to_cpu(__x) +#define cpu_to_lelong(__x) cpu_to_le32(__x) +#else +#define lelong_to_cpu(__x) le64_to_cpu(__x) +#define cpu_to_lelong(__x) cpu_to_le64(__x) +#endif + +#define nacl_shmem() \ + this_cpu_ptr(&kvm_riscv_nacl)->shmem +#define nacl_shmem_fast() \ + (kvm_riscv_nacl_available() ? 
nacl_shmem() : NULL) + +#define nacl_sync_hfence(__e) \ + sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_SYNC_HFENCE, \ + (__e), 0, 0, 0, 0, 0) + +#define nacl_hfence_mkconfig(__type, __order, __vmid, __asid) \ +({ \ + unsigned long __c = SBI_NACL_SHMEM_HFENCE_CONFIG_PEND; \ + __c |= ((__type) & SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_MASK) \ + << SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_SHIFT; \ + __c |= (((__order) - SBI_NACL_SHMEM_HFENCE_ORDER_BASE) & \ + SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_MASK) \ + << SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_SHIFT; \ + __c |= ((__vmid) & SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_MASK) \ + << SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_SHIFT; \ + __c |= ((__asid) & SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_MASK); \ + __c; \ +}) + +#define nacl_hfence_mkpnum(__order, __addr) \ + ((__addr) >> (__order)) + +#define nacl_hfence_mkpcount(__order, __size) \ + ((__size) >> (__order)) + +#define nacl_hfence_gvma(__shmem, __gpa, __gpsz, __order) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_GVMA, \ + __order, 0, 0), \ + nacl_hfence_mkpnum(__order, __gpa), \ + nacl_hfence_mkpcount(__order, __gpsz)) + +#define nacl_hfence_gvma_all(__shmem) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_ALL, \ + 0, 0, 0), 0, 0) + +#define nacl_hfence_gvma_vmid(__shmem, __vmid, __gpa, __gpsz, __order) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_VMID, \ + __order, __vmid, 0), \ + nacl_hfence_mkpnum(__order, __gpa), \ + nacl_hfence_mkpcount(__order, __gpsz)) + +#define nacl_hfence_gvma_vmid_all(__shmem, __vmid) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_VMID_ALL, \ + 0, __vmid, 0), 0, 0) + +#define nacl_hfence_vvma(__shmem, __vmid, __gva, __gvsz, __order) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_VVMA, \ + __order, __vmid, 0), \ + nacl_hfence_mkpnum(__order, __gva), \ + nacl_hfence_mkpcount(__order, __gvsz)) + +#define nacl_hfence_vvma_all(__shmem, __vmid) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ALL, \ + 0, __vmid, 0), 0, 0) + +#define nacl_hfence_vvma_asid(__shmem, __vmid, __asid, __gva, __gvsz, __order)\ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ASID, \ + __order, __vmid, __asid), \ + nacl_hfence_mkpnum(__order, __gva), \ + nacl_hfence_mkpcount(__order, __gvsz)) + +#define nacl_hfence_vvma_asid_all(__shmem, __vmid, __asid) \ +__kvm_riscv_nacl_hfence(__shmem, \ + nacl_hfence_mkconfig(SBI_NACL_SHMEM_HFENCE_TYPE_VVMA_ASID_ALL, \ + 0, __vmid, __asid), 0, 0) + +#define nacl_csr_read(__shmem, __csr) \ +({ \ + unsigned long *__a = (__shmem) + SBI_NACL_SHMEM_CSR_OFFSET; \ + lelong_to_cpu(__a[SBI_NACL_SHMEM_CSR_INDEX(__csr)]); \ +}) + +#define nacl_csr_write(__shmem, __csr, __val) \ +do { \ + void *__s = (__shmem); \ + unsigned int __i = SBI_NACL_SHMEM_CSR_INDEX(__csr); \ + unsigned long *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET; \ + u8 *__b = (__s) + SBI_NACL_SHMEM_DBITMAP_OFFSET; \ + __a[__i] = cpu_to_lelong(__val); \ + __b[__i >> 3] |= 1U << (__i & 0x7); \ +} while (0) + +#define nacl_csr_swap(__shmem, __csr, __val) \ +({ \ + void *__s = (__shmem); \ + unsigned int __i = SBI_NACL_SHMEM_CSR_INDEX(__csr); \ + unsigned long *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET; \ + u8 *__b = (__s) + SBI_NACL_SHMEM_DBITMAP_OFFSET; \ + unsigned long __r = lelong_to_cpu(__a[__i]); \ + __a[__i] = cpu_to_lelong(__val); \ + __b[__i >> 
3] |= 1U << (__i & 0x7); \ + __r; \ +}) + +#define nacl_sync_csr(__csr) \ + sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_SYNC_CSR, \ + (__csr), 0, 0, 0, 0, 0) + +#define ncsr_read(__csr) \ +({ \ + unsigned long __r; \ + if (kvm_riscv_nacl_available()) \ + __r = nacl_csr_read(nacl_shmem(), __csr); \ + else \ + __r = csr_read(__csr); \ + __r; \ +}) + +#define ncsr_write(__csr, __val) \ +do { \ + if (kvm_riscv_nacl_sync_csr_available()) \ + nacl_csr_write(nacl_shmem(), __csr, __val); \ + else \ + csr_write(__csr, __val); \ +} while (0) + +#define ncsr_swap(__csr, __val) \ +({ \ + unsigned long __r; \ + if (kvm_riscv_nacl_sync_csr_available()) \ + __r = nacl_csr_swap(nacl_shmem(), __csr, __val); \ + else \ + __r = csr_swap(__csr, __val); \ + __r; \ +}) + +#define nsync_csr(__csr) \ +do { \ + if (kvm_riscv_nacl_sync_csr_available()) \ + nacl_sync_csr(__csr); \ +} while (0) + +#endif diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index c1eac0d093de..0fb1840c3e0a 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -16,6 +16,7 @@ kvm-y += aia_device.o kvm-y += aia_imsic.o kvm-y += main.o kvm-y += mmu.o +kvm-y += nacl.o kvm-y += tlb.o kvm-y += vcpu.o kvm-y += vcpu_exit.o diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index bab2ec34cd87..fd78f40bbb04 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -10,8 +10,8 @@ #include #include #include -#include #include +#include #include long kvm_arch_dev_ioctl(struct file *filp, @@ -22,6 +22,12 @@ long kvm_arch_dev_ioctl(struct file *filp, int kvm_arch_hardware_enable(void) { + int rc; + + rc = kvm_riscv_nacl_enable(); + if (rc) + return rc; + csr_write(CSR_HEDELEG, KVM_HEDELEG_DEFAULT); csr_write(CSR_HIDELEG, KVM_HIDELEG_DEFAULT); @@ -49,11 +55,14 @@ void kvm_arch_hardware_disable(void) csr_write(CSR_HVIP, 0); csr_write(CSR_HEDELEG, 0); csr_write(CSR_HIDELEG, 0); + + kvm_riscv_nacl_disable(); } static int __init riscv_kvm_init(void) { int rc; + char slist[64]; const char *str; if (!riscv_isa_extension_available(NULL, h)) { @@ -71,16 +80,53 @@ static int __init riscv_kvm_init(void) return -ENODEV; } + rc = kvm_riscv_nacl_init(); + if (rc && rc != -ENODEV) + return rc; + kvm_riscv_gstage_mode_detect(); kvm_riscv_gstage_vmid_detect(); rc = kvm_riscv_aia_init(); - if (rc && rc != -ENODEV) + if (rc && rc != -ENODEV) { + kvm_riscv_nacl_exit(); return rc; + } kvm_info("hypervisor extension available\n"); + if (kvm_riscv_nacl_available()) { + rc = 0; + slist[0] = '\0'; + if (kvm_riscv_nacl_sync_csr_available()) { + if (rc) + strcat(slist, ", "); + strcat(slist, "sync_csr"); + rc++; + } + if (kvm_riscv_nacl_sync_hfence_available()) { + if (rc) + strcat(slist, ", "); + strcat(slist, "sync_hfence"); + rc++; + } + if (kvm_riscv_nacl_sync_sret_available()) { + if (rc) + strcat(slist, ", "); + strcat(slist, "sync_sret"); + rc++; + } + if (kvm_riscv_nacl_autoswap_csr_available()) { + if (rc) + strcat(slist, ", "); + strcat(slist, "autoswap_csr"); + rc++; + } + kvm_info("using SBI nested acceleration with %s\n", + (rc) ? 
slist : "no features"); + } + switch (kvm_riscv_gstage_mode()) { case HGATP_MODE_SV32X4: str = "Sv32x4"; @@ -108,6 +154,7 @@ static int __init riscv_kvm_init(void) rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE); if (rc) { kvm_riscv_aia_exit(); + kvm_riscv_nacl_exit(); return rc; } @@ -119,6 +166,8 @@ static void __exit riscv_kvm_exit(void) { kvm_riscv_aia_exit(); + kvm_riscv_nacl_exit(); + kvm_exit(); } module_exit(riscv_kvm_exit); diff --git a/arch/riscv/kvm/nacl.c b/arch/riscv/kvm/nacl.c new file mode 100644 index 000000000000..08a95ad9ada2 --- /dev/null +++ b/arch/riscv/kvm/nacl.c @@ -0,0 +1,152 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2024 Ventana Micro Systems Inc. + */ + +#include +#include +#include + +DEFINE_STATIC_KEY_FALSE(kvm_riscv_nacl_available); +DEFINE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_csr_available); +DEFINE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_hfence_available); +DEFINE_STATIC_KEY_FALSE(kvm_riscv_nacl_sync_sret_available); +DEFINE_STATIC_KEY_FALSE(kvm_riscv_nacl_autoswap_csr_available); +DEFINE_PER_CPU(struct kvm_riscv_nacl, kvm_riscv_nacl); + +void __kvm_riscv_nacl_hfence(void *shmem, + unsigned long control, + unsigned long page_num, + unsigned long page_count) +{ + int i, ent = -1, try_count = 5; + unsigned long *entp; + +again: + for (i = 0; i < SBI_NACL_SHMEM_HFENCE_ENTRY_MAX; i++) { + entp = shmem + SBI_NACL_SHMEM_HFENCE_ENTRY_CONFIG(i); + if (lelong_to_cpu(*entp) & SBI_NACL_SHMEM_HFENCE_CONFIG_PEND) + continue; + + ent = i; + break; + } + + if (ent < 0) { + if (try_count) { + nacl_sync_hfence(-1UL); + goto again; + } else { + pr_warn("KVM: No free entry in NACL shared memory\n"); + return; + } + } + + entp = shmem + SBI_NACL_SHMEM_HFENCE_ENTRY_CONFIG(i); + *entp = cpu_to_lelong(control); + entp = shmem + SBI_NACL_SHMEM_HFENCE_ENTRY_PNUM(i); + *entp = cpu_to_lelong(page_num); + entp = shmem + SBI_NACL_SHMEM_HFENCE_ENTRY_PCOUNT(i); + *entp = cpu_to_lelong(page_count); +} + +int kvm_riscv_nacl_enable(void) +{ + int rc; + struct sbiret ret; + struct kvm_riscv_nacl *nacl; + + if (!kvm_riscv_nacl_available()) + return 0; + nacl = this_cpu_ptr(&kvm_riscv_nacl); + + ret = sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_SET_SHMEM, + nacl->shmem_phys, 0, 0, 0, 0, 0); + rc = sbi_err_map_linux_errno(ret.error); + if (rc) + return rc; + + return 0; +} + +void kvm_riscv_nacl_disable(void) +{ + if (!kvm_riscv_nacl_available()) + return; + + sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_SET_SHMEM, + SBI_SHMEM_DISABLE, SBI_SHMEM_DISABLE, 0, 0, 0, 0); +} + +void kvm_riscv_nacl_exit(void) +{ + int cpu; + struct kvm_riscv_nacl *nacl; + + if (!kvm_riscv_nacl_available()) + return; + + /* Allocate per-CPU shared memory */ + for_each_possible_cpu(cpu) { + nacl = per_cpu_ptr(&kvm_riscv_nacl, cpu); + if (!nacl->shmem) + continue; + + free_pages((unsigned long)nacl->shmem, + get_order(SBI_NACL_SHMEM_SIZE)); + nacl->shmem = NULL; + nacl->shmem_phys = 0; + } +} + +static long nacl_probe_feature(long feature_id) +{ + struct sbiret ret; + + if (!kvm_riscv_nacl_available()) + return 0; + + ret = sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_PROBE_FEATURE, + feature_id, 0, 0, 0, 0, 0); + return ret.value; +} + +int kvm_riscv_nacl_init(void) +{ + int cpu; + struct page *shmem_page; + struct kvm_riscv_nacl *nacl; + + if (sbi_spec_version < sbi_mk_version(1, 0) || + sbi_probe_extension(SBI_EXT_NACL) <= 0) + return -ENODEV; + + /* Enable NACL support */ + static_branch_enable(&kvm_riscv_nacl_available); + + /* Probe NACL features */ + if (nacl_probe_feature(SBI_NACL_FEAT_SYNC_CSR)) + 
static_branch_enable(&kvm_riscv_nacl_sync_csr_available); + if (nacl_probe_feature(SBI_NACL_FEAT_SYNC_HFENCE)) + static_branch_enable(&kvm_riscv_nacl_sync_hfence_available); + if (nacl_probe_feature(SBI_NACL_FEAT_SYNC_SRET)) + static_branch_enable(&kvm_riscv_nacl_sync_sret_available); + if (nacl_probe_feature(SBI_NACL_FEAT_AUTOSWAP_CSR)) + static_branch_enable(&kvm_riscv_nacl_autoswap_csr_available); + + /* Allocate per-CPU shared memory */ + for_each_possible_cpu(cpu) { + nacl = per_cpu_ptr(&kvm_riscv_nacl, cpu); + + shmem_page = alloc_pages(GFP_KERNEL | __GFP_ZERO, + get_order(SBI_NACL_SHMEM_SIZE)); + if (!shmem_page) { + kvm_riscv_nacl_exit(); + return -ENOMEM; + } + nacl->shmem = page_to_virt(shmem_page); + nacl->shmem_phys = page_to_phys(shmem_page); + } + + return 0; +} From patchwork Fri Jul 19 16:09:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13737411 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 40B1FC3DA5D for ; Fri, 19 Jul 2024 16:10:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=MyGdwU3gvgkWAJiwZ32jXd2DyOy3JiGExzjsd8SuDZk=; b=jTjeGOzgIe6kDh qynS/xhGLYsSNckaQItTcjVYKZIlIxyYGtd/I8KiV1YGeli5poFGMc77vT7ijh4lSEZ5VTzlpWjN/ o28tk7mwJO3cohoZ0occRp6BbKYrVIdKdzFtothqVb0zkcAuqehRErUUxkp0yBfxj3PrSQGGw4Wci rwvWCEJn+qozHsjnzrWWBIcFqsY9G+DBznqFSQkNrRkfhz53uhScGxDpaTtfwJCfjG3lVXZSQvY5M t+MfEW61SbQWVHDtBTbySyUoYPVjwJTNAeZAkX3F1KJbfVCt19RvR7Mlugrl6HIKtwmIWTTaVYyZi IR4lpoZX+yJAopUhH+Mw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBN-00000003Be4-0FzJ; Fri, 19 Jul 2024 16:10:09 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBH-00000003BYW-3sw5 for linux-riscv@bombadil.infradead.org; Fri, 19 Jul 2024 16:10:04 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=IsurUNCBwKbasFP+LQ4AwV8kcPbxaWWGXZnVmJhCYpY=; b=PydxK/J3Bk9+1NH7NzTPGW9rGE 8Fzk+ozR7i2QSXy0q2uCZOndL4VIs27mHiIXchwdMT9QNuAnC2JDXGSp5KMe2KARAVFPsh5sngDoU qL+q4R38RO/YKLm8FqJCY8nCbv+dborWtayIN3Gn11t7ieScBMYIwpvXFMZ/RFzRW4eX6gJrNwFht inptd9KQ87PSmbGlj7yeMIv2zfFjArSgFmAZ4x0BzmthsYHn2pm0HoqkCmQafqV9pFVSrSTx7clZh qOvLj3IzIN4l9xfw8H9tmjTcc7vBiaQD3SSv8jBmDR7XYejdDWgSdcI4cmMtMs44zXSyC6a71fGy/ TxnlCAuw==; Received: from mail-pl1-x634.google.com ([2607:f8b0:4864:20::634]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBE-00000002qF5-0Hy0 for linux-riscv@lists.infradead.org; Fri, 19 Jul 2024 16:10:02 
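
The __kvm_riscv_nacl_hfence() helper above claims a free entry in the HFENCE region (an entry is free when the PEND bit of its config word is clear), fills in the config, page-number, and page-count words, and, when no free entry is found, issues a SYNC_HFENCE so the outer hypervisor can process pending entries before retrying. A compressed user-space model of that claim-and-fill pattern, together with the RV64 config-word packing performed by nacl_hfence_mkconfig(), is sketched below; the entry count, the drain() stand-in, and all names are illustrative, not the kernel's API.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NENTRIES        60                  /* 0x780-byte region / 32-byte entries, RV64 */
#define PEND            (1ULL << 63)        /* pending flag, top bit on RV64 */

/* RV64 packing per the SBI_NACL_SHMEM_HFENCE_CONFIG_* defines */
static uint64_t mkconfig(unsigned type, unsigned order, unsigned vmid, unsigned asid)
{
        uint64_t c = PEND;

        c |= (uint64_t)(type & 0xf) << 56;          /* 4-bit TYPE           */
        c |= (uint64_t)((order - 12) & 0x7f) << 48; /* 7-bit ORDER, base 12 */
        c |= (uint64_t)(vmid & 0x3fff) << 16;       /* 14-bit VMID          */
        c |= asid & 0xffff;                         /* 16-bit ASID          */
        return c;
}

struct hfence_entry { uint64_t config, pnum, rsvd, pcount; };
static struct hfence_entry queue[NENTRIES];

static void drain(void)                     /* stand-in for nacl_sync_hfence(-1UL) */
{
        memset(queue, 0, sizeof(queue));
}

static void push_hfence(uint64_t config, uint64_t pnum, uint64_t pcount)
{
        int i, ent = -1, tries = 5;

again:
        for (i = 0; i < NENTRIES; i++) {
                if (queue[i].config & PEND)
                        continue;
                ent = i;
                break;
        }
        if (ent < 0) {
                if (tries--) {
                        drain();
                        goto again;
                }
                fprintf(stderr, "no free hfence entry\n");
                return;
        }
        queue[ent].config = config;
        queue[ent].pnum = pnum;
        queue[ent].pcount = pcount;
}

int main(void)
{
        /* queue a 4 KiB HFENCE.GVMA.VMID for the guest-physical page at 0x80000000 */
        push_hfence(mkconfig(0x2 /* GVMA_VMID */, 12, 1, 0), 0x80000000UL >> 12, 1);
        printf("entry 0 pending: %d\n", !!(queue[0].config & PEND));
        return 0;
}

The pending bit doubles as the ownership marker: KVM only writes entries whose PEND bit is clear, and the hypervisor clears the bit once it has performed the fence, so no extra locking is needed on the shared page.
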
+0000 Received: by mail-pl1-x634.google.com with SMTP id d9443c01a7336-1fc49c0aaffso20453765ad.3 for ; Fri, 19 Jul 2024 09:09:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; t=1721405396; x=1722010196; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=IsurUNCBwKbasFP+LQ4AwV8kcPbxaWWGXZnVmJhCYpY=; b=P5VkDxtzu2aPTduafE0Mh7iCDEZ+qEIWcy+ujmh/wKpViT0f8MM9ap07HfvQVSmvil MvPaCB5Y4C5lQe8d0zTwN4zK1KXbMP/WDxlZRDRwfwKDQAu1LO/I91I9sDOKWYf/mCdy UhwFSrheX0XJ/58c0vde04KPQTHGOy+wvkMr0rN4ianSRKRgBmFx7AlQOjFqaYxJFhid CfSt/pT/gd4la+FYxDoB+vfDBHnTx/UGudF3O3Nm94RM+AAipN9f+yL/Tnz/6/Dz2mwL xtVQGbbOszynk/2P2xdBhgAtE1E/Mz9yr5jNlwAEbE05m591PG/nek8YWV/8JalmW04n 11sg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721405396; x=1722010196; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=IsurUNCBwKbasFP+LQ4AwV8kcPbxaWWGXZnVmJhCYpY=; b=POKmJH79oGeL37aZ/dx4uCRyfXxStf+j6zvnklM7DHiBEIqbiG+rF2Tse9pfcOLUvV sCnX3pZAIWAQ0OcHWH0lWqQG3B6dTIhZcuPlnclhqwlMUBPJZT1FwQVONwkwD27ifHhT pVk8zxaFYk4qp7Jq8FGEJwTRmC73RiJ4BmsfmZq62ByqF5z1GWGvZcCa14Gg2XT5CLJG /t6eULvu/FQEUhMi/13WxCyC19tz5c/b48dz8equp9auUr5VMKYeLC+GxEDicoy5LOa0 h3xsa6pQRPk1cCZ5KR9BoFaG8Xqxt3VNV8twFTha6aUojU4dw5+NtRV77cGCwtDKSZGP UeZQ== X-Forwarded-Encrypted: i=1; AJvYcCW0FXAxxUcEysDubUjJ9RQQhOvm779RoGpput5//uqmDJsPjlgtIIRZjWnY5mjHzmdGMqfU/Pdat3siWljK6IkzIJfVXx5oVE7MCJdyi/gs X-Gm-Message-State: AOJu0YxKxaPFSstZ+MbubqCAt3zRzzTnGopTSKiSUSDRE5GHA53JuY+h wWFh5bDbJTyln8dI20mSop+1U+0PvDMW2HjNx6M47GpZzkZV9yd/Ngt7bgZLFhE= X-Google-Smtp-Source: AGHT+IFt4vd1+c8reh4Ha+aYln2xe2c29bGKPEtqrRXGFbhD4f57xNlONljHLht5FR4B0fjb/WuV9g== X-Received: by 2002:a17:903:182:b0:1fb:8924:df95 with SMTP id d9443c01a7336-1fd7466af15mr2373085ad.48.1721405396302; Fri, 19 Jul 2024 09:09:56 -0700 (PDT) Received: from anup-ubuntu-vm.localdomain ([223.185.135.236]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1fd6f28f518sm6632615ad.69.2024.07.19.09.09.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Jul 2024 09:09:55 -0700 (PDT) From: Anup Patel To: Palmer Dabbelt , Paul Walmsley Cc: Atish Patra , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 09/13] RISC-V: KVM: Use nacl_csr_xyz() for accessing H-extension CSRs Date: Fri, 19 Jul 2024 21:39:09 +0530 Message-Id: <20240719160913.342027-10-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com> References: <20240719160913.342027-1-apatel@ventanamicro.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240719_171000_326036_D3784CAF X-CRM114-Status: GOOD ( 14.69 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org When running under some other hypervisor, prefer nacl_csr_xyz() for accessing H-extension CSRs in the run-loop. This makes CSR access faster whenever SBI nested acceleration is available. 
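
The ncsr_read()/ncsr_write() wrappers introduced in the previous patch hide this choice from callers: when the SYNC_CSR feature is available they turn a CSR access into a plain load or store on the shared memory page (plus a dirty-bitmap update for writes), and otherwise they fall back to real csr_read()/csr_write() instructions, which trap to the outer hypervisor when KVM itself runs as a guest. A minimal user-space model of that dispatch is sketched below; the boolean flag stands in for the static key and csr_read_direct()/csr_write_direct() stand in for the actual CSR instructions.

#include <stdbool.h>
#include <stdint.h>

static bool sync_csr_available;           /* models the NACL static key            */
static uint64_t shmem_csr[1024];          /* models the CSR area of the shared page */
static uint8_t  shmem_dbitmap[128];       /* models the dirty bitmap                */
static uint64_t fake_csr[4096];           /* backing store for the direct path      */

static unsigned int csr_index(unsigned int csr)
{
        return ((csr & 0xc00) >> 2) | (csr & 0xff);
}

static uint64_t csr_read_direct(unsigned int csr)
{
        return fake_csr[csr & 0xfff];     /* would be a csrr instruction */
}

static void csr_write_direct(unsigned int csr, uint64_t v)
{
        fake_csr[csr & 0xfff] = v;        /* would be a csrw instruction */
}

static uint64_t my_ncsr_read(unsigned int csr)
{
        if (sync_csr_available)
                return shmem_csr[csr_index(csr)];     /* cheap memory load   */
        return csr_read_direct(csr);                  /* traps when nested   */
}

static void my_ncsr_write(unsigned int csr, uint64_t val)
{
        if (sync_csr_available) {
                unsigned int i = csr_index(csr);

                shmem_csr[i] = val;
                shmem_dbitmap[i >> 3] |= 1U << (i & 0x7); /* applied at next sync/SRET */
        } else {
                csr_write_direct(csr, val);
        }
}

int main(void)
{
        sync_csr_available = true;        /* pretend NACL SYNC_CSR was probed */
        my_ncsr_write(0x600 /* hstatus */, 0xa0);
        return my_ncsr_read(0x600) == 0xa0 ? 0 : 1;
}

The batching is what makes this pay off in the run-loop: a sequence of writes only dirties bytes in the bitmap, and the outer hypervisor replays them in one go at the next SYNC_CSR or SYNC_SRET instead of taking one trap per CSR instruction.
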
Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/kvm/mmu.c | 4 +- arch/riscv/kvm/vcpu.c | 103 +++++++++++++++++++++++++----------- arch/riscv/kvm/vcpu_timer.c | 28 +++++----- 3 files changed, 87 insertions(+), 48 deletions(-) diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index b63650f9b966..45ace9138947 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -15,7 +15,7 @@ #include #include #include -#include +#include #include #include @@ -732,7 +732,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu) hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID; hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN; - csr_write(CSR_HGATP, hgatp); + ncsr_write(CSR_HGATP, hgatp); if (!kvm_riscv_gstage_vmid_bits()) kvm_riscv_local_hfence_gvma_all(); diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 957e1a5e081b..00baaf1b0136 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -17,8 +17,8 @@ #include #include #include -#include #include +#include #include #define CREATE_TRACE_POINTS @@ -361,10 +361,10 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; /* Read current HVIP and VSIE CSRs */ - csr->vsie = csr_read(CSR_VSIE); + csr->vsie = ncsr_read(CSR_VSIE); /* Sync-up HVIP.VSSIP bit changes does by Guest */ - hvip = csr_read(CSR_HVIP); + hvip = ncsr_read(CSR_HVIP); if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) { if (hvip & (1UL << IRQ_VS_SOFT)) { if (!test_and_set_bit(IRQ_VS_SOFT, @@ -561,26 +561,49 @@ static void kvm_riscv_vcpu_setup_config(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { + void *nsh; struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; struct kvm_vcpu_config *cfg = &vcpu->arch.cfg; - csr_write(CSR_VSSTATUS, csr->vsstatus); - csr_write(CSR_VSIE, csr->vsie); - csr_write(CSR_VSTVEC, csr->vstvec); - csr_write(CSR_VSSCRATCH, csr->vsscratch); - csr_write(CSR_VSEPC, csr->vsepc); - csr_write(CSR_VSCAUSE, csr->vscause); - csr_write(CSR_VSTVAL, csr->vstval); - csr_write(CSR_HEDELEG, cfg->hedeleg); - csr_write(CSR_HVIP, csr->hvip); - csr_write(CSR_VSATP, csr->vsatp); - csr_write(CSR_HENVCFG, cfg->henvcfg); - if (IS_ENABLED(CONFIG_32BIT)) - csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32); - if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) { - csr_write(CSR_HSTATEEN0, cfg->hstateen0); + if (kvm_riscv_nacl_sync_csr_available()) { + nsh = nacl_shmem(); + nacl_csr_write(nsh, CSR_VSSTATUS, csr->vsstatus); + nacl_csr_write(nsh, CSR_VSIE, csr->vsie); + nacl_csr_write(nsh, CSR_VSTVEC, csr->vstvec); + nacl_csr_write(nsh, CSR_VSSCRATCH, csr->vsscratch); + nacl_csr_write(nsh, CSR_VSEPC, csr->vsepc); + nacl_csr_write(nsh, CSR_VSCAUSE, csr->vscause); + nacl_csr_write(nsh, CSR_VSTVAL, csr->vstval); + nacl_csr_write(nsh, CSR_HEDELEG, cfg->hedeleg); + nacl_csr_write(nsh, CSR_HVIP, csr->hvip); + nacl_csr_write(nsh, CSR_VSATP, csr->vsatp); + nacl_csr_write(nsh, CSR_HENVCFG, cfg->henvcfg); + if (IS_ENABLED(CONFIG_32BIT)) + nacl_csr_write(nsh, CSR_HENVCFGH, cfg->henvcfg >> 32); + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) { + nacl_csr_write(nsh, CSR_HSTATEEN0, cfg->hstateen0); + if (IS_ENABLED(CONFIG_32BIT)) + nacl_csr_write(nsh, CSR_HSTATEEN0H, cfg->hstateen0 >> 32); + } + } else { + csr_write(CSR_VSSTATUS, csr->vsstatus); + csr_write(CSR_VSIE, csr->vsie); + csr_write(CSR_VSTVEC, csr->vstvec); + csr_write(CSR_VSSCRATCH, csr->vsscratch); + csr_write(CSR_VSEPC, csr->vsepc); + csr_write(CSR_VSCAUSE, csr->vscause); + 
csr_write(CSR_VSTVAL, csr->vstval); + csr_write(CSR_HEDELEG, cfg->hedeleg); + csr_write(CSR_HVIP, csr->hvip); + csr_write(CSR_VSATP, csr->vsatp); + csr_write(CSR_HENVCFG, cfg->henvcfg); if (IS_ENABLED(CONFIG_32BIT)) - csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32); + csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32); + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) { + csr_write(CSR_HSTATEEN0, cfg->hstateen0); + if (IS_ENABLED(CONFIG_32BIT)) + csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32); + } } kvm_riscv_gstage_update_hgatp(vcpu); @@ -603,6 +626,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + void *nsh; struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; vcpu->cpu = -1; @@ -618,15 +642,28 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) vcpu->arch.isa); kvm_riscv_vcpu_host_vector_restore(&vcpu->arch.host_context); - csr->vsstatus = csr_read(CSR_VSSTATUS); - csr->vsie = csr_read(CSR_VSIE); - csr->vstvec = csr_read(CSR_VSTVEC); - csr->vsscratch = csr_read(CSR_VSSCRATCH); - csr->vsepc = csr_read(CSR_VSEPC); - csr->vscause = csr_read(CSR_VSCAUSE); - csr->vstval = csr_read(CSR_VSTVAL); - csr->hvip = csr_read(CSR_HVIP); - csr->vsatp = csr_read(CSR_VSATP); + if (kvm_riscv_nacl_available()) { + nsh = nacl_shmem(); + csr->vsstatus = nacl_csr_read(nsh, CSR_VSSTATUS); + csr->vsie = nacl_csr_read(nsh, CSR_VSIE); + csr->vstvec = nacl_csr_read(nsh, CSR_VSTVEC); + csr->vsscratch = nacl_csr_read(nsh, CSR_VSSCRATCH); + csr->vsepc = nacl_csr_read(nsh, CSR_VSEPC); + csr->vscause = nacl_csr_read(nsh, CSR_VSCAUSE); + csr->vstval = nacl_csr_read(nsh, CSR_VSTVAL); + csr->hvip = nacl_csr_read(nsh, CSR_HVIP); + csr->vsatp = nacl_csr_read(nsh, CSR_VSATP); + } else { + csr->vsstatus = csr_read(CSR_VSSTATUS); + csr->vsie = csr_read(CSR_VSIE); + csr->vstvec = csr_read(CSR_VSTVEC); + csr->vsscratch = csr_read(CSR_VSSCRATCH); + csr->vsepc = csr_read(CSR_VSEPC); + csr->vscause = csr_read(CSR_VSCAUSE); + csr->vstval = csr_read(CSR_VSTVAL); + csr->hvip = csr_read(CSR_HVIP); + csr->vsatp = csr_read(CSR_VSATP); + } } static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) @@ -681,7 +718,7 @@ static void kvm_riscv_update_hvip(struct kvm_vcpu *vcpu) { struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; - csr_write(CSR_HVIP, csr->hvip); + ncsr_write(CSR_HVIP, csr->hvip); kvm_riscv_vcpu_aia_update_hvip(vcpu); } @@ -728,7 +765,9 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_swap_in_guest_state(vcpu); guest_state_enter_irqoff(); - hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus); + hcntx->hstatus = ncsr_swap(CSR_HSTATUS, gcntx->hstatus); + + nsync_csr(-1UL); __kvm_riscv_switch_to(&vcpu->arch); @@ -863,8 +902,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) trap.sepc = vcpu->arch.guest_context.sepc; trap.scause = csr_read(CSR_SCAUSE); trap.stval = csr_read(CSR_STVAL); - trap.htval = csr_read(CSR_HTVAL); - trap.htinst = csr_read(CSR_HTINST); + trap.htval = ncsr_read(CSR_HTVAL); + trap.htinst = ncsr_read(CSR_HTINST); /* Syncup interrupts state with HW */ kvm_riscv_vcpu_sync_interrupts(vcpu); diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c index 75486b25ac45..96e7a4e463f7 100644 --- a/arch/riscv/kvm/vcpu_timer.c +++ b/arch/riscv/kvm/vcpu_timer.c @@ -11,8 +11,8 @@ #include #include #include -#include #include +#include #include static u64 kvm_riscv_current_cycles(struct kvm_guest_timer *gt) @@ -72,12 +72,12 @@ static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t) static 
int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles) { #if defined(CONFIG_32BIT) - csr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF); - csr_write(CSR_VSTIMECMPH, ncycles >> 32); + ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF); + ncsr_write(CSR_VSTIMECMPH, ncycles >> 32); #else - csr_write(CSR_VSTIMECMP, ncycles); + ncsr_write(CSR_VSTIMECMP, ncycles); #endif - return 0; + return 0; } static int kvm_riscv_vcpu_update_hrtimer(struct kvm_vcpu *vcpu, u64 ncycles) @@ -289,10 +289,10 @@ static void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu) struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer; #if defined(CONFIG_32BIT) - csr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta)); - csr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32)); + ncsr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta)); + ncsr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32)); #else - csr_write(CSR_HTIMEDELTA, gt->time_delta); + ncsr_write(CSR_HTIMEDELTA, gt->time_delta); #endif } @@ -306,10 +306,10 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu) return; #if defined(CONFIG_32BIT) - csr_write(CSR_VSTIMECMP, (u32)t->next_cycles); - csr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32)); + ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles); + ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32)); #else - csr_write(CSR_VSTIMECMP, t->next_cycles); + ncsr_write(CSR_VSTIMECMP, t->next_cycles); #endif /* timer should be enabled for the remaining operations */ @@ -327,10 +327,10 @@ void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu) return; #if defined(CONFIG_32BIT) - t->next_cycles = csr_read(CSR_VSTIMECMP); - t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32; + t->next_cycles = ncsr_read(CSR_VSTIMECMP); + t->next_cycles |= (u64)ncsr_read(CSR_VSTIMECMPH) << 32; #else - t->next_cycles = csr_read(CSR_VSTIMECMP); + t->next_cycles = ncsr_read(CSR_VSTIMECMP); #endif } From patchwork Fri Jul 19 16:09:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13737412 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DB9FFC3DA5D for ; Fri, 19 Jul 2024 16:10:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=IULoMU62cUr9xVkiBCpCQN39qF9ZEi/2E/An37AQr4o=; b=MYl2hpABD5t5QY iZYfHpOPwI3ipO38NwoNe5GcaV33KlFzo67RlQ1qweDDUjg32v2aSqblDDbYOXgUU5MY6RT1CRMiI WJquLKUbA6oJR44mbAQf8SNScbEVoELXDO3DvIUWXRsXmtCmZLO7kslsGkaYkPa7iO/ZL9Zc+5Cpk xj8+g1dgdl96O6BbnsG6bI+M8uoMWEKMVU3EmEIl7w4pec0OqI8hHLXrYioO3pFd9SIvazfwBsAtv IWKv6so2y7qmMhgT6K2Wzwe79A0n0iXUMww3+7EMva1hfWkEpH5sATHRN0MXbrM5mhNRK0vmqoG5+ /v/7SeTjMQMQbZZn7Fuw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBQ-00000003BhA-33V7; Fri, 19 Jul 2024 16:10:12 +0000 Received: from mail-pl1-f173.google.com 
([209.85.214.173]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBF-00000003BWX-47Jj for linux-riscv@lists.infradead.org; Fri, 19 Jul 2024 16:10:05 +0000 Received: by mail-pl1-f173.google.com with SMTP id d9443c01a7336-1fd70ba6a15so2630425ad.0 for ; Fri, 19 Jul 2024 09:10:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; t=1721405400; x=1722010200; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=QtNFL5zw6IBLLvkGdXCYkboBqItL+86jnTgFrfDdkSg=; b=nkPlPS5nsTabTmySHy2/cC7qk6i7Zvq3gAYxKEkvDiHmNWnML4s6SmfCt5UGnzfJ57 36K1UzV27hYXuQiDgmTi0fWG8xM8Gh0gONnA0VTif9hNFHoSZMEoPMyyUByxSzk741Nr qt6KD3cHXUABRrOe3ESVNyXVhrx8MeV6vS6mKEagCsxRtYe2XwlmBnCn/qvNLXUsjkfT 8h52t0MXp0h4puex8hqqBriUkddy9i+OOSW7xKYFVOddVu2H3UvkuODMiIuTop5G/Zfv 9AgkldpNGSGoCdTfWByvO4/PCcVSjnDfMVCDDsWnwMzp3MrZ11spsCONg9jhYB/8tIbw bZ7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721405400; x=1722010200; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QtNFL5zw6IBLLvkGdXCYkboBqItL+86jnTgFrfDdkSg=; b=Z5lqxWhILFlHwPa2MmE83TTRs5MERt7xpGxCb1xzkyCpOHUC/kCYL7bfXToRNhUs8Q ubNlWepYspopgweV1Jnl6NHFEnrZrmQ4UdvtjquXSK8Sgztr5S4b20YnylyoFQLDb+IC FJ5PxP+UjWsLJn7Ok1jdKLIw2oYqpcJ+meu3hYpoLWZOCyO7pWFjFBxTwpgECMQeQxpy 3vFZUcYm+h5/8G09qeyYk+rGO1XYR2odQPpPJ+3qkZBpzG5OavnFB25WTa9DjG/OebOb akNZOx/fYzaTPywrA7psQaGWL0/a7vW1BTa733OnU2Nirz9QQ2uIDi1txRYoeMOFGVjG ILkA== X-Forwarded-Encrypted: i=1; AJvYcCXGnC+CQENGt4kxU5x1Yp+GMmKoLVEtLyWh6TmUiVHzFgUPPaXJPOLPimozUaaHkKfQ9cGnLhePcf3uOVvV+lVVJfWGoHNBjP9V6Jv2nES9 X-Gm-Message-State: AOJu0YzH1yX38ao/pNLZ6/U1I86d48NJjpI/g9gbwgPUxsBQgSjZawW2 M1qybtuyQoScI8CkOrbZ5Srljj8nkQh1sT+pBlfAVC0P6QiyiaH3q/7mgrjclMo= X-Google-Smtp-Source: AGHT+IE6FYkmm8DMa+wcmCi2mhwqT2Wr9Wnby8aQ4da6gYfYdk6hWAZmI/41ReZ874E3E8C4iOimbQ== X-Received: by 2002:a17:902:fd48:b0:1fb:74b3:53d5 with SMTP id d9443c01a7336-1fd7457c7dcmr2679895ad.35.1721405399762; Fri, 19 Jul 2024 09:09:59 -0700 (PDT) Received: from anup-ubuntu-vm.localdomain ([223.185.135.236]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1fd6f28f518sm6632615ad.69.2024.07.19.09.09.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Jul 2024 09:09:59 -0700 (PDT) From: Anup Patel To: Palmer Dabbelt , Paul Walmsley Cc: Atish Patra , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 10/13] RISC-V: KVM: Use nacl_csr_xyz() for accessing AIA CSRs Date: Fri, 19 Jul 2024 21:39:10 +0530 Message-Id: <20240719160913.342027-11-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com> References: <20240719160913.342027-1-apatel@ventanamicro.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240719_091002_706464_4B969758 X-CRM114-Status: GOOD ( 13.02 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org When running under some other hypervisor, prefer 
nacl_csr_xyz() for accessing AIA CSRs in the run-loop. This makes CSR access faster whenever SBI nested acceleration is available. Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/kvm/aia.c | 97 ++++++++++++++++++++++++++++---------------- 1 file changed, 63 insertions(+), 34 deletions(-) diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c index 8ffae0330c89..dcced4db7fe8 100644 --- a/arch/riscv/kvm/aia.c +++ b/arch/riscv/kvm/aia.c @@ -16,6 +16,7 @@ #include #include #include +#include struct aia_hgei_control { raw_spinlock_t lock; @@ -88,7 +89,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr; if (kvm_riscv_aia_available()) - csr->vsieh = csr_read(CSR_VSIEH); + csr->vsieh = ncsr_read(CSR_VSIEH); } #endif @@ -115,7 +116,7 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) hgei = aia_find_hgei(vcpu); if (hgei > 0) - return !!(csr_read(CSR_HGEIP) & BIT(hgei)); + return !!(ncsr_read(CSR_HGEIP) & BIT(hgei)); return false; } @@ -128,45 +129,73 @@ void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu) return; #ifdef CONFIG_32BIT - csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph); + ncsr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph); #endif - csr_write(CSR_HVICTL, aia_hvictl_value(!!(csr->hvip & BIT(IRQ_VS_EXT)))); + ncsr_write(CSR_HVICTL, aia_hvictl_value(!!(csr->hvip & BIT(IRQ_VS_EXT)))); } void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu) { struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr; + void *nsh; if (!kvm_riscv_aia_available()) return; - csr_write(CSR_VSISELECT, csr->vsiselect); - csr_write(CSR_HVIPRIO1, csr->hviprio1); - csr_write(CSR_HVIPRIO2, csr->hviprio2); + if (kvm_riscv_nacl_sync_csr_available()) { + nsh = nacl_shmem(); + nacl_csr_write(nsh, CSR_VSISELECT, csr->vsiselect); + nacl_csr_write(nsh, CSR_HVIPRIO1, csr->hviprio1); + nacl_csr_write(nsh, CSR_HVIPRIO2, csr->hviprio2); +#ifdef CONFIG_32BIT + nacl_csr_write(nsh, CSR_VSIEH, csr->vsieh); + nacl_csr_write(nsh, CSR_HVIPH, csr->hviph); + nacl_csr_write(nsh, CSR_HVIPRIO1H, csr->hviprio1h); + nacl_csr_write(nsh, CSR_HVIPRIO2H, csr->hviprio2h); +#endif + } else { + csr_write(CSR_VSISELECT, csr->vsiselect); + csr_write(CSR_HVIPRIO1, csr->hviprio1); + csr_write(CSR_HVIPRIO2, csr->hviprio2); #ifdef CONFIG_32BIT - csr_write(CSR_VSIEH, csr->vsieh); - csr_write(CSR_HVIPH, csr->hviph); - csr_write(CSR_HVIPRIO1H, csr->hviprio1h); - csr_write(CSR_HVIPRIO2H, csr->hviprio2h); + csr_write(CSR_VSIEH, csr->vsieh); + csr_write(CSR_HVIPH, csr->hviph); + csr_write(CSR_HVIPRIO1H, csr->hviprio1h); + csr_write(CSR_HVIPRIO2H, csr->hviprio2h); #endif + } } void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu) { struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr; + void *nsh; if (!kvm_riscv_aia_available()) return; - csr->vsiselect = csr_read(CSR_VSISELECT); - csr->hviprio1 = csr_read(CSR_HVIPRIO1); - csr->hviprio2 = csr_read(CSR_HVIPRIO2); + if (kvm_riscv_nacl_available()) { + nsh = nacl_shmem(); + csr->vsiselect = nacl_csr_read(nsh, CSR_VSISELECT); + csr->hviprio1 = nacl_csr_read(nsh, CSR_HVIPRIO1); + csr->hviprio2 = nacl_csr_read(nsh, CSR_HVIPRIO2); #ifdef CONFIG_32BIT - csr->vsieh = csr_read(CSR_VSIEH); - csr->hviph = csr_read(CSR_HVIPH); - csr->hviprio1h = csr_read(CSR_HVIPRIO1H); - csr->hviprio2h = csr_read(CSR_HVIPRIO2H); + csr->vsieh = nacl_csr_read(nsh, CSR_VSIEH); + csr->hviph = nacl_csr_read(nsh, CSR_HVIPH); + csr->hviprio1h = nacl_csr_read(nsh, 
CSR_HVIPRIO1H); + csr->hviprio2h = nacl_csr_read(nsh, CSR_HVIPRIO2H); #endif + } else { + csr->vsiselect = csr_read(CSR_VSISELECT); + csr->hviprio1 = csr_read(CSR_HVIPRIO1); + csr->hviprio2 = csr_read(CSR_HVIPRIO2); +#ifdef CONFIG_32BIT + csr->vsieh = csr_read(CSR_VSIEH); + csr->hviph = csr_read(CSR_HVIPH); + csr->hviprio1h = csr_read(CSR_HVIPRIO1H); + csr->hviprio2h = csr_read(CSR_HVIPRIO2H); +#endif + } } int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, @@ -250,20 +279,20 @@ static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq) switch (bitpos / BITS_PER_LONG) { case 0: - hviprio = csr_read(CSR_HVIPRIO1); + hviprio = ncsr_read(CSR_HVIPRIO1); break; case 1: #ifndef CONFIG_32BIT - hviprio = csr_read(CSR_HVIPRIO2); + hviprio = ncsr_read(CSR_HVIPRIO2); break; #else - hviprio = csr_read(CSR_HVIPRIO1H); + hviprio = ncsr_read(CSR_HVIPRIO1H); break; case 2: - hviprio = csr_read(CSR_HVIPRIO2); + hviprio = ncsr_read(CSR_HVIPRIO2); break; case 3: - hviprio = csr_read(CSR_HVIPRIO2H); + hviprio = ncsr_read(CSR_HVIPRIO2H); break; #endif default: @@ -283,20 +312,20 @@ static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio) switch (bitpos / BITS_PER_LONG) { case 0: - hviprio = csr_read(CSR_HVIPRIO1); + hviprio = ncsr_read(CSR_HVIPRIO1); break; case 1: #ifndef CONFIG_32BIT - hviprio = csr_read(CSR_HVIPRIO2); + hviprio = ncsr_read(CSR_HVIPRIO2); break; #else - hviprio = csr_read(CSR_HVIPRIO1H); + hviprio = ncsr_read(CSR_HVIPRIO1H); break; case 2: - hviprio = csr_read(CSR_HVIPRIO2); + hviprio = ncsr_read(CSR_HVIPRIO2); break; case 3: - hviprio = csr_read(CSR_HVIPRIO2H); + hviprio = ncsr_read(CSR_HVIPRIO2H); break; #endif default: @@ -308,20 +337,20 @@ static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio) switch (bitpos / BITS_PER_LONG) { case 0: - csr_write(CSR_HVIPRIO1, hviprio); + ncsr_write(CSR_HVIPRIO1, hviprio); break; case 1: #ifndef CONFIG_32BIT - csr_write(CSR_HVIPRIO2, hviprio); + ncsr_write(CSR_HVIPRIO2, hviprio); break; #else - csr_write(CSR_HVIPRIO1H, hviprio); + ncsr_write(CSR_HVIPRIO1H, hviprio); break; case 2: - csr_write(CSR_HVIPRIO2, hviprio); + ncsr_write(CSR_HVIPRIO2, hviprio); break; case 3: - csr_write(CSR_HVIPRIO2H, hviprio); + ncsr_write(CSR_HVIPRIO2H, hviprio); break; #endif default: @@ -377,7 +406,7 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, return KVM_INSN_ILLEGAL_TRAP; /* First try to emulate in kernel space */ - isel = csr_read(CSR_VSISELECT) & ISELECT_MASK; + isel = ncsr_read(CSR_VSISELECT) & ISELECT_MASK; if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15) return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask); else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST && From patchwork Fri Jul 19 16:09:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13737413 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 77A7BC3DA5D for ; Fri, 19 Jul 2024 16:10:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: 
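
The switch statements in aia_get_iprio8()/aia_set_iprio8() above pick which HVIPRIO CSR holds a given 8-bit priority field from the field's bit position, and this patch only changes how the chosen CSR is accessed (ncsr_read()/ncsr_write() instead of csr_read()/csr_write()). A rough user-space model of the selection and byte extraction for the RV64 case, where only HVIPRIO1 and HVIPRIO2 exist, is sketched below; the regs[] array and all names are stand-ins, and the irq-to-bit-position mapping is deliberately not modelled.

#include <stdint.h>

#define XLEN 64

static uint64_t regs[2];        /* [0] ~ HVIPRIO1, [1] ~ HVIPRIO2 (RV64) */

static uint8_t get_iprio8(unsigned int bitpos)
{
        uint64_t hviprio = regs[bitpos / XLEN];      /* the switch in aia.c */

        return (hviprio >> (bitpos % XLEN)) & 0xff;
}

static void set_iprio8(unsigned int bitpos, uint8_t prio)
{
        uint64_t *hviprio = &regs[bitpos / XLEN];

        *hviprio &= ~(0xffULL << (bitpos % XLEN));
        *hviprio |= (uint64_t)prio << (bitpos % XLEN);
}

int main(void)
{
        set_iprio8(72, 0x5);            /* byte 1 of the second register, illustrative */
        return get_iprio8(72) == 0x5 ? 0 : 1;
}
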
AJvYcCU+OHkBLjEx7OsW9t6JpFXjXQjSU8m+BDZysRr7b9dq51QSl73PMb5+XC8xMi59mYzOB1hRf0iprZXzKZMGUCBOarZhydMXBENaOinz030N X-Gm-Message-State: AOJu0YwrLZuOlw2wYLDnjEx9XRp8pclQI0kpgSMCerIIWH3ZcVOviA5a 2wsNuyB8vesjoR/6n7rBXKsrb4wRasIAF6gNCTtgQgrNeqm5YH7yOFQIkrIk/JQ= X-Google-Smtp-Source: AGHT+IHnsrDgJZ0Xs5ITUii+G2fuHbMcUuyE62ZDbeRXtLEqFfI/Ns6FYVTGusklNoZKrSdaOfNp8w== X-Received: by 2002:a17:902:f947:b0:1fb:81ec:26da with SMTP id d9443c01a7336-1fd7462c073mr2523815ad.58.1721405403214; Fri, 19 Jul 2024 09:10:03 -0700 (PDT) Received: from anup-ubuntu-vm.localdomain ([223.185.135.236]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1fd6f28f518sm6632615ad.69.2024.07.19.09.10.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Jul 2024 09:10:02 -0700 (PDT) From: Anup Patel To: Palmer Dabbelt , Paul Walmsley Cc: Atish Patra , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 11/13] RISC-V: KVM: Use SBI sync SRET call when available Date: Fri, 19 Jul 2024 21:39:11 +0530 Message-Id: <20240719160913.342027-12-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com> References: <20240719160913.342027-1-apatel@ventanamicro.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240719_171006_549530_5B107A2E X-CRM114-Status: GOOD ( 14.21 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Implement an optimized KVM world-switch using SBI sync SRET call when SBI nested acceleration extension is available. This improves KVM world-switch when KVM RISC-V is running as a Guest under some other hypervisor. Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_nacl.h | 32 +++++++++++++++++++++ arch/riscv/kvm/vcpu.c | 48 ++++++++++++++++++++++++++++--- arch/riscv/kvm/vcpu_switch.S | 29 +++++++++++++++++++ 3 files changed, 105 insertions(+), 4 deletions(-) diff --git a/arch/riscv/include/asm/kvm_nacl.h b/arch/riscv/include/asm/kvm_nacl.h index a704e8000a58..5e74238ea525 100644 --- a/arch/riscv/include/asm/kvm_nacl.h +++ b/arch/riscv/include/asm/kvm_nacl.h @@ -12,6 +12,8 @@ #include #include +struct kvm_vcpu_arch; + DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_available); #define kvm_riscv_nacl_available() \ static_branch_unlikely(&kvm_riscv_nacl_available) @@ -43,6 +45,10 @@ void __kvm_riscv_nacl_hfence(void *shmem, unsigned long page_num, unsigned long page_count); +void __kvm_riscv_nacl_switch_to(struct kvm_vcpu_arch *vcpu_arch, + unsigned long sbi_ext_id, + unsigned long sbi_func_id); + int kvm_riscv_nacl_enable(void); void kvm_riscv_nacl_disable(void); @@ -64,6 +70,32 @@ int kvm_riscv_nacl_init(void); #define nacl_shmem_fast() \ (kvm_riscv_nacl_available() ? 
nacl_shmem() : NULL) +#define nacl_scratch_read_long(__shmem, __offset) \ +({ \ + unsigned long *__p = (__shmem) + \ + SBI_NACL_SHMEM_SCRATCH_OFFSET + \ + (__offset); \ + lelong_to_cpu(*__p); \ +}) + +#define nacl_scratch_write_long(__shmem, __offset, __val) \ +do { \ + unsigned long *__p = (__shmem) + \ + SBI_NACL_SHMEM_SCRATCH_OFFSET + \ + (__offset); \ + *__p = cpu_to_lelong(__val); \ +} while (0) + +#define nacl_scratch_write_longs(__shmem, __offset, __array, __count) \ +do { \ + unsigned int __i; \ + unsigned long *__p = (__shmem) + \ + SBI_NACL_SHMEM_SCRATCH_OFFSET + \ + (__offset); \ + for (__i = 0; __i < (__count); __i++) \ + __p[__i] = cpu_to_lelong((__array)[__i]); \ +} while (0) + #define nacl_sync_hfence(__e) \ sbi_ecall(SBI_EXT_NACL, SBI_EXT_NACL_SYNC_HFENCE, \ (__e), 0, 0, 0, 0, 0) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 00baaf1b0136..fe849fb1aaab 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -759,19 +759,59 @@ static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *v */ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu) { + void *nsh; struct kvm_cpu_context *gcntx = &vcpu->arch.guest_context; struct kvm_cpu_context *hcntx = &vcpu->arch.host_context; kvm_riscv_vcpu_swap_in_guest_state(vcpu); guest_state_enter_irqoff(); - hcntx->hstatus = ncsr_swap(CSR_HSTATUS, gcntx->hstatus); + if (kvm_riscv_nacl_sync_sret_available()) { + nsh = nacl_shmem(); - nsync_csr(-1UL); + if (kvm_riscv_nacl_autoswap_csr_available()) { + hcntx->hstatus = + nacl_csr_read(nsh, CSR_HSTATUS); + nacl_scratch_write_long(nsh, + SBI_NACL_SHMEM_AUTOSWAP_OFFSET + + SBI_NACL_SHMEM_AUTOSWAP_HSTATUS, + gcntx->hstatus); + nacl_scratch_write_long(nsh, + SBI_NACL_SHMEM_AUTOSWAP_OFFSET, + SBI_NACL_SHMEM_AUTOSWAP_FLAG_HSTATUS); + } else if (kvm_riscv_nacl_sync_csr_available()) { + hcntx->hstatus = nacl_csr_swap(nsh, + CSR_HSTATUS, gcntx->hstatus); + } else { + hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus); + } - __kvm_riscv_switch_to(&vcpu->arch); + nacl_scratch_write_longs(nsh, + SBI_NACL_SHMEM_SRET_OFFSET + + SBI_NACL_SHMEM_SRET_X(1), + &gcntx->ra, + SBI_NACL_SHMEM_SRET_X_LAST); + + __kvm_riscv_nacl_switch_to(&vcpu->arch, SBI_EXT_NACL, + SBI_EXT_NACL_SYNC_SRET); + + if (kvm_riscv_nacl_autoswap_csr_available()) { + nacl_scratch_write_long(nsh, + SBI_NACL_SHMEM_AUTOSWAP_OFFSET, + 0); + gcntx->hstatus = nacl_scratch_read_long(nsh, + SBI_NACL_SHMEM_AUTOSWAP_OFFSET + + SBI_NACL_SHMEM_AUTOSWAP_HSTATUS); + } else { + gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus); + } + } else { + hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus); - gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus); + __kvm_riscv_switch_to(&vcpu->arch); + + gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus); + } vcpu->arch.last_exit_cpu = vcpu->cpu; guest_state_exit_irqoff(); diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S index 9f13e5ce6a18..47686bcb21e0 100644 --- a/arch/riscv/kvm/vcpu_switch.S +++ b/arch/riscv/kvm/vcpu_switch.S @@ -218,6 +218,35 @@ SYM_FUNC_START(__kvm_riscv_switch_to) ret SYM_FUNC_END(__kvm_riscv_switch_to) + /* + * Parameters: + * A0 <= Pointer to struct kvm_vcpu_arch + * A1 <= SBI extension ID + * A2 <= SBI function ID + */ +SYM_FUNC_START(__kvm_riscv_nacl_switch_to) + SAVE_HOST_GPRS + + SAVE_HOST_AND_RESTORE_GUEST_CSRS .Lkvm_nacl_switch_return + + /* Resume Guest using SBI nested acceleration */ + add a6, a2, zero + add a7, a1, zero + ecall + + /* Back to Host */ + .align 2 
+.Lkvm_nacl_switch_return: + SAVE_GUEST_GPRS + + SAVE_GUEST_AND_RESTORE_HOST_CSRS + + RESTORE_HOST_GPRS + + /* Return to C code */ + ret +SYM_FUNC_END(__kvm_riscv_nacl_switch_to) + SYM_CODE_START(__kvm_riscv_unpriv_trap) /* * We assume that faulting unpriv load/store instruction is From patchwork Fri Jul 19 16:09:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13737414 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D78FBC3DA5D for ; Fri, 19 Jul 2024 16:10:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wXQLJNX47QDO8WriiV7rJ4z6U5rGoecLNZWbcjP/jik=; b=vrjHMWHR5fCaGi tBbTFMx3ELksYwjPaQvFkxUJtRKU6ySlygWN4VoWntJnxFZ/22a3MA0tL88u6RIbEpgJQ+wAuyfjh wj4m5YZ5v1+SsioO37jOqXvdeUUQ2Q+08OMyOe1DBAsqQZ+gOXc3d07uzco8DObcUW6HUHof5X80k Avhy6K6Yg8cxabW7KR00U5uVCyGMgK1bP8L49EQaCsK/V1wkybzRTURrJavPyu41S+gEaVhIvLTfM UkXpV866Doz3qF3JCv6WwEUaK50X1HbPFfePBsTmbx84VyYOf6DORfB/wBSQd9UaDSLlMD1flYEry G2adiOjNSMBU5xJSb07A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBe-00000003Bta-1g3b; Fri, 19 Jul 2024 16:10:26 +0000 Received: from mail-pf1-x42d.google.com ([2607:f8b0:4864:20::42d]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sUqBM-00000003Bca-1q2j for linux-riscv@lists.infradead.org; Fri, 19 Jul 2024 16:10:10 +0000 Received: by mail-pf1-x42d.google.com with SMTP id d2e1a72fcca58-70b31272a04so688360b3a.1 for ; Fri, 19 Jul 2024 09:10:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; t=1721405407; x=1722010207; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=e4NP08UX/w85iYkbIVPDnOkRGbucYck2yHjBnwqEXTo=; b=Te2RD60wDYPKy0QqWcfSFAtQIBksL8LEVIXBNymarOMwrz2vaoW2yUWTb6/jgEdu1v LqE7TgVCPPMseQvOtByoEHyGkHrXv0dM7JvTcq/oNyUT+zWFL8lb7pLbr4MTGxIs6k5k ro2bVX87iaQnL1ZEk0YKq0wYsXBPTxfxKTZ5OrS6RfxfAnejQSjdnHTm/0lFYnxwjMHu DLw7z/NVxw4e2dqvh90u5cPQtSL7sQ7VqxqobrybhuAVk5tChko/Az7l4KHNCVfB3ZIc 2iioyU/XG7J12lo4ZX4PyzVvGDH6d9T/yZo/hlHED8HNNqDtGc8PXHD/QkvdVLDEZY9B 7zRA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721405407; x=1722010207; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=e4NP08UX/w85iYkbIVPDnOkRGbucYck2yHjBnwqEXTo=; b=moiE63vHfOP6Aku/sVUuI/KLFQvLHDJvTQ9vYCf8R161LwP/bxQNd3cB6jQEVLMkVZ 7b/Be2XqtAyWDEJ/wUVCtsoBnCUcrSDPTl+N+h55hx/sUTjRiQH/h8glDEALoNgguRN9 YHqsHfSG4fa2poyMbaEhxqWSk5uzI+Ixwj2Oyfxwgfi3PMMjYh6UdfHR25FbsVvJ4K1V 
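
For the SYNC_SRET fast path above, the guest GPRs x1..x31 are staged into the SRET area of the shared memory (one XLEN-sized slot per register number, with slot 0 corresponding to x0 and left untouched) before __kvm_riscv_nacl_switch_to() issues the ecall, while hstatus is either handed over via the autoswap area or swapped explicitly. A stand-alone sketch of just the GPR staging step for RV64 is shown below; the names are illustrative, not the kernel's, and a little-endian host is assumed as in the patch.

#include <stdint.h>
#include <string.h>

#define XLEN_BYTES   8                         /* assuming RV64 */
#define SRET_OFF     0x0000
#define SRET_X(i)    (SRET_OFF + XLEN_BYTES * (i))
#define SRET_X_LAST  31

/*
 * Stage guest GPRs x1..x31 into the SRET region of the shared memory,
 * mirroring the nacl_scratch_write_longs() call made before SYNC_SRET.
 * gprs[0] corresponds to x0/zero and is skipped.
 */
static void stage_sret_gprs(void *shmem, const uint64_t gprs[32])
{
        uint64_t *slot = (uint64_t *)((char *)shmem + SRET_X(1));

        memcpy(slot, &gprs[1], SRET_X_LAST * XLEN_BYTES);
}

int main(void)
{
        static uint64_t shmem[0x3000 / 8];     /* 0x3000-byte shared memory, RV64 */
        uint64_t gprs[32] = { [1] = 0x1111 /* ra */, [2] = 0x2222 /* sp */ };

        stage_sret_gprs(shmem, gprs);
        return shmem[SRET_X(1) / 8] == 0x1111 ? 0 : 1;
}

Staging the registers this way lets the outer hypervisor load the full guest register file and enter the guest in a single SYNC_SRET trap, which is what makes this world-switch cheaper than the plain __kvm_riscv_switch_to() path when running nested.
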
From patchwork Fri Jul 19 16:09:12 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737414
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 12/13] RISC-V: KVM: Save trap CSRs in kvm_riscv_vcpu_enter_exit()
Date: Fri, 19 Jul 2024 21:39:12 +0530
Message-Id: <20240719160913.342027-13-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

Save trap CSRs in the kvm_riscv_vcpu_enter_exit() function instead of
the kvm_arch_vcpu_ioctl_run() function so that the HTVAL and HTINST CSRs
are accessed more efficiently while running under some other hypervisor.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index fe849fb1aaab..854d98aa165e 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -757,12 +757,21 @@ static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *v
  * This must be noinstr as instrumentation may make use of RCU, and this is not
  * safe during the EQS.
  */
-static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+					      struct kvm_cpu_trap *trap)
 {
 	void *nsh;
 	struct kvm_cpu_context *gcntx = &vcpu->arch.guest_context;
 	struct kvm_cpu_context *hcntx = &vcpu->arch.host_context;
 
+	/*
+	 * We save trap CSRs (such as SEPC, SCAUSE, STVAL, HTVAL, and
+	 * HTINST) here because we do local_irq_enable() after this
+	 * function in kvm_arch_vcpu_ioctl_run() which can result in
+	 * an interrupt immediately after local_irq_enable() and can
+	 * potentially change trap CSRs.
+	 */
+
 	kvm_riscv_vcpu_swap_in_guest_state(vcpu);
 
 	guest_state_enter_irqoff();
 
@@ -805,14 +814,24 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 		} else {
 			gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
 		}
+
+		trap->htval = nacl_csr_read(nsh, CSR_HTVAL);
+		trap->htinst = nacl_csr_read(nsh, CSR_HTINST);
 	} else {
 		hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus);
 
 		__kvm_riscv_switch_to(&vcpu->arch);
 
 		gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
+
+		trap->htval = csr_read(CSR_HTVAL);
+		trap->htinst = csr_read(CSR_HTINST);
 	}
 
+	trap->sepc = gcntx->sepc;
+	trap->scause = csr_read(CSR_SCAUSE);
+	trap->stval = csr_read(CSR_STVAL);
+
 	vcpu->arch.last_exit_cpu = vcpu->cpu;
 	guest_state_exit_irqoff();
 
 	kvm_riscv_vcpu_swap_in_host_state(vcpu);
@@ -929,22 +948,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		guest_timing_enter_irqoff();
 
-		kvm_riscv_vcpu_enter_exit(vcpu);
+		kvm_riscv_vcpu_enter_exit(vcpu, &trap);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		vcpu->stat.exits++;
 
-		/*
-		 * Save SCAUSE, STVAL, HTVAL, and HTINST because we might
-		 * get an interrupt between __kvm_riscv_switch_to() and
-		 * local_irq_enable() which can potentially change CSRs.
-		 */
-		trap.sepc = vcpu->arch.guest_context.sepc;
-		trap.scause = csr_read(CSR_SCAUSE);
-		trap.stval = csr_read(CSR_STVAL);
-		trap.htval = ncsr_read(CSR_HTVAL);
-		trap.htinst = ncsr_read(CSR_HTINST);
-
 		/* Syncup interrupts state with HW */
 		kvm_riscv_vcpu_sync_interrupts(vcpu);
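The comment added above carries the core of this patch: the trap registers must be snapshotted while interrupts are still disabled, because the first interrupt taken after local_irq_enable() in kvm_arch_vcpu_ioctl_run() rewrites SEPC, SCAUSE, and STVAL. The user-space sketch below models only that ordering argument with plain globals; struct cpu_trap, the helper names, and the values are illustrative stand-ins rather than kernel code.

/* User-space model of "capture trap state before re-enabling interrupts". */
#include <stdio.h>

struct cpu_trap {
        unsigned long sepc, scause, stval, htval, htinst;
};

/* Pretend CSR file, as left behind by a guest exit. */
static unsigned long csr_sepc   = 0x80004000UL;
static unsigned long csr_scause = 20;           /* guest instruction page fault */
static unsigned long csr_stval  = 0x1000UL;
static unsigned long csr_htval  = 0x400UL;
static unsigned long csr_htinst = 0;

static void vcpu_enter_exit(struct cpu_trap *trap)
{
        /* ...guest ran and trapped back to the host here... */

        /* Snapshot taken while interrupts are still disabled. */
        trap->sepc   = csr_sepc;
        trap->scause = csr_scause;
        trap->stval  = csr_stval;
        trap->htval  = csr_htval;
        trap->htinst = csr_htinst;
}

static void interrupt_after_irq_enable(void)
{
        /* A host interrupt overwrites the live trap registers. */
        csr_scause = 5;         /* timer cause, interrupt bit omitted */
        csr_stval  = 0;
}

int main(void)
{
        struct cpu_trap trap;

        vcpu_enter_exit(&trap);
        interrupt_after_irq_enable();

        printf("saved scause=%lu, live scause=%lu\n", trap.scause, csr_scause);
        return 0;
}

Swapping the two calls in main() would model the window the patch closes: the saved cause would then reflect the host interrupt instead of the guest exit.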
From patchwork Fri Jul 19 16:09:13 2024
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13737415
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 13/13] RISC-V: KVM: Use NACL HFENCEs for KVM request based HFENCEs
Date: Fri, 19 Jul 2024 21:39:13 +0530
Message-Id: <20240719160913.342027-14-apatel@ventanamicro.com>
In-Reply-To: <20240719160913.342027-1-apatel@ventanamicro.com>
References: <20240719160913.342027-1-apatel@ventanamicro.com>

When running under some other hypervisor, use SBI NACL based HFENCEs
for TLB shoot-down via KVM requests. This makes HFENCEs faster whenever
SBI nested acceleration is available.
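To make that concrete before the diff: every request handler in tlb.c now follows the same shape, a single availability check that either hands the fence to the NACL shared memory or falls back to the local HFENCE instruction, which traps to the outer hypervisor when running nested. The sketch below is an illustrative user-space rendering of that dispatch; nacl_available and the two hfence helpers are stand-in names, not the kernel's symbols.

/* Illustrative dispatch between NACL shared-memory and local HFENCEs. */
#include <stdbool.h>
#include <stdio.h>

static bool nacl_available;     /* discovered once at boot in the real code */

/* Stand-in: record the fence in shared memory for a later SYNC_* call. */
static void nacl_hfence_vvma_all(unsigned long vmid)
{
        printf("vmid %lu: HFENCE.VVMA recorded in NACL shared memory\n", vmid);
}

/* Stand-in: issue the fence locally; this traps when running nested. */
static void local_hfence_vvma_all(unsigned long vmid)
{
        printf("vmid %lu: HFENCE.VVMA issued locally\n", vmid);
}

static void hfence_vvma_all_process(unsigned long vmid)
{
        if (nacl_available)
                nacl_hfence_vvma_all(vmid);
        else
                local_hfence_vvma_all(vmid);
}

int main(void)
{
        nacl_available = true;
        hfence_vvma_all_process(3);     /* nested: cheap shared-memory update */

        nacl_available = false;
        hfence_vvma_all_process(3);     /* bare metal: direct HFENCE */
        return 0;
}

The speedup comes from the else branch being the expensive one under nesting: a local HFENCE traps out to the host hypervisor, while the NACL path is a handful of shared-memory stores handled at the next synchronization point.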
Signed-off-by: Anup Patel
---
 arch/riscv/kvm/tlb.c | 57 +++++++++++++++++++++++++++-----------------
 1 file changed, 40 insertions(+), 17 deletions(-)

diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 23c0e82b5103..2f91ea5f8493 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
@@ -186,18 +187,24 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vmid *vmid;
+	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+	unsigned long vmid = READ_ONCE(v->vmid);
 
-	vmid = &vcpu->kvm->arch.vmid;
-	kvm_riscv_local_hfence_gvma_vmid_all(READ_ONCE(vmid->vmid));
+	if (kvm_riscv_nacl_available())
+		nacl_hfence_gvma_vmid_all(nacl_shmem(), vmid);
+	else
+		kvm_riscv_local_hfence_gvma_vmid_all(vmid);
 }
 
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vmid *vmid;
+	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+	unsigned long vmid = READ_ONCE(v->vmid);
 
-	vmid = &vcpu->kvm->arch.vmid;
-	kvm_riscv_local_hfence_vvma_all(READ_ONCE(vmid->vmid));
+	if (kvm_riscv_nacl_available())
+		nacl_hfence_vvma_all(nacl_shmem(), vmid);
+	else
+		kvm_riscv_local_hfence_vvma_all(vmid);
 }
 
 static bool vcpu_hfence_dequeue(struct kvm_vcpu *vcpu,
@@ -251,6 +258,7 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu,
 
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 {
+	unsigned long vmid;
 	struct kvm_riscv_hfence d = { 0 };
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
 
@@ -259,26 +267,41 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 		case KVM_RISCV_HFENCE_UNKNOWN:
 			break;
 		case KVM_RISCV_HFENCE_GVMA_VMID_GPA:
-			kvm_riscv_local_hfence_gvma_vmid_gpa(
-						READ_ONCE(v->vmid),
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_gvma_vmid(nacl_shmem(), vmid,
+						      d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_gvma_vmid_gpa(vmid, d.addr,
								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			kvm_riscv_local_hfence_vvma_asid_gva(
-						READ_ONCE(v->vmid), d.asid,
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma_asid(nacl_shmem(), vmid, d.asid,
+						      d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_vvma_asid_gva(vmid, d.asid, d.addr,
								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_ALL:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			kvm_riscv_local_hfence_vvma_asid_all(
-						READ_ONCE(v->vmid), d.asid);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, d.asid);
+			else
+				kvm_riscv_local_hfence_vvma_asid_all(vmid, d.asid);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
-			kvm_riscv_local_hfence_vvma_gva(
-						READ_ONCE(v->vmid),
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma(nacl_shmem(), vmid,
+						 d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_vvma_gva(vmid, d.addr,
								d.size, d.order);
 			break;
 		default:
 			break;