From patchwork Thu Jun 18 12:25:23 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611949
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
 Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil,
 kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 01/15] arm64: kvm: Fix symbol dependency in
 __hyp_call_panic_nvhe
Date: Thu, 18 Jun 2020 13:25:23 +0100
Message-Id: <20200618122537.9625-2-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

__hyp_call_panic_nvhe contains inline assembly which did not declare
its dependency on the __hyp_panic_string symbol. The static-declared
string has previously been kept alive because of a use in
__hyp_call_panic_vhe. Fix this in preparation for separating the source
files between VHE and nVHE when the two users land in two different
compilation units. The static variable otherwise gets dropped when
compiling the nVHE source file, causing an undefined symbol linker
error later.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index db1c4487d95d..9270b14157b5 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -897,7 +897,7 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
	 * making sure it is a kernel address and not a PC-relative
	 * reference.
	 */
-	asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
+	asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string));

	__hyp_do_panic(str_va, spsr, elr,
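
Why the new constraint fixes the dependency: with only the "=r" output,
the symbol name is opaque text inside the assembler template, so the
compiler sees no use of __hyp_panic_string and may discard the
unreferenced static array. The "S" input constraint passes the symbol's
address as a real operand and records the dependency. A minimal
standalone sketch of the idiom, with a hypothetical my_string standing
in for __hyp_panic_string (not code from the patch):

    /* Sketch only: arm64 GCC/Clang inline asm, outside the kernel. */
    static const char my_string[] = "example string\n";

    const char *load_string_addr(void)
    {
            const char *va;

            /*
             * %1 prints as the symbol name; the "S" constraint makes the
             * reference visible to the compiler, and the literal-pool
             * load yields an absolute, non-PC-relative address.
             */
            asm volatile("ldr %0, =%1" : "=r" (va) : "S" (my_string));
            return va;
    }
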
From patchwork Thu Jun 18 12:25:24 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611957
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
 Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil,
 kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 02/15] arm64: kvm: Move __smccc_workaround_1_smc to .rodata
Date: Thu, 18 Jun 2020 13:25:24 +0100
Message-Id: <20200618122537.9625-3-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

This snippet of assembly is used by cpu_errata.c to overwrite parts of
the KVM hyp vector. Move it to its own source file and change its ELF
section to .rodata.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/Makefile    |  1 +
 arch/arm64/kvm/hyp/hyp-entry.S | 16 ----------------
 arch/arm64/kvm/hyp/smccc_wa.S  | 30 ++++++++++++++++++++++++++++++
 3 files changed, 31 insertions(+), 16 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/smccc_wa.S

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8c9880783839..5d8357ddc234 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -7,6 +7,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
	     $(DISABLE_STACKLEAK_PLUGIN)

 obj-$(CONFIG_KVM) += hyp.o
+obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o

 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 9c5cfb04170e..d362fad97cc8 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -318,20 +318,4 @@ SYM_CODE_START(__bp_harden_hyp_vecs)
 1:	.org __bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ
	.org 1b
 SYM_CODE_END(__bp_harden_hyp_vecs)
-
-	.popsection
-
-SYM_CODE_START(__smccc_workaround_1_smc)
-	esb
-	sub	sp, sp, #(8 * 4)
-	stp	x2, x3, [sp, #(8 * 0)]
-	stp	x0, x1, [sp, #(8 * 2)]
-	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
-	smc	#0
-	ldp	x2, x3, [sp, #(8 * 0)]
-	ldp	x0, x1, [sp, #(8 * 2)]
-	add	sp, sp, #(8 * 4)
-1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
-	.org 1b
-SYM_CODE_END(__smccc_workaround_1_smc)
 #endif

diff --git a/arch/arm64/kvm/hyp/smccc_wa.S b/arch/arm64/kvm/hyp/smccc_wa.S
new file mode 100644
index 000000000000..aa25b5428e77
--- /dev/null
+++ b/arch/arm64/kvm/hyp/smccc_wa.S
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015-2018 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+
+#include
+#include
+
+	/*
+	 * This is not executed directly and is instead copied into the
+	 * vectors by install_bp_hardening_cb().
+	 */
+	.data
+	.pushsection	.rodata
+	.global		__smccc_workaround_1_smc
+__smccc_workaround_1_smc:
+	esb
+	sub	sp, sp, #(8 * 4)
+	stp	x2, x3, [sp, #(8 * 0)]
+	stp	x0, x1, [sp, #(8 * 2)]
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
+	smc	#0
+	ldp	x2, x3, [sp, #(8 * 0)]
+	ldp	x0, x1, [sp, #(8 * 2)]
+	add	sp, sp, #(8 * 4)
+1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
+	.org 1b
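
Context on why a non-executable .rodata section suffices here: the
sequence is never executed in place. cpu_errata.c copies it into an
executable slot of the hyp vectors at runtime; patch 05 below touches
the copier, __copy_hyp_vect_bpi(). A simplified sketch of that install
step, modelled on the hunk shown in patch 05 (the wrapper name here is
illustrative, and cache maintenance is elided):

    /*
     * Sketch: write a template sequence into the head of each
     * 0x80-byte vector of a 2KiB slot, through the linear alias since
     * the vectors themselves may not be writable at their exec mapping.
     * The real code must also clean the D-cache and invalidate the
     * I-cache for dst before the new instructions can run.
     */
    static void copy_hyp_vect_template(int slot, const char *tmpl_start,
                                       const char *tmpl_end)
    {
            void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
            int i;

            for (i = 0; i < SZ_2K; i += 0x80)
                    memcpy(dst + i, tmpl_start, tmpl_end - tmpl_start);
    }
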
From patchwork Thu Jun 18 12:25:25 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611953
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
 Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil,
 kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 03/15] arm64: kvm: Add build rules for separate nVHE
 object files
Date: Thu, 18 Jun 2020 13:25:25 +0100
Message-Id: <20200618122537.9625-4-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

Add new folder arch/arm64/kvm/hyp/nvhe and a Makefile for building code
that runs in EL2 under nVHE KVM.

Compile each source file into a `.hyp.tmp.o` object first, then prefix
all its symbols with "__kvm_nvhe_" using `objcopy` and produce a
`.hyp.o`. The suffixes were chosen so that it would be possible for VHE
and nVHE to share some source files but compile them with different
CFLAGS. nVHE build rules add -D__KVM_NVHE_HYPERVISOR__.

The nVHE ELF symbol prefix is added to kallsyms.c as ignored; EL2-only
symbols will never appear in EL1 stack traces.

Signed-off-by: David Brazdil
---
 arch/arm64/kernel/image-vars.h   | 12 +++++++++++
 arch/arm64/kvm/hyp/Makefile      |  2 +-
 arch/arm64/kvm/hyp/nvhe/Makefile | 35 ++++++++++++++++++++++++++++++++
 scripts/kallsyms.c               |  1 +
 4 files changed, 49 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/Makefile

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index be0a63ffed23..f32b406e90c0 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -51,4 +51,16 @@ __efistub__ctype		= _ctype;

 #endif

+#ifdef CONFIG_KVM
+
+/*
+ * KVM nVHE code has its own symbol namespace prefixed by __kvm_nvhe_, to
+ * isolate it from the kernel proper. The following symbols are legally
+ * accessed by it, therefore provide aliases to make them linkable.
+ * Do not include symbols which may not be safely accessed under hypervisor
+ * memory mappings.
+ */
+
+#endif /* CONFIG_KVM */
+
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 5d8357ddc234..5f4f217532e0 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -6,7 +6,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
	     $(DISABLE_STACKLEAK_PLUGIN)

-obj-$(CONFIG_KVM) += hyp.o
+obj-$(CONFIG_KVM) += hyp.o nvhe/
 obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o

 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
new file mode 100644
index 000000000000..7d64235dba62
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Kernel-based Virtual Machine module, HYP/nVHE part
+#
+
+asflags-y := -D__KVM_NVHE_HYPERVISOR__
+ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
+	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)
+
+obj-y :=
+
+obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
+extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
+
+$(obj)/%.hyp.tmp.o: $(src)/%.c FORCE
+	$(call if_changed_rule,cc_o_c)
+$(obj)/%.hyp.tmp.o: $(src)/%.S FORCE
+	$(call if_changed_rule,as_o_S)
+$(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE
+	$(call if_changed,hypcopy)
+
+quiet_cmd_hypcopy = HYPCOPY $@
+      cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@
+
+# KVM nVHE code is run at a different exception code with a different map, so
+# compiler instrumentation that inserts callbacks or checks into the code may
+# cause crashes. Just disable it.
+GCOV_PROFILE	:= n
+KASAN_SANITIZE	:= n
+UBSAN_SANITIZE	:= n
+KCOV_INSTRUMENT	:= n
+
+# Skip objtool checking for this directory because nVHE code is compiled with
+# non-standard build rules.
+OBJECT_FILES_NON_STANDARD := y

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 6dc3078649fa..0096cd965332 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -109,6 +109,7 @@ static bool is_ignored_symbol(const char *name, char type)
		".LASANPC",		/* s390 kasan local symbols */
		"__crc_",		/* modversions */
		"__efistub_",		/* arm64 EFI stub namespace */
+		"__kvm_nvhe_",		/* arm64 non-VHE KVM namespace */
		NULL
	};
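
To make the kallsyms change concrete: the filter is a plain prefix
match, so every symbol renamed by the HYPCOPY step above is excluded
from the kallsyms table. A condensed standalone sketch of the mechanism
(not the kernel's full ignore list):

    #include <stdbool.h>
    #include <string.h>

    /*
     * Symbols starting with any of these prefixes are dropped from
     * kallsyms, so __kvm_nvhe_* names never appear in EL1 stack traces.
     */
    static const char * const ignored_prefixes[] = {
            "__efistub_",   /* arm64 EFI stub namespace */
            "__kvm_nvhe_",  /* arm64 non-VHE KVM namespace */
            NULL
    };

    static bool is_ignored_symbol(const char *name)
    {
            const char * const *p;

            for (p = ignored_prefixes; *p; p++)
                    if (!strncmp(name, *p, strlen(*p)))
                            return true;
            return false;
    }
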
From patchwork Thu Jun 18 12:25:26 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611955
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
 Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, Andrew Scull,
 David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 04/15] arm64: kvm: Handle calls to prefixed hyp functions
Date: Thu, 18 Jun 2020 13:25:26 +0100
Message-Id: <20200618122537.9625-5-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

From: Andrew Scull

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.

Once hyp functions are moved to a hyp object, they will have prefixed
symbols. This change declares and gets the address of the prefixed
version for calls to the hyp functions.

To aid migration, the hyp functions that have not yet moved have their
prefixed versions aliased to their non-prefixed version. This begins
with all the hyp functions being listed and will reduce to none of them
once the migration is complete.

Signed-off-by: Andrew Scull

Extracted kvm_call_hyp nVHE branches into own helper macros.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h  | 19 +++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h | 19 ++++++++++++++++---
 arch/arm64/kernel/image-vars.h    | 15 +++++++++++++++
 3 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 352aaebf4198..6a682d66a640 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -42,6 +42,24 @@

 #include

+/*
+ * Translate name of a symbol defined in nVHE hyp to the name seen
+ * by kernel proper. All nVHE symbols are prefixed by the build system
+ * to avoid clashes with the VHE variants.
+ */
+#define kvm_nvhe_sym(sym)	__kvm_nvhe_##sym
+
+#define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
+#define DECLARE_KVM_NVHE_SYM(sym)	extern char kvm_nvhe_sym(sym)[]
+
+/*
+ * Define a pair of symbols sharing the same name but one defined in
+ * VHE and the other in nVHE hyp implementations.
+ */
+#define DECLARE_KVM_HYP_SYM(sym)		\
+	DECLARE_KVM_VHE_SYM(sym);		\
+	DECLARE_KVM_NVHE_SYM(sym)
+
 /* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
	({								\
@@ -50,6 +68,7 @@
		val = lm_alias(&sym);					\
	 val;								\
 })
+#define kvm_ksym_ref_nvhe(sym)	kvm_ksym_ref(kvm_nvhe_sym(sym))

 struct kvm;
 struct kvm_vcpu;

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c3e6fcc664b1..e782f98243d3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -448,6 +448,20 @@ void kvm_arm_resume_guest(struct kvm *kvm);

 u64 __kvm_call_hyp(void *hypfn, ...);

+#define kvm_call_hyp_nvhe(f, ...)					\
+	do {								\
+		DECLARE_KVM_NVHE_SYM(f);				\
+		__kvm_call_hyp(kvm_ksym_ref_nvhe(f), ##__VA_ARGS__);	\
+	} while(0)
+
+#define kvm_call_hyp_nvhe_ret(f, ...)					\
+	({								\
+		DECLARE_KVM_NVHE_SYM(f);				\
+		typeof(f(__VA_ARGS__)) ret;				\
+		ret = __kvm_call_hyp(kvm_ksym_ref_nvhe(f),		\
+				     ##__VA_ARGS__);			\
+	})
+
 /*
  * The couple of isb() below are there to guarantee the same behaviour
  * on VHE as on !VHE, where the eret to EL1 acts as a context
@@ -459,7 +473,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
			f(__VA_ARGS__);					\
			isb();						\
		} else {						\
-			__kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__); \
+			kvm_call_hyp_nvhe(f, ##__VA_ARGS__);		\
		}							\
	} while(0)

@@ -471,8 +485,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
			ret = f(__VA_ARGS__);				\
			isb();						\
		} else {						\
-			ret = __kvm_call_hyp(kvm_ksym_ref(f),		\
-					     ##__VA_ARGS__);		\
+			ret = kvm_call_hyp_nvhe_ret(f, ##__VA_ARGS__);	\
		}							\
									\
		ret;							\

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index f32b406e90c0..89affa38b143 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -61,6 +61,21 @@ __efistub__ctype		= _ctype;
  * memory mappings.
  */

+__kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs;
+__kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context;
+__kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2;
+__kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff;
+__kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid;
+__kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid;
+__kvm_nvhe___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa;
+__kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe;
+__kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2;
+__kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs;
+__kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr;
+__kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs;
+__kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs;
+__kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
+
 #endif /* CONFIG_KVM */

 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
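
A usage sketch of the new helpers (the wrapper function is hypothetical;
__kvm_flush_vm_context is one of the real hyp functions aliased above):
kernel proper never spells out the __kvm_nvhe_ prefix by hand.

    static void flush_vm_context_example(void)
    {
            /*
             * Expands roughly to:
             *   extern char __kvm_nvhe___kvm_flush_vm_context[];
             *   __kvm_call_hyp(kvm_ksym_ref(__kvm_nvhe___kvm_flush_vm_context));
             * i.e. declare the prefixed symbol, translate it to its
             * linear-map address, and hand it to the HVC trampoline.
             */
            kvm_call_hyp_nvhe(__kvm_flush_vm_context);
    }
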
From patchwork Thu Jun 18 12:25:27 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611961
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
 Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil,
 kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 05/15] arm64: kvm: Build hyp-entry.S separately for
 VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:27 +0100
Message-Id: <20200618122537.9625-6-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.

hyp-entry.S contains the implementation of the KVM hyp vectors. This
code is mostly shared between VHE/nVHE, therefore compile it under both
VHE and nVHE build rules. The nVHE-specific host HVC handler is hidden
behind __KVM_NVHE_HYPERVISOR__. Adjust the code which selects which KVM
hyp vecs to install to choose the correct VHE/nVHE symbol.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h |  7 ++++++-
 arch/arm64/include/asm/kvm_mmu.h | 16 ++++++++++------
 arch/arm64/include/asm/mmu.h     |  7 -------
 arch/arm64/kernel/cpu_errata.c   |  4 +++-
 arch/arm64/kernel/image-vars.h   | 12 ++++++++++++
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 ++
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/va_layout.c       |  2 +-
 8 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6a682d66a640..2baa69324cc9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -76,7 +76,12 @@ struct kvm_vcpu;

 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];

-extern char __kvm_hyp_vector[];
+DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
+
+#ifdef CONFIG_KVM_INDIRECT_VECTORS
+DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
+extern atomic_t arm64_el2_vector_last_slot;
+#endif

 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b12bfc1f051a..5bfc7ee61997 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -461,11 +461,15 @@ extern int __kvm_harden_el2_vector_slot;
 static inline void *kvm_get_hyp_vector(void)
 {
	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
-	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
	int slot = -1;
+	void *vect = kern_hyp_va(has_vhe()
+		? kvm_ksym_ref(__kvm_hyp_vector)
+		: kvm_ksym_ref_nvhe(__kvm_hyp_vector));

	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
-		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+		vect = kern_hyp_va(has_vhe()
+			? kvm_ksym_ref(__bp_harden_hyp_vecs)
+			: kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
		slot = data->hyp_vectors_slot;
	}

@@ -494,12 +498,11 @@ static inline int kvm_map_vectors(void)
	 *  HBP +  HEL2 -> use hardened vertors and use exec mapping
	 */
	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
-		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
-		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
+		__kvm_bp_vect_base = kern_hyp_va(kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
	}

	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
-		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
+		phys_addr_t vect_pa = __pa_symbol(kvm_nvhe_sym(__bp_harden_hyp_vecs));
		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;

		/*
@@ -518,7 +521,8 @@ static inline int kvm_map_vectors(void)
 #else
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
+	return kern_hyp_va(has_vhe() ? kvm_ksym_ref(__kvm_hyp_vector)
+				     : kvm_ksym_ref_nvhe(__kvm_hyp_vector));
 }

 static inline int kvm_map_vectors(void)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6..4d913f6dd366 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -42,13 +42,6 @@ struct bp_hardening_data {
	bp_hardening_cb_t	fn;
 };

-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
-
-extern char __bp_harden_hyp_vecs[];
-extern atomic_t arm64_el2_vector_last_slot;
-#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
-
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ad06d6802d2e..318b76a62c56 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -117,7 +117,9 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
				const char *hyp_vecs_end)
 {
-	void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
+	char *vec = has_vhe() ? __bp_harden_hyp_vecs
+			      : kvm_nvhe_sym(__bp_harden_hyp_vecs);
+	void *dst = lm_alias(vec + slot * SZ_2K);
	int i;

	for (i = 0; i < SZ_2K; i += 0x80)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 89affa38b143..dc7ee85531f5 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -61,9 +61,11 @@ __efistub__ctype		= _ctype;
  * memory mappings.
  */

+__kvm_nvhe___guest_exit = __guest_exit;
 __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs;
 __kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context;
 __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2;
+__kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc;
 __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff;
 __kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid;
 __kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid;
@@ -75,6 +77,16 @@ __kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr;
 __kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs;
 __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs;
 __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
+__kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end;
+__kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start;
+__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
+__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
+__kvm_nvhe_hyp_panic = hyp_panic;
+__kvm_nvhe_kimage_voffset = kimage_voffset;
+__kvm_nvhe_kvm_host_data = kvm_host_data;
+__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch;
+__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask;
+__kvm_nvhe_panic = panic;

 #endif /* CONFIG_KVM */

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index d362fad97cc8..7e3c72fa634f 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -40,6 +40,7 @@ el1_sync:				// Guest trapped into EL2
	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
	b.ne	el1_trap

+#ifdef __KVM_NVHE_HYPERVISOR__
	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
	cbnz	x1, el1_hvc_guest	// called HVC

@@ -74,6 +75,7 @@ el1_sync:				// Guest trapped into EL2
	eret
	sb
+#endif /* __KVM_NVHE_HYPERVISOR__ */

 el1_hvc_guest:
	/*

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 7d64235dba62..c68801e24950 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)

-obj-y :=
+obj-y := ../hyp-entry.o

 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))

diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index a4f48c1ac28c..157d106235f7 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -150,7 +150,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
	/*
	 * Compute HYP VA by using the same computation as kern_hyp_va()
	 */
-	addr = (uintptr_t)kvm_ksym_ref(__kvm_hyp_vector);
+	addr = (uintptr_t)kvm_ksym_ref_nvhe(__kvm_hyp_vector);
	addr &= va_mask;
	addr |= tag_val << tag_lsb;
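
The same pattern extends to any source shared between the two builds:
compile the file once per Makefile and gate the variant-specific parts
on the new define, as hyp-entry.S now does for the host HVC handler. A
hedged sketch with hypothetical function names (not code from the
patch):

    /*
     * Compiled by both hyp/Makefile (VHE) and hyp/nvhe/Makefile (nVHE,
     * which adds -D__KVM_NVHE_HYPERVISOR__). shared_helper() exists in
     * both objects; handle_host_hvc() is emitted only into the nVHE
     * object, where HYPCOPY then renames it __kvm_nvhe_handle_host_hvc.
     */
    void shared_helper(void)
    {
            /* logic common to the VHE and nVHE hyp implementations */
    }

    #ifdef __KVM_NVHE_HYPERVISOR__
    void handle_host_hvc(void)
    {
            /* nVHE-only path: the host enters EL2 via HVC */
    }
    #endif
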
h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=SyJVc4e+GoUAEUncaRu7Abitq8XUwuB4SemkCGVL7hU=; b=j6rqhLs+9os6IUKzcc97SvmbzHvnm7iqekCNBV594Zz30SPAzzCrDgd66fUViZUI2O K8z74PXDmGJBAuIgnxkGg4mCcAmULENp3iY4u++w/w6LgKk9ZACdhRBzvItmpMrPQKF+ R+/nF1ktCKfoT7htrA46PeHuWXPobMcsq7yu1TnjnM8f2rI+Qasb3LcxvkSwHbwQTsn3 gKyDSsnPYr2uut1d01KbynGrj2nuWCBCFM/pK3RcJZxQv16IA4IQoySJVSG9PXxkksHe QbccuKwjmDkQcSY+6ZwqLpcDDD5yAsnzn6vXNg1M0Ch0YZ/IcuFacuxZ02lQ4Sw+QAwR 4gSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SyJVc4e+GoUAEUncaRu7Abitq8XUwuB4SemkCGVL7hU=; b=gYo6SZ6D63zO3MpdOO0FBGmTHEJ9mQurm4p8PL6Ox8QWa9ZppZ24/DOuQqUIJTRHdJ JYOn5oF7I8SpmpdwOqtHU7CtTQj3no8X+CQzZOuBfuB/rII+aAvRyQiCcX8BHZTQvxjK cdQS02NeoFtLloMw8It9idvqa6si8pQfc8IiOhrdpnFe+cisnb4UVxJsOC7xZN9nqvKz QdjLTdwV8KLCyhR8Ac/5JkxHRQddTcexn3WLM4M9cLb9jlZdcCSjAl+zzCmo8pY+McBd N4Jxi/XdXlq0xdleu8MVTqNHC7WQnEmZ0jf/ph1xm+991Av82gO0j1tO4kvM+MPEj7Zk M6wQ== X-Gm-Message-State: AOAM531g52QfU35hTIoq46pTfXbNAIu+vjkIQO9UWSySr0vL8YGGVMlN L7pHHBD0r0Xv+g0lchFiZe7bt80dsxFhRG8T X-Google-Smtp-Source: ABdhPJxpr4HRSO92YDjrRYPnbc5L3TTKiVCvuPbkGpjbaCdfcJuGc2VlZl8Wnwp1JTVFdgXjnGyXRQ== X-Received: by 2002:a1c:1fc2:: with SMTP id f185mr3791350wmf.4.1592483159389; Thu, 18 Jun 2020 05:25:59 -0700 (PDT) Received: from localhost ([2a01:4b00:8523:2d03:c1af:c724:158a:e200]) by smtp.gmail.com with ESMTPSA id q13sm3400341wrn.84.2020.06.18.05.25.58 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 18 Jun 2020 05:25:58 -0700 (PDT) From: David Brazdil To: Marc Zyngier , Will Deacon , Catalin Marinas , James Morse , Julien Thierry , Suzuki K Poulose Subject: [PATCH v3 06/15] arm64: kvm: Move hyp-init.S to nVHE Date: Thu, 18 Jun 2020 13:25:28 +0100 Message-Id: <20200618122537.9625-7-dbrazdil@google.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com> References: <20200618122537.9625-1-dbrazdil@google.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20200618_052600_856869_DAC33698 X-CRM114-Status: GOOD ( 13.64 ) X-Spam-Score: -15.7 (---------------) X-Spam-Report: SpamAssassin version 3.4.4 on bombadil.infradead.org summary: Content analysis details: (-15.7 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2a00:1450:4864:20:0:0:0:341 listed in] [list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -7.5 USER_IN_DEF_SPF_WL From: address is in the default SPF white-list -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM white-list -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature -0.5 ENV_AND_HDR_SPF_MATCH Env and Hdr From used in default SPF WL Match -0.0 DKIMWL_WL_MED DKIMwl.org - Medium sender X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: 
android-kvm@google.com, linux-kernel@vger.kernel.org, Andrew Scull, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

From: Andrew Scull

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

hyp-init.S contains the identity-mapped initialisation code for the non-VHE code that runs at EL2, and is only used on non-VHE systems. Adjust the code that calls into it to use the prefixed symbol name.

Signed-off-by: Andrew Scull
---
arch/arm64/include/asm/kvm_asm.h | 4 +--- arch/arm64/kernel/image-vars.h | 3 ++- arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/{ => hyp/nvhe}/hyp-init.S | 0 arch/arm64/kvm/mmu.c | 2 +- 6 files changed, 6 insertions(+), 7 deletions(-) rename arch/arm64/kvm/{ => hyp/nvhe}/hyp-init.S (100%) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 2baa69324cc9..bab14b64c4fc 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -73,9 +73,7 @@ struct kvm; struct kvm_vcpu; -extern char __kvm_hyp_init[]; -extern char __kvm_hyp_init_end[]; - +DECLARE_KVM_NVHE_SYM(__kvm_hyp_init); DECLARE_KVM_HYP_SYM(__kvm_hyp_vector); #ifdef CONFIG_KVM_INDIRECT_VECTORS diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index dc7ee85531f5..4dc969ccda9e 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -62,10 +62,10 @@ __efistub__ctype = _ctype; */ __kvm_nvhe___guest_exit = __guest_exit; +__kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; __kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context; __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; -__kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; __kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid; __kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid; @@ -82,6 +82,7 @@ __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; __kvm_nvhe_hyp_panic = hyp_panic; +__kvm_nvhe_idmap_t0sz = idmap_t0sz; __kvm_nvhe_kimage_voffset = kimage_voffset; __kvm_nvhe_kvm_host_data = kvm_host_data; __kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 8d3d9513cbfe..152d8845a1a2 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \ $(KVM)/vfio.o $(KVM)/irqchip.o \ arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \ - inject_fault.o regmap.o va_layout.o hyp.o hyp-init.o handle_exit.o \ + inject_fault.o regmap.o va_layout.o hyp.o handle_exit.o \ guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o \ vgic-sys-reg-v3.o fpsimd.o pmu.o \ aarch32.o arch_timer.o \ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index c68801e24950..fef6f1881765 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING
$(DISABLE_STACKLEAK_PLUGIN) -obj-y := ../hyp-entry.o +obj-y := hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S similarity index 100% rename from arch/arm64/kvm/hyp-init.S rename to arch/arm64/kvm/hyp/nvhe/hyp-init.S diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 8c0035cab6b6..592afe5e7003 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -2346,7 +2346,7 @@ int kvm_mmu_init(void) hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE); hyp_idmap_end = __pa_symbol(__hyp_idmap_text_end); hyp_idmap_end = ALIGN(hyp_idmap_end, PAGE_SIZE); - hyp_idmap_vector = __pa_symbol(__kvm_hyp_init); + hyp_idmap_vector = __pa_symbol(kvm_nvhe_sym(__kvm_hyp_init)); /* * We rely on the linker script to ensure at build time that the HYP
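DECLARE_KVM_NVHE_SYM() and kvm_nvhe_sym(), used in the hunks above, implement the __kvm_nvhe_ prefixing scheme that the image-vars.h aliases rely on. A minimal sketch of how such a macro pair can look (the kernel's actual definitions live in kvm_asm.h and may differ in detail; this is shown only to make the mechanism concrete):

/* Sketch: paste the nVHE prefix onto a symbol name... */
#define kvm_nvhe_sym(sym)          __kvm_nvhe_##sym
/* ...and declare the hyp copy of that symbol for kernel code to reference. */
#define DECLARE_KVM_NVHE_SYM(sym)  extern char kvm_nvhe_sym(sym)[]

/* Usage, as in the mmu.c hunk above: */
DECLARE_KVM_NVHE_SYM(__kvm_hyp_init);
/* hyp_idmap_vector = __pa_symbol(kvm_nvhe_sym(__kvm_hyp_init)); */

The image-vars.h entries of the form __kvm_nvhe_foo = foo then make the prefixed names resolve to the kernel's own symbols at link time.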
From patchwork Thu Jun 18 12:25:29 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611963
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Subject: [PATCH v3 07/15] arm64: kvm: Split hyp/tlb.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:29 +0100
Message-Id: <20200618122537.9625-8-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>
Cc:
android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

tlb.c contains the TLB-flushing code, with parts shared between VHE and nVHE. The common routines are moved into a new header file, tlb.h; VHE-specific code remains in tlb.c and nVHE-specific code moves to nvhe/tlb.c. The header expects its users to implement the two helper functions declared at the top of the file.

Signed-off-by: David Brazdil
---
arch/arm64/kernel/image-vars.h | 8 +- arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 70 +++++++++++++ arch/arm64/kvm/hyp/tlb.c | 171 +++---------------------------- arch/arm64/kvm/hyp/tlb.h | 134 ++++++++++++++++++++++++ 5 files changed, 222 insertions(+), 163 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/tlb.c create mode 100644 arch/arm64/kvm/hyp/tlb.h diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 4dc969ccda9e..e8a8aa6bc7bd 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -63,13 +63,10 @@ __efistub__ctype = _ctype; __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors; +__kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; -__kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context; __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid; -__kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid; -__kvm_nvhe___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa; __kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; __kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs; @@ -79,8 +76,11 @@ __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs; __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end; __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; +__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready; __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; +__kvm_nvhe_cpu_hwcaps = cpu_hwcaps; __kvm_nvhe_hyp_panic = hyp_panic; __kvm_nvhe_idmap_t0sz = idmap_t0sz; __kvm_nvhe_kimage_voffset = kimage_voffset; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index fef6f1881765..3bfc51de1679 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := hyp-init.o ../hyp-entry.o +obj-y := tlb.o hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c new file mode 100644 index 000000000000..111c4b0a23d3 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -0,0 +1,70 @@ +//
SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include + +#include +#include +#include + +#include "../tlb.h" + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { + u64 val; + + /* + * For CPUs that are affected by ARM 1319367, we need to + * avoid a host Stage-1 walk while we have the guest's + * VMID set in the VTTBR in order to invalidate TLBs. + * We're guaranteed that the S1 MMU is enabled, so we can + * simply set the EPD bits to avoid any further TLB fill. + */ + val = cxt->tcr = read_sysreg_el1(SYS_TCR); + val |= TCR_EPD1_MASK | TCR_EPD0_MASK; + write_sysreg_el1(val, SYS_TCR); + isb(); + } + + /* __load_guest_stage2() includes an ISB for the workaround. */ + __load_guest_stage2(kvm); + asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT)); +} + +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + write_sysreg(0, vttbr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { + /* Ensure write of the host VMID */ + isb(); + /* Restore the host's TCR_EL1 */ + write_sysreg_el1(cxt->tcr, SYS_TCR); + } +} + +void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + __tlb_flush_vmid_ipa(kvm, ipa); +} + +void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +{ + __tlb_flush_vmid(kvm); +} + +void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + __tlb_flush_local_vmid(vcpu); +} + +void __hyp_text __kvm_flush_vm_context(void) +{ + __tlb_flush_vm_context(); +} diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index d063a576d511..4e190f8c7e9c 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -10,14 +10,10 @@ #include #include -struct tlb_inv_context { - unsigned long flags; - u64 tcr; - u64 sctlr; -}; +#include "tlb.h" -static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) { u64 val; @@ -60,41 +56,8 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - u64 val; - - /* - * For CPUs that are affected by ARM 1319367, we need to - * avoid a host Stage-1 walk while we have the guest's - * VMID set in the VTTBR in order to invalidate TLBs. - * We're guaranteed that the S1 MMU is enabled, so we can - * simply set the EPD bits to avoid any further TLB fill. - */ - val = cxt->tcr = read_sysreg_el1(SYS_TCR); - val |= TCR_EPD1_MASK | TCR_EPD0_MASK; - write_sysreg_el1(val, SYS_TCR); - isb(); - } - - /* __load_guest_stage2() includes an ISB for the workaround. 
*/ - __load_guest_stage2(kvm); - asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT)); -} - -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (has_vhe()) - __tlb_switch_to_guest_vhe(kvm, cxt); - else - __tlb_switch_to_guest_nvhe(kvm, cxt); -} - -static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's @@ -113,130 +76,22 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, local_irq_restore(cxt->flags); } -static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - write_sysreg(0, vttbr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - /* Ensure write of the host VMID */ - isb(); - /* Restore the host's TCR_EL1 */ - write_sysreg_el1(cxt->tcr, SYS_TCR); - } -} - -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (has_vhe()) - __tlb_switch_to_host_vhe(kvm, cxt); - else - __tlb_switch_to_host_nvhe(kvm, cxt); -} - -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - /* - * We could do so much better if we had the VA as well. - * Instead, we invalidate Stage-2 for this IPA, and the - * whole of Stage-1. Weep... - */ - ipa >>= 12; - __tlbi(ipas2e1is, ipa); - - /* - * We have to ensure completion of the invalidation at Stage-2, - * since a table walk on another CPU could refill a TLB with a - * complete (S1 + S2) walk based on the old Stage-2 mapping if - * the Stage-1 invalidation happened first. - */ - dsb(ish); - __tlbi(vmalle1is); - dsb(ish); - isb(); - - /* - * If the host is running at EL1 and we have a VPIPT I-cache, - * then we must perform I-cache maintenance at EL2 in order for - * it to have an effect on the guest. Since the guest cannot hit - * I-cache lines allocated with a different VMID, we don't need - * to worry about junk out of guest reset (we nuke the I-cache on - * VMID rollover), but we do need to be careful when remapping - * executable pages for the same guest. This can happen when KSM - * takes a CoW fault on an executable page, copies the page into - * a page that was previously mapped in the guest and then needs - * to invalidate the guest view of the I-cache for that page - * from EL1. To solve this, we invalidate the entire I-cache when - * unmapping a page from a guest if we have a VPIPT I-cache but - * the host is running at EL1. As above, we could do better if - * we had the VA. - * - * The moral of this story is: if you have a VPIPT I-cache, then - * you should be running with VHE enabled. 
- */ - if (!has_vhe() && icache_is_vpipt()) - __flush_icache_all(); - - __tlb_switch_to_host(kvm, &cxt); + __tlb_flush_vmid_ipa(kvm, ipa); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalls12e1is); - dsb(ish); - isb(); - - __tlb_switch_to_host(kvm, &cxt); + __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { - struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); - struct tlb_inv_context cxt; - - /* Switch to requested VMID */ - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalle1); - dsb(nsh); - isb(); - - __tlb_switch_to_host(kvm, &cxt); + __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_flush_vm_context(void) +void __kvm_flush_vm_context(void) { - dsb(ishst); - __tlbi(alle1is); - - /* - * VIPT and PIPT caches are not affected by VMID, so no maintenance - * is necessary across a VMID rollover. - * - * VPIPT caches constrain lookup and maintenance to the active VMID, - * so we need to invalidate lines with a stale VMID to avoid an ABA - * race after multiple rollovers. - * - */ - if (icache_is_vpipt()) - asm volatile("ic ialluis"); - - dsb(ish); + __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h new file mode 100644 index 000000000000..841ef400c8ec --- /dev/null +++ b/arch/arm64/kvm/hyp/tlb.h @@ -0,0 +1,134 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_TLB_H__ +#define __ARM64_KVM_HYP_TLB_H__ + +#include + +#include +#include +#include + +struct tlb_inv_context { + unsigned long flags; + u64 tcr; + u64 sctlr; +}; + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt); +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt); + +static inline void __hyp_text +__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + /* + * We could do so much better if we had the VA as well. + * Instead, we invalidate Stage-2 for this IPA, and the + * whole of Stage-1. Weep... + */ + ipa >>= 12; + __tlbi(ipas2e1is, ipa); + + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. + */ + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + /* + * If the host is running at EL1 and we have a VPIPT I-cache, + * then we must perform I-cache maintenance at EL2 in order for + * it to have an effect on the guest. Since the guest cannot hit + * I-cache lines allocated with a different VMID, we don't need + * to worry about junk out of guest reset (we nuke the I-cache on + * VMID rollover), but we do need to be careful when remapping + * executable pages for the same guest. This can happen when KSM + * takes a CoW fault on an executable page, copies the page into + * a page that was previously mapped in the guest and then needs + * to invalidate the guest view of the I-cache for that page + * from EL1. 
To solve this, we invalidate the entire I-cache when + * unmapping a page from a guest if we have a VPIPT I-cache but + * the host is running at EL1. As above, we could do better if + * we had the VA. + * + * The moral of this story is: if you have a VPIPT I-cache, then + * you should be running with VHE enabled. + */ + if (!has_vhe() && icache_is_vpipt()) + __flush_icache_all(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalls12e1is); + dsb(ish); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); + struct tlb_inv_context cxt; + + /* Switch to requested VMID */ + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalle1); + dsb(nsh); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vm_context(void) +{ + dsb(ishst); + __tlbi(alle1is); + + /* + * VIPT and PIPT caches are not affected by VMID, so no maintenance + * is necessary across a VMID rollover. + * + * VPIPT caches constrain lookup and maintenance to the active VMID, + * so we need to invalidate lines with a stale VMID to avoid an ABA + * race after multiple rollovers. + * + */ + if (icache_is_vpipt()) + asm volatile("ic ialluis"); + + dsb(ish); +} + +#endif /* __ARM64_KVM_HYP_TLB_H__ */
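Note the structure of tlb.h above: it declares __tlb_switch_to_guest()/__tlb_switch_to_host() as static functions and writes the inline flush helpers in terms of them, so every .c file that includes it must supply its own pair of definitions. Reduced to a toy example (all names below are invented for illustration):

/* shared.h -- toy version of the tlb.h pattern */
static void backend_enter(void);	/* each includer must define these */
static void backend_exit(void);

static inline void do_flush(void)
{
	backend_enter();
	/* ...common flush logic shared by every includer... */
	backend_exit();
}

/* vhe_user.c */
#include "shared.h"
static void backend_enter(void) { /* VHE-flavoured context switch */ }
static void backend_exit(void) { /* VHE-flavoured restore */ }

This keeps a single source copy of the flush logic while letting each compilation unit (VHE vs nVHE) bind it to its own enter/exit sequence.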
From patchwork Thu Jun 18 12:25:30 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611969
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Subject: [PATCH v3 08/15] arm64: kvm: Split hyp/switch.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:30 +0100
Message-Id: <20200618122537.9625-9-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>
Cc:
android-kvm@google.com, linux-kernel@vger.kernel.org, Andrew Scull, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

switch.c implements context switching for KVM, with large parts shared between VHE and nVHE. The common routines are moved into a new header file, switch.h; VHE-specific code remains in switch.c and nVHE-specific code moves to nvhe/switch.c.

Previously __kvm_vcpu_run needed a different symbol name for VHE and nVHE. The name is now unified and the caller in arm.c simplified accordingly.

Signed-off-by: David Brazdil
Signed-off-by: Andrew Scull
Reported-by: kernel test robot
Reported-by: kernel test robot
---
arch/arm64/include/asm/kvm_asm.h | 4 +- arch/arm64/include/asm/kvm_hyp.h | 5 + arch/arm64/kernel/image-vars.h | 31 +- arch/arm64/kvm/arm.c | 6 +- arch/arm64/kvm/hyp/hyp-entry.S | 2 + arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/switch.c | 271 +++++++++++ arch/arm64/kvm/hyp/switch.c | 749 +------------------------------ arch/arm64/kvm/hyp/switch.h | 507 +++++++++++++++++++++ arch/arm64/kvm/hyp/sysreg-sr.c | 4 +- 10 files changed, 835 insertions(+), 746 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/switch.c create mode 100644 arch/arm64/kvm/hyp/switch.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index bab14b64c4fc..42bd10b53b5f 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -88,9 +88,7 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu); extern void __kvm_timer_set_cntvoff(u64 cntvoff); -extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu); - -extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu); +extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); extern void __kvm_enable_ssbs(void); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index ce3080834bfa..1cb5903a2693 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -81,11 +81,16 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu); void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); +#ifndef __KVM_NVHE_HYPERVISOR__ void activate_traps_vhe_load(struct kvm_vcpu *vcpu); void deactivate_traps_vhe_put(void); +#endif u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); + +#ifdef __KVM_NVHE_HYPERVISOR__ void __noreturn __hyp_do_panic(unsigned long, ...); +#endif #endif /* __ARM64_KVM_HYP_H__ */
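The arm.c hunk below collapses the VHE/nVHE branch into a single kvm_call_hyp_ret(__kvm_vcpu_run, vcpu) call; this works because the hyp-call helper itself dispatches on has_vhe(). A simplified model of that dispatch (not the kernel's exact definition; __toy_call_hyp_hvc is a hypothetical stand-in for the real HVC trampoline):

/* Hypothetical stand-in for the HVC trampoline into EL2. */
extern unsigned long __toy_call_hyp_hvc(unsigned long fn, ...);

#define toy_kvm_call_hyp_ret(f, ...)					\
({									\
	typeof(f(__VA_ARGS__)) ret;					\
									\
	if (has_vhe()) {						\
		/* VHE: the kernel already runs at EL2, call directly */\
		ret = f(__VA_ARGS__);					\
	} else {							\
		/* nVHE: trap into EL2 via HVC on the hyp vector */	\
		ret = __toy_call_hyp_hvc((unsigned long)f,		\
					 ##__VA_ARGS__);		\
	}								\
	ret;								\
})

diff --git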
a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index e8a8aa6bc7bd..855f9718d6d9 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,18 +61,35 @@ __efistub__ctype = _ctype; * memory mappings. */ +__kvm_nvhe___debug_switch_to_guest = __debug_switch_to_guest; +__kvm_nvhe___debug_switch_to_host = __debug_switch_to_host; +__kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state; +__kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; +__kvm_nvhe___guest_enter = __guest_enter; __kvm_nvhe___guest_exit = __guest_exit; +__kvm_nvhe___hyp_panic_string = __hyp_panic_string; __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; +__kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; +__kvm_nvhe___sysreg32_save_state = __sysreg32_save_state; +__kvm_nvhe___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; +__kvm_nvhe___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; +__kvm_nvhe___timer_disable_traps = __timer_disable_traps; +__kvm_nvhe___timer_enable_traps = __timer_enable_traps; +__kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; +__kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps; +__kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; __kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs; +__kvm_nvhe___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access; __kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; __kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; +__kvm_nvhe___vgic_v3_restore_state = __vgic_v3_restore_state; __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs; +__kvm_nvhe___vgic_v3_save_state = __vgic_v3_save_state; __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end; __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; @@ -81,13 +98,23 @@ __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; __kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; __kvm_nvhe_cpu_hwcaps = cpu_hwcaps; -__kvm_nvhe_hyp_panic = hyp_panic; +#ifdef CONFIG_ARM64_PSEUDO_NMI +__kvm_nvhe_gic_pmr_sync = gic_pmr_sync; +#endif __kvm_nvhe_idmap_t0sz = idmap_t0sz; __kvm_nvhe_kimage_voffset = kimage_voffset; __kvm_nvhe_kvm_host_data = kvm_host_data; __kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; +__kvm_nvhe_kvm_skip_instr32 = kvm_skip_instr32; __kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask; +__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state; __kvm_nvhe_panic = panic; +#ifdef CONFIG_ARM64_SVE +__kvm_nvhe_sve_load_state = sve_load_state; +__kvm_nvhe_sve_save_state = sve_save_state; +#endif +__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; +__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; #endif /* CONFIG_KVM */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 90cb90561446..59bbe6ce2d54 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -748,11 +748,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) trace_kvm_entry(*vcpu_pc(vcpu)); guest_enter_irqoff(); - if (has_vhe()) { - ret = kvm_vcpu_run_vhe(vcpu); - } else { - ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu); - 
} + ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu); vcpu->mode = OUTSIDE_GUEST_MODE; vcpu->stat.exits++; diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 7e3c72fa634f..8316ee67d6a0 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -182,6 +182,7 @@ el2_error: eret sb +#ifdef __KVM_NVHE_HYPERVISOR__ SYM_FUNC_START(__hyp_do_panic) mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ PSR_MODE_EL1h) @@ -191,6 +192,7 @@ SYM_FUNC_START(__hyp_do_panic) eret sb SYM_FUNC_END(__hyp_do_panic) +#endif SYM_CODE_START(__hyp_panic) get_host_ctxt x0, x1 diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 3bfc51de1679..336b1bf64ceb 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := tlb.o hyp-init.o ../hyp-entry.o +obj-y := switch.o tlb.o hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c new file mode 100644 index 000000000000..8f004d7da177 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -0,0 +1,271 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../switch.h" + +static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + ___activate_traps(vcpu); + __activate_traps_common(vcpu); + + val = CPTR_EL2_DEFAULT; + val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; + if (!update_fp_enabled(vcpu)) { + val |= CPTR_EL2_TFP; + __activate_traps_fpsimd32(vcpu); + } + + write_sysreg(val, cptr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; + + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * configured and enabled. We can now restore the guest's S1 + * configuration: SCTLR, and only then TCR. + */ + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + isb(); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } +} + +static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +{ + u64 mdcr_el2; + + ___deactivate_traps(vcpu); + + mdcr_el2 = read_sysreg(mdcr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { + u64 val; + + /* + * Set the TCR and SCTLR registers in the exact opposite + * sequence as __activate_traps (first prevent walks, + * then force the MMU on). A generous sprinkling of isb() + * ensure that things happen in this exact order. 
+ */ + val = read_sysreg_el1(SYS_TCR); + write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); + isb(); + val = read_sysreg_el1(SYS_SCTLR); + write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); + isb(); + } + + __deactivate_traps_common(); + + mdcr_el2 &= MDCR_EL2_HPMN_MASK; + mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; + + write_sysreg(mdcr_el2, mdcr_el2); + write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); + write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); +} + +static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +{ + write_sysreg(0, vttbr_el2); +} + +/* Save VGICv3 state on non-VHE systems */ +static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3); + __vgic_v3_deactivate_traps(&vcpu->arch.vgic_cpu.vgic_v3); + } +} + +/* Restore VGICv3 state on non_VEH systems */ +static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_activate_traps(&vcpu->arch.vgic_cpu.vgic_v3); + __vgic_v3_restore_state(&vcpu->arch.vgic_cpu.vgic_v3); + } +} + +/** + * Disable host events, enable guest events + */ +static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenclr_el0); + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenset_el0); + + return (pmu->events_host || pmu->events_guest); +} + +/** + * Disable guest events, enable host events + */ +static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenclr_el0); + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenset_el0); +} + +/* Switch to the guest for legacy non-VHE systems */ +int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + bool pmu_switch_needed; + u64 exit_code; + + /* + * Having IRQs masked via PMR when entering the guest means the GIC + * will not signal the CPU of interrupts of lower priority, and the + * only way to get out will be via guest exceptions. + * Naturally, we want to avoid this. + */ + if (system_uses_irq_prio_masking()) { + gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); + pmr_sync(); + } + + vcpu = kern_hyp_va(vcpu); + + host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; + host_ctxt->__hyp_running_vcpu = vcpu; + guest_ctxt = &vcpu->arch.ctxt; + + pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); + + __sysreg_save_state_nvhe(host_ctxt); + + /* + * We must restore the 32-bit state before the sysregs, thanks + * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). + * + * Also, and in order to be able to deal with erratum #1319537 (A57) + * and #1319367 (A72), we must ensure that all VM-related sysreg are + * restored before we enable S2 translation. 
+ */ + __sysreg32_restore_state(vcpu); + __sysreg_restore_state_nvhe(guest_ctxt); + + __activate_vm(kern_hyp_va(vcpu->kvm)); + __activate_traps(vcpu); + + __hyp_vgic_restore_state(vcpu); + __timer_enable_traps(vcpu); + + __debug_switch_to_guest(vcpu); + + __set_guest_arch_workaround_state(vcpu); + + do { + /* Jump in the fire! */ + exit_code = __guest_enter(vcpu, host_ctxt); + + /* And we're baaack! */ + } while (fixup_guest_exit(vcpu, &exit_code)); + + __set_host_arch_workaround_state(vcpu); + + __sysreg_save_state_nvhe(guest_ctxt); + __sysreg32_save_state(vcpu); + __timer_disable_traps(vcpu); + __hyp_vgic_save_state(vcpu); + + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + + __sysreg_restore_state_nvhe(host_ctxt); + + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + __fpsimd_save_fpexc32(vcpu); + + /* + * This must come after restoring the host sysregs, since a non-VHE + * system may enable SPE here and make use of the TTBRs. + */ + __debug_switch_to_host(vcpu); + + if (pmu_switch_needed) + __pmu_switch_to_host(host_ctxt); + + /* Returning to host will clear PSR.I, remask PMR if needed */ + if (system_uses_irq_prio_masking()) + gic_write_pmr(GIC_PRIO_IRQOFF); + + return exit_code; +} + +void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +{ + u64 spsr = read_sysreg_el2(SYS_SPSR); + u64 elr = read_sysreg_el2(SYS_ELR); + u64 par = read_sysreg(par_el1); + struct kvm_vcpu *vcpu = host_ctxt->__hyp_running_vcpu; + unsigned long str_va; + + if (read_sysreg(vttbr_el2)) { + __timer_disable_traps(vcpu); + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + __sysreg_restore_state_nvhe(host_ctxt); + } + + /* + * Force the panic string to be loaded from the literal pool, + * making sure it is a kernel address and not a PC-relative + * reference. + */ + asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); + + __hyp_do_panic(str_va, + spsr, elr, + read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), + read_sysreg(hpfar_el2), par, vcpu); + unreachable(); +} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 9270b14157b5..6d82fbda1848 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -24,76 +24,16 @@ #include #include -/* Check whether the FP regs were dirtied while in the host-side run loop: */ -static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) -{ - /* - * When the system doesn't support FP/SIMD, we cannot rely on - * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an - * abort on the very first access to FP and thus we should never - * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always - * trap the accesses. - */ - if (!system_supports_fpsimd() || - vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | - KVM_ARM64_FP_HOST); - - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); -} - -/* Save the 32-bit only FPSIMD system register state */ -static void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) -{ - if (!vcpu_el1_is_32bit(vcpu)) - return; - - vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); -} - -static void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) -{ - /* - * We are about to set CPTR_EL2.TFP to trap all floating point - * register accesses to EL2, however, the ARM ARM clearly states that - * traps are only taken to EL2 if the operation would not otherwise - * trap to EL1. 
Therefore, always make sure that for 32-bit guests, - * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. - * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to - * it will cause an exception. - */ - if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { - write_sysreg(1 << 30, fpexc32_el2); - isb(); - } -} - -static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) -{ - /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ - write_sysreg(1 << 15, hstr_el2); +#include "switch.h" - /* - * Make sure we trap PMU access from EL0 to EL2. Also sanitize - * PMSELR_EL0 to make sure it never contains the cycle - * counter, which could make a PMXEVCNTR_EL0 access UNDEF at - * EL1 instead of being trapped to EL2. - */ - write_sysreg(0, pmselr_el0); - write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); -} +const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; -static void __hyp_text __deactivate_traps_common(void) -{ - write_sysreg(0, hstr_el2); - write_sysreg(0, pmuserenr_el0); -} - -static void activate_traps_vhe(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; + ___activate_traps(vcpu); + val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; val &= ~CPACR_EL1_ZEN; @@ -121,59 +61,14 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) write_sysreg(kvm_get_hyp_vector(), vbar_el1); } -NOKPROBE_SYMBOL(activate_traps_vhe); - -static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) -{ - u64 val; - - __activate_traps_common(vcpu); - - val = CPTR_EL2_DEFAULT; - val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; - if (!update_fp_enabled(vcpu)) { - val |= CPTR_EL2_TFP; - __activate_traps_fpsimd32(vcpu); - } - - write_sysreg(val, cptr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; - - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * configured and enabled. We can now restore the guest's S1 - * configuration: SCTLR, and only then TCR. 
- */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } -} +NOKPROBE_SYMBOL(__activate_traps); -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { - u64 hcr = vcpu->arch.hcr_el2; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) - hcr |= HCR_TVM; - - write_sysreg(hcr, hcr_el2); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); + extern char vectors[]; /* kernel exception vectors */ - if (has_vhe()) - activate_traps_vhe(vcpu); - else - __activate_traps_nvhe(vcpu); -} + ___deactivate_traps(vcpu); -static void deactivate_traps_vhe(void) -{ - extern char vectors[]; /* kernel exception vectors */ write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); /* @@ -186,57 +81,7 @@ static void deactivate_traps_vhe(void) write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); write_sysreg(vectors, vbar_el1); } -NOKPROBE_SYMBOL(deactivate_traps_vhe); - -static void __hyp_text __deactivate_traps_nvhe(void) -{ - u64 mdcr_el2 = read_sysreg(mdcr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - u64 val; - - /* - * Set the TCR and SCTLR registers in the exact opposite - * sequence as __activate_traps_nvhe (first prevent walks, - * then force the MMU on). A generous sprinkling of isb() - * ensure that things happen in this exact order. - */ - val = read_sysreg_el1(SYS_TCR); - write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); - isb(); - val = read_sysreg_el1(SYS_SCTLR); - write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); - isb(); - } - - __deactivate_traps_common(); - - mdcr_el2 &= MDCR_EL2_HPMN_MASK; - mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; - - write_sysreg(mdcr_el2, mdcr_el2); - write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); - write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); -} - -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) -{ - /* - * If we pended a virtual abort, preserve it until it gets - * cleared. See D1.14.3 (Virtual Interrupts) for details, but - * the crucial bit is "On taking a vSError interrupt, - * HCR_EL2.VSE is cleared to 0." 
- */ - if (vcpu->arch.hcr_el2 & HCR_VSE) { - vcpu->arch.hcr_el2 &= ~HCR_VSE; - vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; - } - - if (has_vhe()) - deactivate_traps_vhe(); - else - __deactivate_traps_nvhe(); -} +NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { @@ -256,446 +101,6 @@ void deactivate_traps_vhe_put(void) __deactivate_traps_common(); } -static void __hyp_text __activate_vm(struct kvm *kvm) -{ - __load_guest_stage2(kvm); -} - -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) -{ - write_sysreg(0, vttbr_el2); -} - -/* Save VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3); - __vgic_v3_deactivate_traps(&vcpu->arch.vgic_cpu.vgic_v3); - } -} - -/* Restore VGICv3 state on non_VEH systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_activate_traps(&vcpu->arch.vgic_cpu.vgic_v3); - __vgic_v3_restore_state(&vcpu->arch.vgic_cpu.vgic_v3); - } -} - -static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) -{ - u64 par, tmp; - - /* - * Resolve the IPA the hard way using the guest VA. - * - * Stage-1 translation already validated the memory access - * rights. As such, we can use the EL1 translation regime, and - * don't have to distinguish between EL0 and EL1 access. - * - * We do need to save/restore PAR_EL1 though, as we haven't - * saved the guest context yet, and we may return early... - */ - par = read_sysreg(par_el1); - asm volatile("at s1e1r, %0" : : "r" (far)); - isb(); - - tmp = read_sysreg(par_el1); - write_sysreg(par, par_el1); - - if (unlikely(tmp & SYS_PAR_EL1_F)) - return false; /* Translation failed, back to guest */ - - /* Convert PAR to HPFAR format */ - *hpfar = PAR_TO_HPFAR(tmp); - return true; -} - -static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) -{ - u8 ec; - u64 esr; - u64 hpfar, far; - - esr = vcpu->arch.fault.esr_el2; - ec = ESR_ELx_EC(esr); - - if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) - return true; - - far = read_sysreg_el2(SYS_FAR); - - /* - * The HPFAR can be invalid if the stage 2 fault did not - * happen during a stage 1 page table walk (the ESR_EL2.S1PTW - * bit is clear) and one of the two following cases are true: - * 1. The fault was due to a permission fault - * 2. The processor carries errata 834220 - * - * Therefore, for all non S1PTW faults where we either have a - * permission fault or the errata workaround is enabled, we - * resolve the IPA using the AT instruction. 
- */ - if (!(esr & ESR_ELx_S1PTW) && - (cpus_have_final_cap(ARM64_WORKAROUND_834220) || - (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { - if (!__translate_far_to_hpfar(far, &hpfar)) - return false; - } else { - hpfar = read_sysreg(hpfar_el2); - } - - vcpu->arch.fault.far_el2 = far; - vcpu->arch.fault.hpfar_el2 = hpfar; - return true; -} - -/* Check for an FPSIMD/SVE trap and handle as appropriate */ -static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) -{ - bool vhe, sve_guest, sve_host; - u8 hsr_ec; - - if (!system_supports_fpsimd()) - return false; - - if (system_supports_sve()) { - sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; - vhe = true; - } else { - sve_guest = false; - sve_host = false; - vhe = has_vhe(); - } - - hsr_ec = kvm_vcpu_trap_get_class(vcpu); - if (hsr_ec != ESR_ELx_EC_FP_ASIMD && - hsr_ec != ESR_ELx_EC_SVE) - return false; - - /* Don't handle SVE traps for non-SVE vcpus here: */ - if (!sve_guest) - if (hsr_ec != ESR_ELx_EC_FP_ASIMD) - return false; - - /* Valid trap. Switch the context: */ - - if (vhe) { - u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; - - if (sve_guest) - reg |= CPACR_EL1_ZEN; - - write_sysreg(reg, cpacr_el1); - } else { - write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, - cptr_el2); - } - - isb(); - - if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { - /* - * In the SVE case, VHE is assumed: it is enforced by - * Kconfig and kvm_arch_init(). - */ - if (sve_host) { - struct thread_struct *thread = container_of( - vcpu->arch.host_fpsimd_state, - struct thread_struct, uw.fpsimd_state); - - sve_save_state(sve_pffr(thread), - &vcpu->arch.host_fpsimd_state->fpsr); - } else { - __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - } - - vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; - } - - if (sve_guest) { - sve_load_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, - sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); - write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); - } else { - __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); - } - - /* Skip restoring fpexc32 for AArch64 guests */ - if (!(read_sysreg(hcr_el2) & HCR_RW)) - write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], - fpexc32_el2); - - vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; - - return true; -} - -static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) -{ - u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); - int rt = kvm_vcpu_sys_get_rt(vcpu); - u64 val = vcpu_get_reg(vcpu, rt); - - /* - * The normal sysreg handling code expects to see the traps, - * let's not do anything here. 
- */ - if (vcpu->arch.hcr_el2 & HCR_TVM) - return false; - - switch (sysreg) { - case SYS_SCTLR_EL1: - write_sysreg_el1(val, SYS_SCTLR); - break; - case SYS_TTBR0_EL1: - write_sysreg_el1(val, SYS_TTBR0); - break; - case SYS_TTBR1_EL1: - write_sysreg_el1(val, SYS_TTBR1); - break; - case SYS_TCR_EL1: - write_sysreg_el1(val, SYS_TCR); - break; - case SYS_ESR_EL1: - write_sysreg_el1(val, SYS_ESR); - break; - case SYS_FAR_EL1: - write_sysreg_el1(val, SYS_FAR); - break; - case SYS_AFSR0_EL1: - write_sysreg_el1(val, SYS_AFSR0); - break; - case SYS_AFSR1_EL1: - write_sysreg_el1(val, SYS_AFSR1); - break; - case SYS_MAIR_EL1: - write_sysreg_el1(val, SYS_MAIR); - break; - case SYS_AMAIR_EL1: - write_sysreg_el1(val, SYS_AMAIR); - break; - case SYS_CONTEXTIDR_EL1: - write_sysreg_el1(val, SYS_CONTEXTIDR); - break; - default: - return false; - } - - __kvm_skip_instr(vcpu); - return true; -} - -static bool __hyp_text esr_is_ptrauth_trap(u32 esr) -{ - u32 ec = ESR_ELx_EC(esr); - - if (ec == ESR_ELx_EC_PAC) - return true; - - if (ec != ESR_ELx_EC_SYS64) - return false; - - switch (esr_sys64_to_sysreg(esr)) { - case SYS_APIAKEYLO_EL1: - case SYS_APIAKEYHI_EL1: - case SYS_APIBKEYLO_EL1: - case SYS_APIBKEYHI_EL1: - case SYS_APDAKEYLO_EL1: - case SYS_APDAKEYHI_EL1: - case SYS_APDBKEYLO_EL1: - case SYS_APDBKEYHI_EL1: - case SYS_APGAKEYLO_EL1: - case SYS_APGAKEYHI_EL1: - return true; - } - - return false; -} - -#define __ptrauth_save_key(regs, key) \ -({ \ - regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ - regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ -}) - -static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) -{ - struct kvm_cpu_context *ctxt; - u64 val; - - if (!vcpu_has_ptrauth(vcpu) || - !esr_is_ptrauth_trap(kvm_vcpu_get_hsr(vcpu))) - return false; - - ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; - __ptrauth_save_key(ctxt->sys_regs, APIA); - __ptrauth_save_key(ctxt->sys_regs, APIB); - __ptrauth_save_key(ctxt->sys_regs, APDA); - __ptrauth_save_key(ctxt->sys_regs, APDB); - __ptrauth_save_key(ctxt->sys_regs, APGA); - - vcpu_ptrauth_enable(vcpu); - - val = read_sysreg(hcr_el2); - val |= (HCR_API | HCR_APK); - write_sysreg(val, hcr_el2); - - return true; -} - -/* - * Return true when we were able to fixup the guest exit and should return to - * the guest, false when we should restore the host state and return to the - * main run loop. - */ -static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) -{ - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); - - /* - * We're using the raw exception code in order to only process - * the trap if no SError is pending. We will come back to the - * same PC once the SError has been injected, and replay the - * trapping instruction. - */ - if (*exit_code != ARM_EXCEPTION_TRAP) - goto exit; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && - handle_tx2_tvm(vcpu)) - return true; - - /* - * We trap the first access to the FP/SIMD to save the host context - * and restore the guest context lazily. - * If FP/SIMD is not implemented, handle the trap and inject an - * undefined instruction exception to the guest. - * Similarly for trapped SVE accesses. 
- */ - if (__hyp_handle_fpsimd(vcpu)) - return true; - - if (__hyp_handle_ptrauth(vcpu)) - return true; - - if (!__populate_fault_info(vcpu)) - return true; - - if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { - bool valid; - - valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && - kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && - kvm_vcpu_dabt_isvalid(vcpu) && - !kvm_vcpu_dabt_isextabt(vcpu) && - !kvm_vcpu_dabt_iss1tw(vcpu); - - if (valid) { - int ret = __vgic_v2_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - - /* Promote an illegal access to an SError.*/ - if (ret == -1) - *exit_code = ARM_EXCEPTION_EL1_SERROR; - - goto exit; - } - } - - if (static_branch_unlikely(&vgic_v3_cpuif_trap) && - (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { - int ret = __vgic_v3_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - } - -exit: - /* Return to the host kernel and handle the exit */ - return false; -} - -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) -{ - if (!cpus_have_final_cap(ARM64_SSBD)) - return false; - - return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); -} - -static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * The host runs with the workaround always present. If the - * guest wants it disabled, so be it... - */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); -#endif -} - -static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * If the guest has disabled the workaround, bring it back on. - */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); -#endif -} - -/** - * Disable host events, enable guest events - */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenclr_el0); - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenset_el0); - - return (pmu->events_host || pmu->events_guest); -} - -/** - * Disable guest events, enable host events - */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenclr_el0); - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenset_el0); -} - /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { @@ -752,7 +157,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); -int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { int ret; @@ -787,126 +192,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) return ret; } -/* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) -{ - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - bool pmu_switch_needed; - u64 exit_code; - - /* - * Having 
IRQs masked via PMR when entering the guest means the GIC - * will not signal the CPU of interrupts of lower priority, and the - * only way to get out will be via guest exceptions. - * Naturally, we want to avoid this. - */ - if (system_uses_irq_prio_masking()) { - gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); - pmr_sync(); - } - - vcpu = kern_hyp_va(vcpu); - - host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; - host_ctxt->__hyp_running_vcpu = vcpu; - guest_ctxt = &vcpu->arch.ctxt; - - pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); - - __sysreg_save_state_nvhe(host_ctxt); - - /* - * We must restore the 32-bit state before the sysregs, thanks - * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). - * - * Also, and in order to be able to deal with erratum #1319537 (A57) - * and #1319367 (A72), we must ensure that all VM-related sysreg are - * restored before we enable S2 translation. - */ - __sysreg32_restore_state(vcpu); - __sysreg_restore_state_nvhe(guest_ctxt); - - __activate_vm(kern_hyp_va(vcpu->kvm)); - __activate_traps(vcpu); - - __hyp_vgic_restore_state(vcpu); - __timer_enable_traps(vcpu); - - __debug_switch_to_guest(vcpu); - - __set_guest_arch_workaround_state(vcpu); - - do { - /* Jump in the fire! */ - exit_code = __guest_enter(vcpu, host_ctxt); - - /* And we're baaack! */ - } while (fixup_guest_exit(vcpu, &exit_code)); - - __set_host_arch_workaround_state(vcpu); - - __sysreg_save_state_nvhe(guest_ctxt); - __sysreg32_save_state(vcpu); - __timer_disable_traps(vcpu); - __hyp_vgic_save_state(vcpu); - - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - - __sysreg_restore_state_nvhe(host_ctxt); - - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) - __fpsimd_save_fpexc32(vcpu); - - /* - * This must come after restoring the host sysregs, since a non-VHE - * system may enable SPE here and make use of the TTBRs. - */ - __debug_switch_to_host(vcpu); - - if (pmu_switch_needed) - __pmu_switch_to_host(host_ctxt); - - /* Returning to host will clear PSR.I, remask PMR if needed */ - if (system_uses_irq_prio_masking()) - gic_write_pmr(GIC_PRIO_IRQOFF); - - return exit_code; -} - -static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; - -static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *__host_ctxt) -{ - struct kvm_vcpu *vcpu; - unsigned long str_va; - - vcpu = __host_ctxt->__hyp_running_vcpu; - - if (read_sysreg(vttbr_el2)) { - __timer_disable_traps(vcpu); - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - __sysreg_restore_state_nvhe(__host_ctxt); - } - - /* - * Force the panic string to be loaded from the literal pool, - * making sure it is a kernel address and not a PC-relative - * reference. 
- */ - asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); - - __hyp_do_panic(str_va, - spsr, elr, - read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), - read_sysreg(hpfar_el2), par, vcpu); -} - -static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *host_ctxt) +static void __hyp_call_panic(u64 spsr, u64 elr, u64 par, + struct kvm_cpu_context *host_ctxt) { struct kvm_vcpu *vcpu; vcpu = host_ctxt->__hyp_running_vcpu; @@ -919,18 +206,14 @@ static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR), read_sysreg(hpfar_el2), par, vcpu); } -NOKPROBE_SYMBOL(__hyp_call_panic_vhe); +NOKPROBE_SYMBOL(__hyp_call_panic); -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); u64 par = read_sysreg(par_el1); - if (!has_vhe()) - __hyp_call_panic_nvhe(spsr, elr, par, host_ctxt); - else - __hyp_call_panic_vhe(spsr, elr, par, host_ctxt); - + __hyp_call_panic(spsr, elr, par, host_ctxt); unreachable(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h new file mode 100644 index 000000000000..5b71d52c41f4 --- /dev/null +++ b/arch/arm64/kvm/hyp/switch.h @@ -0,0 +1,507 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_SWITCH_H__ +#define __ARM64_KVM_HYP_SWITCH_H__ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern const char __hyp_panic_string[]; + +/* Check whether the FP regs were dirtied while in the host-side run loop: */ +static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +{ + /* + * When the system doesn't support FP/SIMD, we cannot rely on + * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an + * abort on the very first access to FP and thus we should never + * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always + * trap the accesses. + */ + if (!system_supports_fpsimd() || + vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_FP_HOST); + + return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); +} + +/* Save the 32-bit only FPSIMD system register state */ +static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +{ + if (!vcpu_el1_is_32bit(vcpu)) + return; + + vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); +} + +static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +{ + /* + * We are about to set CPTR_EL2.TFP to trap all floating point + * register accesses to EL2, however, the ARM ARM clearly states that + * traps are only taken to EL2 if the operation would not otherwise + * trap to EL1. Therefore, always make sure that for 32-bit guests, + * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. + * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to + * it will cause an exception. 
+ */ + if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { + write_sysreg(1 << 30, fpexc32_el2); + isb(); + } +} + +static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +{ + /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ + write_sysreg(1 << 15, hstr_el2); + + /* + * Make sure we trap PMU access from EL0 to EL2. Also sanitize + * PMSELR_EL0 to make sure it never contains the cycle + * counter, which could make a PMXEVCNTR_EL0 access UNDEF at + * EL1 instead of being trapped to EL2. + */ + write_sysreg(0, pmselr_el0); + write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); +} + +static inline void __hyp_text __deactivate_traps_common(void) +{ + write_sysreg(0, hstr_el2); + write_sysreg(0, pmuserenr_el0); +} + +static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) +{ + u64 hcr = vcpu->arch.hcr_el2; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) + hcr |= HCR_TVM; + + write_sysreg(hcr, hcr_el2); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) + write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); +} + +static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +{ + /* + * If we pended a virtual abort, preserve it until it gets + * cleared. See D1.14.3 (Virtual Interrupts) for details, but + * the crucial bit is "On taking a vSError interrupt, + * HCR_EL2.VSE is cleared to 0." + */ + if (vcpu->arch.hcr_el2 & HCR_VSE) { + vcpu->arch.hcr_el2 &= ~HCR_VSE; + vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; + } +} + +static inline void __hyp_text __activate_vm(struct kvm *kvm) +{ + __load_guest_stage2(kvm); +} + +static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +{ + u64 par, tmp; + + /* + * Resolve the IPA the hard way using the guest VA. + * + * Stage-1 translation already validated the memory access + * rights. As such, we can use the EL1 translation regime, and + * don't have to distinguish between EL0 and EL1 access. + * + * We do need to save/restore PAR_EL1 though, as we haven't + * saved the guest context yet, and we may return early... + */ + par = read_sysreg(par_el1); + asm volatile("at s1e1r, %0" : : "r" (far)); + isb(); + + tmp = read_sysreg(par_el1); + write_sysreg(par, par_el1); + + if (unlikely(tmp & SYS_PAR_EL1_F)) + return false; /* Translation failed, back to guest */ + + /* Convert PAR to HPFAR format */ + *hpfar = PAR_TO_HPFAR(tmp); + return true; +} + +static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +{ + u8 ec; + u64 esr; + u64 hpfar, far; + + esr = vcpu->arch.fault.esr_el2; + ec = ESR_ELx_EC(esr); + + if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) + return true; + + far = read_sysreg_el2(SYS_FAR); + + /* + * The HPFAR can be invalid if the stage 2 fault did not + * happen during a stage 1 page table walk (the ESR_EL2.S1PTW + * bit is clear) and one of the two following cases are true: + * 1. The fault was due to a permission fault + * 2. The processor carries errata 834220 + * + * Therefore, for all non S1PTW faults where we either have a + * permission fault or the errata workaround is enabled, we + * resolve the IPA using the AT instruction. 
+ */ + if (!(esr & ESR_ELx_S1PTW) && + (cpus_have_final_cap(ARM64_WORKAROUND_834220) || + (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { + if (!__translate_far_to_hpfar(far, &hpfar)) + return false; + } else { + hpfar = read_sysreg(hpfar_el2); + } + + vcpu->arch.fault.far_el2 = far; + vcpu->arch.fault.hpfar_el2 = hpfar; + return true; +} + +/* Check for an FPSIMD/SVE trap and handle as appropriate */ +static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +{ + bool vhe, sve_guest, sve_host; + u8 hsr_ec; + + if (!system_supports_fpsimd()) + return false; + + if (system_supports_sve()) { + sve_guest = vcpu_has_sve(vcpu); + sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + vhe = true; + } else { + sve_guest = false; + sve_host = false; + vhe = has_vhe(); + } + + hsr_ec = kvm_vcpu_trap_get_class(vcpu); + if (hsr_ec != ESR_ELx_EC_FP_ASIMD && + hsr_ec != ESR_ELx_EC_SVE) + return false; + + /* Don't handle SVE traps for non-SVE vcpus here: */ + if (!sve_guest) + if (hsr_ec != ESR_ELx_EC_FP_ASIMD) + return false; + + /* Valid trap. Switch the context: */ + + if (vhe) { + u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; + + if (sve_guest) + reg |= CPACR_EL1_ZEN; + + write_sysreg(reg, cpacr_el1); + } else { + write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, + cptr_el2); + } + + isb(); + + if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + /* + * In the SVE case, VHE is assumed: it is enforced by + * Kconfig and kvm_arch_init(). + */ + if (sve_host) { + struct thread_struct *thread = container_of( + vcpu->arch.host_fpsimd_state, + struct thread_struct, uw.fpsimd_state); + + sve_save_state(sve_pffr(thread), + &vcpu->arch.host_fpsimd_state->fpsr); + } else { + __fpsimd_save_state(vcpu->arch.host_fpsimd_state); + } + + vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + } + + if (sve_guest) { + sve_load_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, + sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); + write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); + } else { + __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); + } + + /* Skip restoring fpexc32 for AArch64 guests */ + if (!(read_sysreg(hcr_el2) & HCR_RW)) + write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], + fpexc32_el2); + + vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + + return true; +} + +static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +{ + u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); + int rt = kvm_vcpu_sys_get_rt(vcpu); + u64 val = vcpu_get_reg(vcpu, rt); + + /* + * The normal sysreg handling code expects to see the traps, + * let's not do anything here. 
+ */ + if (vcpu->arch.hcr_el2 & HCR_TVM) + return false; + + switch (sysreg) { + case SYS_SCTLR_EL1: + write_sysreg_el1(val, SYS_SCTLR); + break; + case SYS_TTBR0_EL1: + write_sysreg_el1(val, SYS_TTBR0); + break; + case SYS_TTBR1_EL1: + write_sysreg_el1(val, SYS_TTBR1); + break; + case SYS_TCR_EL1: + write_sysreg_el1(val, SYS_TCR); + break; + case SYS_ESR_EL1: + write_sysreg_el1(val, SYS_ESR); + break; + case SYS_FAR_EL1: + write_sysreg_el1(val, SYS_FAR); + break; + case SYS_AFSR0_EL1: + write_sysreg_el1(val, SYS_AFSR0); + break; + case SYS_AFSR1_EL1: + write_sysreg_el1(val, SYS_AFSR1); + break; + case SYS_MAIR_EL1: + write_sysreg_el1(val, SYS_MAIR); + break; + case SYS_AMAIR_EL1: + write_sysreg_el1(val, SYS_AMAIR); + break; + case SYS_CONTEXTIDR_EL1: + write_sysreg_el1(val, SYS_CONTEXTIDR); + break; + default: + return false; + } + + __kvm_skip_instr(vcpu); + return true; +} + +static inline bool __hyp_text esr_is_ptrauth_trap(u32 esr) +{ + u32 ec = ESR_ELx_EC(esr); + + if (ec == ESR_ELx_EC_PAC) + return true; + + if (ec != ESR_ELx_EC_SYS64) + return false; + + switch (esr_sys64_to_sysreg(esr)) { + case SYS_APIAKEYLO_EL1: + case SYS_APIAKEYHI_EL1: + case SYS_APIBKEYLO_EL1: + case SYS_APIBKEYHI_EL1: + case SYS_APDAKEYLO_EL1: + case SYS_APDAKEYHI_EL1: + case SYS_APDBKEYLO_EL1: + case SYS_APDBKEYHI_EL1: + case SYS_APGAKEYLO_EL1: + case SYS_APGAKEYHI_EL1: + return true; + } + + return false; +} + +#define __ptrauth_save_key(regs, key) \ +({ \ + regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ + regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ +}) + +static inline bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *ctxt; + u64 val; + + if (!vcpu_has_ptrauth(vcpu) || + !esr_is_ptrauth_trap(kvm_vcpu_get_hsr(vcpu))) + return false; + + ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; + __ptrauth_save_key(ctxt->sys_regs, APIA); + __ptrauth_save_key(ctxt->sys_regs, APIB); + __ptrauth_save_key(ctxt->sys_regs, APDA); + __ptrauth_save_key(ctxt->sys_regs, APDB); + __ptrauth_save_key(ctxt->sys_regs, APGA); + + vcpu_ptrauth_enable(vcpu); + + val = read_sysreg(hcr_el2); + val |= (HCR_API | HCR_APK); + write_sysreg(val, hcr_el2); + + return true; +} + +/* + * Return true when we were able to fixup the guest exit and should return to + * the guest, false when we should restore the host state and return to the + * main run loop. + */ +static inline bool __hyp_text +fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +{ + if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) + vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); + + /* + * We're using the raw exception code in order to only process + * the trap if no SError is pending. We will come back to the + * same PC once the SError has been injected, and replay the + * trapping instruction. + */ + if (*exit_code != ARM_EXCEPTION_TRAP) + goto exit; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && + handle_tx2_tvm(vcpu)) + return true; + + /* + * We trap the first access to the FP/SIMD to save the host context + * and restore the guest context lazily. + * If FP/SIMD is not implemented, handle the trap and inject an + * undefined instruction exception to the guest. + * Similarly for trapped SVE accesses. 
+ */ + if (__hyp_handle_fpsimd(vcpu)) + return true; + + if (__hyp_handle_ptrauth(vcpu)) + return true; + + if (!__populate_fault_info(vcpu)) + return true; + + if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { + bool valid; + + valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && + kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && + kvm_vcpu_dabt_isvalid(vcpu) && + !kvm_vcpu_dabt_isextabt(vcpu) && + !kvm_vcpu_dabt_iss1tw(vcpu); + + if (valid) { + int ret = __vgic_v2_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + + /* Promote an illegal access to an SError.*/ + if (ret == -1) + *exit_code = ARM_EXCEPTION_EL1_SERROR; + + goto exit; + } + } + + if (static_branch_unlikely(&vgic_v3_cpuif_trap) && + (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { + int ret = __vgic_v3_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + } + +exit: + /* Return to the host kernel and handle the exit */ + return false; +} + +static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +{ + if (!cpus_have_final_cap(ARM64_SSBD)) + return false; + + return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); +} + +static inline void __hyp_text +__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * The host runs with the workaround always present. If the + * guest wants it disabled, so be it... + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); +#endif +} + +static inline void __hyp_text +__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * If the guest has disabled the workaround, bring it back on. + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); +#endif +} + +#endif /* __ARM64_KVM_HYP_SWITCH_H__ */ diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index cc7e957f5b2c..2493439a5c54 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -114,7 +114,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) /* * Must only be done for guest registers, hence the context * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with __activate_traps_nvhe(). + * set. Pairs with nVHE's __activate_traps(). */ write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | TCR_EPD1_MASK | TCR_EPD0_MASK), @@ -142,7 +142,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) ctxt->__hyp_running_vcpu) { /* * Must only be done for host registers, hence the context - * test. Pairs with __deactivate_traps_nvhe(). + * test. Pairs with nVHE's __deactivate_traps(). 
 */
 		isb();
 		/*

From patchwork Thu Jun 18 12:25:31 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 09/15] arm64: kvm: Split hyp/debug-sr.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:31 +0100
Message-Id: <20200618122537.9625-10-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

debug-sr.c contains KVM's code for context-switching debug registers, with some parts shared between VHE/nVHE. These common routines are moved to debug-sr.h, VHE-specific code is left in debug-sr.c, and nVHE-specific code is moved to nvhe/debug-sr.c. Functions are slightly refactored to move code hidden behind `has_vhe()` checks to the corresponding .c files.
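The general shape of the split, as a minimal compilable sketch (every name below is invented for illustration only; none of them appear in the patch):

  /* foo-sr.h: helpers shared by both variants, kept static inline */
  struct foo_ctx { int state; };

  static inline void __foo_switch_common(struct foo_ctx *ctx)
  {
  	ctx->state++;			/* work needed by VHE and nVHE alike */
  }

  /* foo-sr.c: VHE-only compilation unit, linked into the kernel proper */
  void foo_switch_vhe(struct foo_ctx *ctx)
  {
  	__foo_switch_common(ctx);	/* no runtime has_vhe() check needed */
  }

  /* nvhe/foo-sr.c: nVHE-only compilation unit, built separately */
  static void __foo_nvhe_prep(struct foo_ctx *ctx)
  {
  	ctx->state += 100;		/* code that used to sit behind !has_vhe() */
  }

  void foo_switch_nvhe(struct foo_ctx *ctx)
  {
  	__foo_nvhe_prep(ctx);
  	__foo_switch_common(ctx);
  }

With each variant in its own compilation unit, the runtime has_vhe() checks in the shared file disappear and the two sides can be built with different compiler flags.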
Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 3 - arch/arm64/kvm/hyp/debug-sr.c | 210 +---------------------------- arch/arm64/kvm/hyp/debug-sr.h | 172 +++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 77 +++++++++++ 5 files changed, 256 insertions(+), 208 deletions(-) create mode 100644 arch/arm64/kvm/hyp/debug-sr.h create mode 100644 arch/arm64/kvm/hyp/nvhe/debug-sr.c diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 855f9718d6d9..8096e6f1f2bf 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,8 +61,6 @@ __efistub__ctype = _ctype; * memory mappings. */ -__kvm_nvhe___debug_switch_to_guest = __debug_switch_to_guest; -__kvm_nvhe___debug_switch_to_host = __debug_switch_to_host; __kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state; __kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; __kvm_nvhe___guest_enter = __guest_enter; @@ -71,7 +69,6 @@ __kvm_nvhe___hyp_panic_string = __hyp_panic_string; __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; -__kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; __kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; __kvm_nvhe___sysreg32_save_state = __sysreg32_save_state; diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c index e95af204fec7..28c0a54cda2a 100644 --- a/arch/arm64/kvm/hyp/debug-sr.c +++ b/arch/arm64/kvm/hyp/debug-sr.c @@ -4,221 +4,23 @@ * Author: Marc Zyngier */ -#include #include -#include -#include #include -#include -#define read_debug(r,n) read_sysreg(r##n##_el1) -#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) +#include "debug-sr.h" -#define save_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: ptr[15] = read_debug(reg, 15); \ - /* Fall through */ \ - case 14: ptr[14] = read_debug(reg, 14); \ - /* Fall through */ \ - case 13: ptr[13] = read_debug(reg, 13); \ - /* Fall through */ \ - case 12: ptr[12] = read_debug(reg, 12); \ - /* Fall through */ \ - case 11: ptr[11] = read_debug(reg, 11); \ - /* Fall through */ \ - case 10: ptr[10] = read_debug(reg, 10); \ - /* Fall through */ \ - case 9: ptr[9] = read_debug(reg, 9); \ - /* Fall through */ \ - case 8: ptr[8] = read_debug(reg, 8); \ - /* Fall through */ \ - case 7: ptr[7] = read_debug(reg, 7); \ - /* Fall through */ \ - case 6: ptr[6] = read_debug(reg, 6); \ - /* Fall through */ \ - case 5: ptr[5] = read_debug(reg, 5); \ - /* Fall through */ \ - case 4: ptr[4] = read_debug(reg, 4); \ - /* Fall through */ \ - case 3: ptr[3] = read_debug(reg, 3); \ - /* Fall through */ \ - case 2: ptr[2] = read_debug(reg, 2); \ - /* Fall through */ \ - case 1: ptr[1] = read_debug(reg, 1); \ - /* Fall through */ \ - default: ptr[0] = read_debug(reg, 0); \ - } - -#define restore_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: write_debug(ptr[15], reg, 15); \ - /* Fall through */ \ - case 14: write_debug(ptr[14], reg, 14); \ - /* Fall through */ \ - case 13: write_debug(ptr[13], reg, 13); \ - /* Fall through */ \ - case 12: write_debug(ptr[12], reg, 12); \ - /* Fall through */ \ - case 11: write_debug(ptr[11], reg, 11); \ - /* Fall through */ \ - case 10: write_debug(ptr[10], reg, 10); \ - /* Fall through */ \ - case 9: write_debug(ptr[9], reg, 9); \ - /* Fall through */ \ - case 8: write_debug(ptr[8], reg, 8); \ - /* Fall through */ \ - case 7: write_debug(ptr[7], reg, 7); 
\ - /* Fall through */ \ - case 6: write_debug(ptr[6], reg, 6); \ - /* Fall through */ \ - case 5: write_debug(ptr[5], reg, 5); \ - /* Fall through */ \ - case 4: write_debug(ptr[4], reg, 4); \ - /* Fall through */ \ - case 3: write_debug(ptr[3], reg, 3); \ - /* Fall through */ \ - case 2: write_debug(ptr[2], reg, 2); \ - /* Fall through */ \ - case 1: write_debug(ptr[1], reg, 1); \ - /* Fall through */ \ - default: write_debug(ptr[0], reg, 0); \ - } - -static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1) -{ - u64 reg; - - /* Clear pmscr in case of early return */ - *pmscr_el1 = 0; - - /* SPE present on this CPU? */ - if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1), - ID_AA64DFR0_PMSVER_SHIFT)) - return; - - /* Yes; is it owned by EL3? */ - reg = read_sysreg_s(SYS_PMBIDR_EL1); - if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT)) - return; - - /* No; is the host actually using the thing? */ - reg = read_sysreg_s(SYS_PMBLIMITR_EL1); - if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT))) - return; - - /* Yes; save the control register and disable data generation */ - *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1); - write_sysreg_s(0, SYS_PMSCR_EL1); - isb(); - - /* Now drain all buffered data to memory */ - psb_csync(); - dsb(nsh); -} - -static void __hyp_text __debug_restore_spe_nvhe(u64 pmscr_el1) -{ - if (!pmscr_el1) - return; - - /* The host page table is installed, but not yet synchronised */ - isb(); - - /* Re-enable data generation */ - write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); -} - -static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) -{ - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - save_debug(dbg->dbg_bcr, dbgbcr, brps); - save_debug(dbg->dbg_bvr, dbgbvr, brps); - save_debug(dbg->dbg_wcr, dbgwcr, wrps); - save_debug(dbg->dbg_wvr, dbgwvr, wrps); - - ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); -} - -static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - restore_debug(dbg->dbg_bcr, dbgbcr, brps); - restore_debug(dbg->dbg_bvr, dbgbvr, brps); - restore_debug(dbg->dbg_wcr, dbgwcr, wrps); - restore_debug(dbg->dbg_wvr, dbgwvr, wrps); - - write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); + __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - /* - * Non-VHE: Disable and flush SPE data generation - * VHE: The vcpu can run, but it can't hide. 
- */ - if (!has_vhe()) - __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, host_dbg, host_ctxt); - __debug_restore_state(vcpu, guest_dbg, guest_ctxt); -} - -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) -{ - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - if (!has_vhe()) - __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, guest_dbg, guest_ctxt); - __debug_restore_state(vcpu, host_dbg, host_ctxt); - - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; + __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h new file mode 100644 index 000000000000..62b5deeb301d --- /dev/null +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -0,0 +1,172 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_DEBUG_SR_H__ +#define __ARM64_KVM_HYP_DEBUG_SR_H__ + +#include +#include + +#include +#include +#include +#include + +#define read_debug(r,n) read_sysreg(r##n##_el1) +#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) + +#define save_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: ptr[15] = read_debug(reg, 15); \ + /* Fall through */ \ + case 14: ptr[14] = read_debug(reg, 14); \ + /* Fall through */ \ + case 13: ptr[13] = read_debug(reg, 13); \ + /* Fall through */ \ + case 12: ptr[12] = read_debug(reg, 12); \ + /* Fall through */ \ + case 11: ptr[11] = read_debug(reg, 11); \ + /* Fall through */ \ + case 10: ptr[10] = read_debug(reg, 10); \ + /* Fall through */ \ + case 9: ptr[9] = read_debug(reg, 9); \ + /* Fall through */ \ + case 8: ptr[8] = read_debug(reg, 8); \ + /* Fall through */ \ + case 7: ptr[7] = read_debug(reg, 7); \ + /* Fall through */ \ + case 6: ptr[6] = read_debug(reg, 6); \ + /* Fall through */ \ + case 5: ptr[5] = read_debug(reg, 5); \ + /* Fall through */ \ + case 4: ptr[4] = read_debug(reg, 4); \ + /* Fall through */ \ + case 3: ptr[3] = read_debug(reg, 3); \ + /* Fall through */ \ + case 2: ptr[2] = read_debug(reg, 2); \ + /* Fall through */ \ + case 1: ptr[1] = read_debug(reg, 1); \ + /* Fall through */ \ + default: ptr[0] = read_debug(reg, 0); \ + } + +#define restore_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: write_debug(ptr[15], reg, 15); \ + /* Fall through */ \ + case 14: write_debug(ptr[14], reg, 14); \ + /* Fall through */ \ + case 13: write_debug(ptr[13], reg, 13); \ + /* Fall through */ \ + case 12: write_debug(ptr[12], reg, 12); \ + /* Fall through */ \ + case 11: write_debug(ptr[11], reg, 11); \ + /* Fall through */ \ + case 10: write_debug(ptr[10], reg, 10); \ + /* Fall through */ \ + case 9: write_debug(ptr[9], reg, 9); \ + /* Fall through */ \ + case 8: write_debug(ptr[8], reg, 8); \ + /* Fall through */ \ + case 7: 
write_debug(ptr[7], reg, 7); \ + /* Fall through */ \ + case 6: write_debug(ptr[6], reg, 6); \ + /* Fall through */ \ + case 5: write_debug(ptr[5], reg, 5); \ + /* Fall through */ \ + case 4: write_debug(ptr[4], reg, 4); \ + /* Fall through */ \ + case 3: write_debug(ptr[3], reg, 3); \ + /* Fall through */ \ + case 2: write_debug(ptr[2], reg, 2); \ + /* Fall through */ \ + case 1: write_debug(ptr[1], reg, 1); \ + /* Fall through */ \ + default: write_debug(ptr[0], reg, 0); \ + } + +static inline void __hyp_text +__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + save_debug(dbg->dbg_bcr, dbgbcr, brps); + save_debug(dbg->dbg_bvr, dbgbvr, brps); + save_debug(dbg->dbg_wcr, dbgwcr, wrps); + save_debug(dbg->dbg_wvr, dbgwvr, wrps); + + ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); +} + +static inline void __hyp_text +__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + restore_debug(dbg->dbg_bcr, dbgbcr, brps); + restore_debug(dbg->dbg_bvr, dbgbvr, brps); + restore_debug(dbg->dbg_wcr, dbgwcr, wrps); + restore_debug(dbg->dbg_wvr, dbgwvr, wrps); + + write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); +} + +static inline void __hyp_text +__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, host_dbg, host_ctxt); + __debug_restore_state(vcpu, guest_dbg, guest_ctxt); +} + +static inline void __hyp_text +__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt; + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, guest_dbg, guest_ctxt); + __debug_restore_state(vcpu, host_dbg, host_ctxt); + + vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; +} + +#endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 336b1bf64ceb..95a06786bf26 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := switch.o tlb.o hyp-init.o ../hyp-entry.o +obj-y := debug-sr.o switch.o tlb.o hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c new file mode 100644 index 
000000000000..b3752cfdcf3d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "../debug-sr.h"
+
+static void __hyp_text __debug_save_spe(u64 *pmscr_el1)
+{
+	u64 reg;
+
+	/* Clear pmscr in case of early return */
+	*pmscr_el1 = 0;
+
+	/* SPE present on this CPU? */
+	if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
+						  ID_AA64DFR0_PMSVER_SHIFT))
+		return;
+
+	/* Yes; is it owned by EL3? */
+	reg = read_sysreg_s(SYS_PMBIDR_EL1);
+	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
+		return;
+
+	/* No; is the host actually using the thing? */
+	reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
+		return;
+
+	/* Yes; save the control register and disable data generation */
+	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
+	write_sysreg_s(0, SYS_PMSCR_EL1);
+	isb();
+
+	/* Now drain all buffered data to memory */
+	psb_csync();
+	dsb(nsh);
+}
+
+static void __hyp_text __debug_restore_spe(u64 pmscr_el1)
+{
+	if (!pmscr_el1)
+		return;
+
+	/* The host page table is installed, but not yet synchronised */
+	isb();
+
+	/* Re-enable data generation */
+	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
+}
+
+void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	/* Disable and flush SPE data generation */
+	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_switch_to_guest_common(vcpu);
+}
+
+void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_switch_to_host_common(vcpu);
+}
+
+u32 __hyp_text __kvm_get_mdcr_el2(void)
+{
+	return read_sysreg(mdcr_el2);
+}

From patchwork Thu Jun 18 12:25:32 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 10/15] arm64: kvm: Split hyp/sysreg-sr.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:32 +0100
Message-Id: <20200618122537.9625-11-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

sysreg-sr.c contains KVM's code for saving/restoring system registers, with some parts shared between VHE/nVHE. These common routines are moved to sysreg-sr.h, VHE-specific code is left in sysreg-sr.c, and nVHE-specific code is moved to nvhe/sysreg-sr.c.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_hyp.h | 4 +
 arch/arm64/kernel/image-vars.h | 5 -
 arch/arm64/kvm/arm.c | 2 +-
 arch/arm64/kvm/hyp/nvhe/Makefile | 2 +-
 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 56 ++++++++
 arch/arm64/kvm/hyp/sysreg-sr.c | 215 +---------------------------
 arch/arm64/kvm/hyp/sysreg-sr.h | 211 +++++++++++++++++++++++++++
 7 files changed, 279 insertions(+), 216 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
 create mode 100644 arch/arm64/kvm/hyp/sysreg-sr.h

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 1cb5903a2693..c8bbd221aac0 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -66,12 +66,16 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 void __timer_enable_traps(struct kvm_vcpu *vcpu);
 void __timer_disable_traps(struct kvm_vcpu *vcpu);
+#ifdef __KVM_NVHE_HYPERVISOR__
 void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt);
 void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt);
+#else
 void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt);
 void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt);
 void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
+#endif
+
 void __sysreg32_save_state(struct kvm_vcpu *vcpu);
 void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8096e6f1f2bf..ddaae7267ab1 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -68,12 +68,7 @@ __kvm_nvhe___guest_exit = __guest_exit;
 __kvm_nvhe___hyp_panic_string = __hyp_panic_string;
 __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors;
 __kvm_nvhe___icache_flags = __icache_flags;
-__kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; -__kvm_nvhe___sysreg32_save_state = __sysreg32_save_state; -__kvm_nvhe___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; -__kvm_nvhe___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; __kvm_nvhe___timer_disable_traps = __timer_disable_traps; __kvm_nvhe___timer_enable_traps = __timer_enable_traps; __kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 59bbe6ce2d54..62ceb546393e 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1302,7 +1302,7 @@ static void cpu_init_hyp_mode(void) */ if (this_cpu_has_cap(ARM64_SSBS) && arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) { - kvm_call_hyp(__kvm_enable_ssbs); + kvm_call_hyp_nvhe(__kvm_enable_ssbs); } } diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 95a06786bf26..d242e437cf89 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := debug-sr.o switch.o tlb.o hyp-init.o ../hyp-entry.o +obj-y := sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c new file mode 100644 index 000000000000..55ab924d841a --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +#include "../sysreg-sr.h" + +/* + * Non-VHE: Both host and guest must save everything. + */ + +void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_save_el1_state(ctxt); + __sysreg_save_common_state(ctxt); + __sysreg_save_user_state(ctxt); + __sysreg_save_el2_return_state(ctxt); +} + +void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_restore_el1_state(ctxt); + __sysreg_restore_common_state(ctxt); + __sysreg_restore_user_state(ctxt); + __sysreg_restore_el2_return_state(ctxt); +} + +void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_save_state(vcpu); +} + +void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_restore_state(vcpu); +} + +void __hyp_text __kvm_enable_ssbs(void) +{ + u64 tmp; + + asm volatile( + "mrs %0, sctlr_el2\n" + "orr %0, %0, %1\n" + "msr sctlr_el2, %0" + : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); +} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 2493439a5c54..5b559b00e9e5 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -12,9 +12,9 @@ #include #include +#include "sysreg-sr.h" + /* - * Non-VHE: Both host and guest must save everything. - * * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and * pstate, which are handled as part of the el2 return state) on every * switch (sp_el0 is being dealt with in the assembly code). @@ -24,59 +24,6 @@ * classes are handled as part of kvm_arch_vcpu_load and kvm_arch_vcpu_put. 
*/ -static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); -} - -static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); - ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); -} - -static void __hyp_text __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); - ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); - ctxt->sys_regs[CPACR_EL1] = read_sysreg_el1(SYS_CPACR); - ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); - ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); - ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); - ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); - ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); - ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); - ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); - ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); - ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); - ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); - ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); - ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); - ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); - ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); - - ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); - ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); - ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); -} - -static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) -{ - ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); - ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_save_el1_state(ctxt); - __sysreg_save_common_state(ctxt); - __sysreg_save_user_state(ctxt); - __sysreg_save_el2_return_state(ctxt); -} - void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_save_common_state(ctxt); @@ -90,111 +37,6 @@ void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_save_guest_state_vhe); -static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); -} - -static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); - write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); -} - -static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); - write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); - - if (has_vhe() || - !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } else if (!ctxt->__hyp_running_vcpu) { - /* - * Must only be done for guest registers, hence the context - * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with nVHE's __activate_traps(). 
- */ - write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | - TCR_EPD1_MASK | TCR_EPD0_MASK), - SYS_TCR); - isb(); - } - - write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); - write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); - write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); - write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); - write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); - write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); - write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); - write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); - write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); - write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); - write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); - write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); - write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); - write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); - - if (!has_vhe() && - cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) && - ctxt->__hyp_running_vcpu) { - /* - * Must only be done for host registers, hence the context - * test. Pairs with nVHE's __deactivate_traps(). - */ - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * deconfigured and disabled. We can now restore the host's - * S1 configuration: SCTLR, and only then TCR. - */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } - - write_sysreg(ctxt->gp_regs.sp_el1, sp_el1); - write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR); - write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); -} - -static void __hyp_text -__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) -{ - u64 pstate = ctxt->gp_regs.regs.pstate; - u64 mode = pstate & PSR_AA32_MODE_MASK; - - /* - * Safety check to ensure we're setting the CPU up to enter the guest - * in a less privileged mode. - * - * If we are attempting a return to EL2 or higher in AArch64 state, - * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that - * we'll take an illegal exception state exception immediately after - * the ERET to the guest. Attempts to return to AArch32 Hyp will - * result in an illegal exception return because EL2's execution state - * is determined by SCR_EL3.RW. 
- */ - if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t) - pstate = PSR_MODE_EL2h | PSR_IL_BIT; - - write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); - write_sysreg_el2(pstate, SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_restore_el1_state(ctxt); - __sysreg_restore_common_state(ctxt); - __sysreg_restore_user_state(ctxt); - __sysreg_restore_el2_return_state(ctxt); -} - void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_common_state(ctxt); @@ -208,48 +50,14 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt); - spsr[KVM_SPSR_UND] = read_sysreg(spsr_und); - spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq); - spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq); - - sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); - sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); - - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); + ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt); - write_sysreg(spsr[KVM_SPSR_UND], spsr_und); - write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq); - write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq); - - write_sysreg(sysreg[DACR32_EL2], dacr32_el2); - write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); - - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); + ___sysreg32_restore_state(vcpu); } /** @@ -320,14 +128,3 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = false; } - -void __hyp_text __kvm_enable_ssbs(void) -{ - u64 tmp; - - asm volatile( - "mrs %0, sctlr_el2\n" - "orr %0, %0, %1\n" - "msr sctlr_el2, %0" - : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); -} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h new file mode 100644 index 000000000000..7bc102c60294 --- /dev/null +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_SYSREG_SR_H__ +#define __ARM64_KVM_HYP_SYSREG_SR_H__ + +#include +#include + +#include +#include +#include +#include + +static inline void __hyp_text +__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); +} + +static inline void __hyp_text +__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); + ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); + ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); + ctxt->sys_regs[CPACR_EL1] = 
read_sysreg_el1(SYS_CPACR); + ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); + ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); + ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); + ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); + ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); + ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); + ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); + ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); + ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); + ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); + ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); + ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); + ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); + ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); + + ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); + ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); + ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); +} + +static inline void __hyp_text +__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +{ + ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); + ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) + ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); +} + +static inline void __hyp_text +__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); +} + +static inline void __hyp_text +__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); + write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); + write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); + + if (has_vhe() || + !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } else if (!ctxt->__hyp_running_vcpu) { + /* + * Must only be done for guest registers, hence the context + * test. We're coming from the host, so SCTLR.M is already + * set. Pairs with nVHE's __activate_traps(). + */ + write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | + TCR_EPD1_MASK | TCR_EPD0_MASK), + SYS_TCR); + isb(); + } + + write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); + write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); + write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); + write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); + write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); + write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); + write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); + write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); + write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); + write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); + write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); + write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); + write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); + write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); + + if (!has_vhe() && + cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) && + ctxt->__hyp_running_vcpu) { + /* + * Must only be done for host registers, hence the context + * test. Pairs with nVHE's __deactivate_traps(). + */ + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * deconfigured and disabled. 
We can now restore the host's
+	 * S1 configuration: SCTLR, and only then TCR.
+	 */
+	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1],	SYS_SCTLR);
+	isb();
+	write_sysreg_el1(ctxt->sys_regs[TCR_EL1],	SYS_TCR);
+	}
+
+	write_sysreg(ctxt->gp_regs.sp_el1,	sp_el1);
+	write_sysreg_el1(ctxt->gp_regs.elr_el1,	SYS_ELR);
+	write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR);
+}
+
+static inline void __hyp_text
+__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
+{
+	u64 pstate = ctxt->gp_regs.regs.pstate;
+	u64 mode = pstate & PSR_AA32_MODE_MASK;
+
+	/*
+	 * Safety check to ensure we're setting the CPU up to enter the guest
+	 * in a less privileged mode.
+	 *
+	 * If we are attempting a return to EL2 or higher in AArch64 state,
+	 * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that
+	 * we'll take an illegal exception state exception immediately after
+	 * the ERET to the guest. Attempts to return to AArch32 Hyp will
+	 * result in an illegal exception return because EL2's execution state
+	 * is determined by SCR_EL3.RW.
+	 */
+	if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t)
+		pstate = PSR_MODE_EL2h | PSR_IL_BIT;
+
+	write_sysreg_el2(ctxt->gp_regs.regs.pc,	SYS_ELR);
+	write_sysreg_el2(pstate,	SYS_SPSR);
+
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
+		write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2);
+}
+
+static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu)
+{
+	u64 *spsr, *sysreg;
+
+	if (!vcpu_el1_is_32bit(vcpu))
+		return;
+
+	spsr = vcpu->arch.ctxt.gp_regs.spsr;
+	sysreg = vcpu->arch.ctxt.sys_regs;
+
+	spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt);
+	spsr[KVM_SPSR_UND] = read_sysreg(spsr_und);
+	spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq);
+	spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq);
+
+	sysreg[DACR32_EL2] = read_sysreg(dacr32_el2);
+	sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2);
+
+	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+		sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2);
+}
+
+static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu)
+{
+	u64 *spsr, *sysreg;
+
+	if (!vcpu_el1_is_32bit(vcpu))
+		return;
+
+	spsr = vcpu->arch.ctxt.gp_regs.spsr;
+	sysreg = vcpu->arch.ctxt.sys_regs;
+
+	write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt);
+	write_sysreg(spsr[KVM_SPSR_UND], spsr_und);
+	write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq);
+	write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq);
+
+	write_sysreg(sysreg[DACR32_EL2], dacr32_el2);
+	write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2);
+
+	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+		write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2);
+}
+
+#endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */

From patchwork Thu Jun 18 12:25:33 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 11/15] arm64: kvm: Split hyp/timer-sr.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:33 +0100
Message-Id: <20200618122537.9625-12-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

timer-sr.c contains an HVC handler for setting CNTVOFF_EL2 and two helper functions for controlling access to the physical counter. The former is shared between VHE/nVHE and is kept in timer-sr.c but compiled under both configs. The latter are nVHE-specific and are moved to nvhe/timer-sr.c.
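[A reader aid, not part of the patch: the two nVHE-only helpers just toggle the EL1 physical-timer trap bits in CNTHCTL_EL2. Below is a minimal stand-alone sketch of that bit logic; the bit positions are an assumption based on the architected layout when HCR_EL2.E2H == 0 (EL1PCTEN is bit 0, EL1PCEN is bit 1), and the function names are invented for the demo:

#include <stdint.h>
#include <stdio.h>

/* Assumed CNTHCTL_EL2 bit positions (HCR_EL2.E2H == 0). */
#define CNTHCTL_EL1PCTEN (UINT64_C(1) << 0) /* EL1 physical counter access */
#define CNTHCTL_EL1PCEN  (UINT64_C(1) << 1) /* EL1 physical timer access */

/* Mirrors __timer_enable_traps(): trap guest access to the physical
 * timer while keeping the physical counter readable. */
static uint64_t timer_enable_traps(uint64_t cnthctl)
{
	cnthctl &= ~CNTHCTL_EL1PCEN;
	cnthctl |= CNTHCTL_EL1PCTEN;
	return cnthctl;
}

/* Mirrors __timer_disable_traps(): give the host full access again. */
static uint64_t timer_disable_traps(uint64_t cnthctl)
{
	return cnthctl | CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
}

int main(void)
{
	uint64_t reg = timer_enable_traps(0);

	printf("guest: cnthctl_el2 = %#llx\n", (unsigned long long)reg);
	reg = timer_disable_traps(reg);
	printf("host:  cnthctl_el2 = %#llx\n", (unsigned long long)reg);
	return 0;
}

Running it prints 0x1 for the guest (counter readable, timer trapped) and 0x3 for the host (both accessible), matching the two register writes in the hunks below.]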
Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_hyp.h | 2 ++ arch/arm64/kernel/image-vars.h | 3 --- arch/arm64/kvm/hyp/nvhe/Makefile | 3 ++- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 43 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/timer-sr.c | 36 ------------------------- 5 files changed, 47 insertions(+), 40 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/timer-sr.c diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index c8bbd221aac0..8a1510f521fe 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -63,8 +63,10 @@ void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if); void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if); int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); +#ifdef __KVM_NVHE_HYPERVISOR__ void __timer_enable_traps(struct kvm_vcpu *vcpu); void __timer_disable_traps(struct kvm_vcpu *vcpu); +#endif #ifdef __KVM_NVHE_HYPERVISOR__ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index ddaae7267ab1..94bfc61b3f51 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -68,9 +68,6 @@ __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___hyp_panic_string = __hyp_panic_string; __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors; __kvm_nvhe___icache_flags = __icache_flags; -__kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___timer_disable_traps = __timer_disable_traps; -__kvm_nvhe___timer_enable_traps = __timer_enable_traps; __kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; __kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps; __kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index d242e437cf89..4ec34abce0a9 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,8 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o ../hyp-entry.o +obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \ + hyp-init.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c new file mode 100644 index 000000000000..f0e694743883 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include + +#include + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). + */ +void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + /* Allow physical timer/counter access for the host */ + val = read_sysreg(cnthctl_el2); + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; + write_sysreg(val, cnthctl_el2); +} + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
+ */
+void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu)
+{
+	u64 val;
+
+	/*
+	 * Disallow physical timer access for the guest
+	 * Physical counter access is allowed
+	 */
+	val = read_sysreg(cnthctl_el2);
+	val &= ~CNTHCTL_EL1PCEN;
+	val |= CNTHCTL_EL1PCTEN;
+	write_sysreg(val, cnthctl_el2);
+}
diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c
index fb5c0be33223..6c620d807857 100644
--- a/arch/arm64/kvm/hyp/timer-sr.c
+++ b/arch/arm64/kvm/hyp/timer-sr.c
@@ -4,45 +4,9 @@
  * Author: Marc Zyngier
  */
-#include
-#include
-#include
-
 #include
 
 void __hyp_text __kvm_timer_set_cntvoff(u64 cntvoff)
 {
 	write_sysreg(cntvoff, cntvoff_el2);
 }
-
-/*
- * Should only be called on non-VHE systems.
- * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
- */
-void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu)
-{
-	u64 val;
-
-	/* Allow physical timer/counter access for the host */
-	val = read_sysreg(cnthctl_el2);
-	val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
-	write_sysreg(val, cnthctl_el2);
-}
-
-/*
- * Should only be called on non-VHE systems.
- * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
- */
-void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu)
-{
-	u64 val;
-
-	/*
-	 * Disallow physical timer access for the guest
-	 * Physical counter access is allowed
-	 */
-	val = read_sysreg(cnthctl_el2);
-	val &= ~CNTHCTL_EL1PCEN;
-	val |= CNTHCTL_EL1PCTEN;
-	write_sysreg(val, cnthctl_el2);
-}

From patchwork Thu Jun 18 12:25:34 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 12/15] arm64: kvm: Compile remaining hyp/ files for both VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:34 +0100
Message-Id: <20200618122537.9625-13-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

The following files in hyp/ contain only code shared by VHE/nVHE: vgic-v3-sr.c, aarch32.c, vgic-v2-cpuif-proxy.c, entry.S, fpsimd.S. Compile them under both configurations. Deletions in image-vars.h reflect eliminated dependencies of nVHE code on the rest of the kernel.

Signed-off-by: David Brazdil
---
 arch/arm64/kernel/image-vars.h   | 19 -------------------
 arch/arm64/kvm/hyp/nvhe/Makefile |  5 +++--
 2 files changed, 3 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 94bfc61b3f51..2cc3e7673dc2 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -61,27 +61,9 @@ __efistub__ctype = _ctype;
  * memory mappings.
  */
-__kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state;
-__kvm_nvhe___fpsimd_save_state = __fpsimd_save_state;
-__kvm_nvhe___guest_enter = __guest_enter;
-__kvm_nvhe___guest_exit = __guest_exit;
 __kvm_nvhe___hyp_panic_string = __hyp_panic_string;
 __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors;
 __kvm_nvhe___icache_flags = __icache_flags;
-__kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access;
-__kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps;
-__kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps;
-__kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2;
-__kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs;
-__kvm_nvhe___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access;
-__kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr;
-__kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs;
-__kvm_nvhe___vgic_v3_restore_state = __vgic_v3_restore_state;
-__kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs;
-__kvm_nvhe___vgic_v3_save_state = __vgic_v3_save_state;
-__kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
-__kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end;
-__kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start;
 __kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready;
 __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
 __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
@@ -94,7 +76,6 @@ __kvm_nvhe_idmap_t0sz = idmap_t0sz;
 __kvm_nvhe_kimage_voffset = kimage_voffset;
 __kvm_nvhe_kvm_host_data = kvm_host_data;
 __kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch;
-__kvm_nvhe_kvm_skip_instr32 = kvm_skip_instr32;
 __kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask;
 __kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state;
 __kvm_nvhe_panic = panic;
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 4ec34abce0a9..d51ae163430d 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,8 +7,9 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)
 
-obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \
-	 hyp-init.o ../hyp-entry.o
+obj-y := ../vgic-v3-sr.o ../timer-sr.o timer-sr.o ../aarch32.o \
+	../vgic-v2-cpuif-proxy.o sysreg-sr.o debug-sr.o ../entry.o switch.o \
+	../fpsimd.o tlb.o hyp-init.o ../hyp-entry.o
 
 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))

From patchwork Thu Jun 18 12:25:35 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 13/15] arm64: kvm: Add comments around __kvm_nvhe_ symbol aliases
Date: Thu, 18 Jun 2020 13:25:35 +0100
Message-Id: <20200618122537.9625-14-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

With all source files split between VHE/nVHE, add comments around the list of symbols where nVHE code still links against kernel proper. Split them into groups and explain how each group is currently used. Some of these dependencies will be removed in the future.

Signed-off-by: David Brazdil
---
 arch/arm64/kernel/image-vars.h | 53 +++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 2cc3e7673dc2..da8f39fae5e8 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -61,30 +61,43 @@ __efistub__ctype = _ctype;
  * memory mappings.
  */
-__kvm_nvhe___hyp_panic_string = __hyp_panic_string;
-__kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors;
-__kvm_nvhe___icache_flags = __icache_flags;
-__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready;
-__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
-__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
-__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys;
-__kvm_nvhe_cpu_hwcaps = cpu_hwcaps;
+/* If nVHE code panics, it ERETs into panic() in EL1. */
+__kvm_nvhe___hyp_panic_string = __hyp_panic_string;
+__kvm_nvhe_panic = panic;
+
+/* Values used by the hyp-init vector. */
+__kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors;
+__kvm_nvhe_idmap_t0sz = idmap_t0sz;
+
+/* Alternative callbacks, referenced in .altinstructions. Executed in EL1. */
+__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
+__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch;
+__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask;
+
+/* Values used to convert between memory mappings, read-only after init. */
+__kvm_nvhe_kimage_voffset = kimage_voffset;
+
+/* Data shared with the kernel. */
+__kvm_nvhe_cpu_hwcaps = cpu_hwcaps;
+__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys;
+__kvm_nvhe___icache_flags = __icache_flags;
+__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state;
+__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
+__kvm_nvhe_kvm_host_data = kvm_host_data;
+
+/* Static keys shared with the kernel. */
+__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready;
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-__kvm_nvhe_gic_pmr_sync = gic_pmr_sync;
+__kvm_nvhe_gic_pmr_sync = gic_pmr_sync;
 #endif
-__kvm_nvhe_idmap_t0sz = idmap_t0sz;
-__kvm_nvhe_kimage_voffset = kimage_voffset;
-__kvm_nvhe_kvm_host_data = kvm_host_data;
-__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch;
-__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask;
-__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state;
-__kvm_nvhe_panic = panic;
+__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap;
+__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap;
+
+/* SVE support, currently unused by nVHE. */
 #ifdef CONFIG_ARM64_SVE
-__kvm_nvhe_sve_load_state = sve_load_state;
-__kvm_nvhe_sve_save_state = sve_save_state;
+__kvm_nvhe_sve_save_state = sve_save_state;
+__kvm_nvhe_sve_load_state = sve_load_state;
 #endif
-__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap;
-__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap;
 
 #endif /* CONFIG_KVM */

From patchwork Thu Jun 18 12:25:36 2020
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 14/15] arm64: kvm: Remove __hyp_text macro, use build rules instead
Date: Thu, 18 Jun 2020 13:25:36 +0100
Message-Id: <20200618122537.9625-15-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>

With nVHE code now fully separated from the rest of the kernel, the effects of the __hyp_text macro (which had to be applied to all nVHE code) can be achieved with build rules instead. The macro used to: (a) move code to the .hyp.text ELF section, now done by renaming .text using `objcopy`, and (b) apply `notrace` and `__noscs`, which negate the effects of CC_FLAGS_FTRACE and CC_FLAGS_SCS, respectively; now those flags are erased from KBUILD_CFLAGS (the same way as in the EFI stub).

Note that by removing __hyp_text from code shared with VHE, all VHE code is now compiled into .text and without `notrace` and `__noscs`.

Use of '.pushsection .hyp.text' is removed from assembly files, as this is now also covered by the build rules.

For MAINTAINERS: if this needs to be re-run, uses of the macro were removed with the following command. Formatting was fixed up manually.
find arch/arm64/kvm/hyp -type f -name '*.c' -o -name '*.h' \ -exec sed -i 's/ __hyp_text//g' {} + Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_emulate.h | 2 +- arch/arm64/include/asm/kvm_hyp.h | 2 - arch/arm64/kvm/hyp/aarch32.c | 6 +- arch/arm64/kvm/hyp/debug-sr.h | 18 ++-- arch/arm64/kvm/hyp/entry.S | 1 - arch/arm64/kvm/hyp/fpsimd.S | 1 - arch/arm64/kvm/hyp/hyp-entry.S | 1 - arch/arm64/kvm/hyp/nvhe/Makefile | 8 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 10 +- arch/arm64/kvm/hyp/nvhe/switch.c | 18 ++-- arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 10 +- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 4 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 14 ++- arch/arm64/kvm/hyp/switch.h | 39 ++++--- arch/arm64/kvm/hyp/sysreg-sr.h | 27 ++--- arch/arm64/kvm/hyp/timer-sr.c | 2 +- arch/arm64/kvm/hyp/tlb.c | 6 +- arch/arm64/kvm/hyp/tlb.h | 15 ++- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 4 +- arch/arm64/kvm/hyp/vgic-v3-sr.c | 130 ++++++++++------------- 20 files changed, 143 insertions(+), 175 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 4d0f8ea600ba..269a76cd51ff 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -516,7 +516,7 @@ static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_i * Skip an instruction which has been emulated at hyp while most guest sysregs * are live. */ -static __always_inline void __hyp_text __kvm_skip_instr(struct kvm_vcpu *vcpu) +static __always_inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 8a1510f521fe..fc4900a6bfe4 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -12,8 +12,6 @@ #include #include -#define __hyp_text __section(.hyp.text) notrace __noscs - #define read_sysreg_elx(r,nvh,vh) \ ({ \ u64 reg; \ diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index 25c0e47d57cb..f9ff67dfbf0b 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -44,7 +44,7 @@ static const unsigned short cc_map[16] = { /* * Check if a trapped instruction should have been executed or not. 
*/ -bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) +bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { unsigned long cpsr; u32 cpsr_cond; @@ -93,7 +93,7 @@ bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) * * IT[7:0] -> CPSR[26:25],CPSR[15:10] */ -static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) +static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) { unsigned long itbits, cond; unsigned long cpsr = *vcpu_cpsr(vcpu); @@ -123,7 +123,7 @@ static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) * kvm_skip_instr - skip a trapped instruction and proceed to the next * @vcpu: The vcpu pointer */ -void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) +void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) { u32 pc = *vcpu_pc(vcpu); bool is_thumb; diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h index 62b5deeb301d..24e8acf9ec10 100644 --- a/arch/arm64/kvm/hyp/debug-sr.h +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -88,9 +88,9 @@ default: write_debug(ptr[0], reg, 0); \ } -static inline void __hyp_text -__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_save_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -107,9 +107,9 @@ __debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); } -static inline void __hyp_text -__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_restore_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -127,8 +127,7 @@ __debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); } -static inline void __hyp_text -__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -147,8 +146,7 @@ __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) __debug_restore_state(vcpu, guest_dbg, guest_ctxt); } -static inline void __hyp_text -__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index 90186cf6473e..dfb4e6d359ab 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -21,7 +21,6 @@ #define CPU_SP_EL0_OFFSET (CPU_XREG_OFFSET(30) + 8) .text - .pushsection .hyp.text, "ax" /* * We treat x18 as callee-saved as the host may use it as a platform diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 5b8ff517ff10..01f114aa47b0 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -9,7 +9,6 @@ #include .text - .pushsection .hyp.text, "ax" SYM_FUNC_START(__fpsimd_save_state) fpsimd_save x0, 1 diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 8316ee67d6a0..689fccbc9de7 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -16,7 +16,6 @@ #include .text - .pushsection .hyp.text, "ax" .macro do_el2_call /* diff --git 
a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index d51ae163430d..5bc92674dab5 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -22,7 +22,13 @@ $(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE $(call if_changed,hypcopy) quiet_cmd_hypcopy = HYPCOPY $@ - cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@ + cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ \ + --rename-section=.text=.hyp.text \ + $< $@ + +# Remove ftrace and Shadow Call Stack CFLAGS. +# This is equivalent to the 'notrace' and '__noscs' annotations. +KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS), $(KBUILD_CFLAGS)) # KVM nVHE code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index b3752cfdcf3d..bb5c529da394 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -14,7 +14,7 @@ #include "../debug-sr.h" -static void __hyp_text __debug_save_spe(u64 *pmscr_el1) +static void __debug_save_spe(u64 *pmscr_el1) { u64 reg; @@ -46,7 +46,7 @@ static void __hyp_text __debug_save_spe(u64 *pmscr_el1) dsb(nsh); } -static void __hyp_text __debug_restore_spe(u64 pmscr_el1) +static void __debug_restore_spe(u64 pmscr_el1) { if (!pmscr_el1) return; @@ -58,20 +58,20 @@ static void __hyp_text __debug_restore_spe(u64 pmscr_el1) write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); } -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 8f004d7da177..9fcf902c556c 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -26,7 +26,7 @@ #include "../switch.h" -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -57,7 +57,7 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) } } -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { u64 mdcr_el2; @@ -92,13 +92,13 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); } -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +static void __deactivate_vm(struct kvm_vcpu *vcpu) { write_sysreg(0, vttbr_el2); } /* Save VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3); @@ -107,7 +107,7 @@ static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) } /* Restore VGICv3 state on non_VEH systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) { if 
(static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_activate_traps(&vcpu->arch.vgic_cpu.vgic_v3); @@ -118,7 +118,7 @@ static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) /** * Disable host events, enable guest events */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -138,7 +138,7 @@ static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) /** * Disable guest events, enable host events */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -154,7 +154,7 @@ static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) } /* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -241,7 +241,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) return exit_code; } -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c index 55ab924d841a..b1da891bf307 100644 --- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -18,7 +18,7 @@ * Non-VHE: Both host and guest must save everything. */ -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_save_el1_state(ctxt); __sysreg_save_common_state(ctxt); @@ -26,7 +26,7 @@ void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_save_el2_return_state(ctxt); } -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_el1_state(ctxt); __sysreg_restore_common_state(ctxt); @@ -34,17 +34,17 @@ void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_restore_el2_return_state(ctxt); } -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { ___sysreg32_restore_state(vcpu); } -void __hyp_text __kvm_enable_ssbs(void) +void __kvm_enable_ssbs(void) { u64 tmp; diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c index f0e694743883..8b80a4c4c4c6 100644 --- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -14,7 +14,7 @@ * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). */ -void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +void __timer_disable_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -28,7 +28,7 @@ void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
*/ -void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) +void __timer_enable_traps(struct kvm_vcpu *vcpu) { u64 val; diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index 111c4b0a23d3..329c23e52ff7 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -12,8 +12,7 @@ #include "../tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { u64 val; @@ -36,8 +35,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT)); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { write_sysreg(0, vttbr_el2); @@ -49,22 +47,22 @@ static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, } } -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { __tlb_flush_vmid_ipa(kvm, ipa); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_flush_vm_context(void) +void __kvm_flush_vm_context(void) { __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h index 5b71d52c41f4..ddea73e97bb5 100644 --- a/arch/arm64/kvm/hyp/switch.h +++ b/arch/arm64/kvm/hyp/switch.h @@ -30,7 +30,7 @@ extern const char __hyp_panic_string[]; /* Check whether the FP regs were dirtied while in the host-side run loop: */ -static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { /* * When the system doesn't support FP/SIMD, we cannot rely on @@ -48,7 +48,7 @@ static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) } /* Save the 32-bit only FPSIMD system register state */ -static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { if (!vcpu_el1_is_32bit(vcpu)) return; @@ -56,7 +56,7 @@ static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); } -static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { /* * We are about to set CPTR_EL2.TFP to trap all floating point @@ -73,7 +73,7 @@ static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +static inline void __activate_traps_common(struct kvm_vcpu *vcpu) { /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ write_sysreg(1 << 15, hstr_el2); @@ -89,13 +89,13 @@ static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); } -static inline void __hyp_text __deactivate_traps_common(void) +static inline void __deactivate_traps_common(void) { write_sysreg(0, hstr_el2); write_sysreg(0, pmuserenr_el0); } -static inline void __hyp_text ___activate_traps(struct 
kvm_vcpu *vcpu) +static inline void ___activate_traps(struct kvm_vcpu *vcpu) { u64 hcr = vcpu->arch.hcr_el2; @@ -108,7 +108,7 @@ static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); } -static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) { /* * If we pended a virtual abort, preserve it until it gets @@ -122,12 +122,12 @@ static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_vm(struct kvm *kvm) +static inline void __activate_vm(struct kvm *kvm) { __load_guest_stage2(kvm); } -static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar) { u64 par, tmp; @@ -156,7 +156,7 @@ static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) return true; } -static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) { u8 ec; u64 esr; @@ -196,7 +196,7 @@ static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) } /* Check for an FPSIMD/SVE trap and handle as appropriate */ -static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { bool vhe, sve_guest, sve_host; u8 hsr_ec; @@ -278,7 +278,7 @@ static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) return true; } -static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); @@ -333,7 +333,7 @@ static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) return true; } -static inline bool __hyp_text esr_is_ptrauth_trap(u32 esr) +static inline bool esr_is_ptrauth_trap(u32 esr) { u32 ec = ESR_ELx_EC(esr); @@ -366,7 +366,7 @@ static inline bool __hyp_text esr_is_ptrauth_trap(u32 esr) regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ }) -static inline bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) +static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *ctxt; u64 val; @@ -396,8 +396,7 @@ static inline bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop. 
*/ -static inline bool __hyp_text -fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); @@ -469,7 +468,7 @@ fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu) { if (!cpus_have_final_cap(ARM64_SSBD)) return false; @@ -477,8 +476,7 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); } -static inline void __hyp_text -__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* @@ -491,8 +489,7 @@ __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) #endif } -static inline void __hyp_text -__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h index 7bc102c60294..a42edd07403c 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -15,21 +15,18 @@ #include #include -static inline void __hyp_text -__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); } -static inline void __hyp_text -__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); } -static inline void __hyp_text -__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); @@ -54,8 +51,7 @@ __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); } -static inline void __hyp_text -__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) { ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); @@ -64,21 +60,18 @@ __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); } -static inline void __hyp_text -__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); } -static inline void __hyp_text -__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); } -static inline void __hyp_text -__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); @@ -137,7 +130,7 @@ 
__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); } -static inline void __hyp_text +static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) { u64 pstate = ctxt->gp_regs.regs.pstate; @@ -164,7 +157,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); } -static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_save_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; @@ -186,7 +179,7 @@ static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); } -static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_restore_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c index 6c620d807857..4cda674a8be6 100644 --- a/arch/arm64/kvm/hyp/timer-sr.c +++ b/arch/arm64/kvm/hyp/timer-sr.c @@ -6,7 +6,7 @@ #include -void __hyp_text __kvm_timer_set_cntvoff(u64 cntvoff) +void __kvm_timer_set_cntvoff(u64 cntvoff) { write_sysreg(cntvoff, cntvoff_el2); } diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index 4e190f8c7e9c..ebf07bb718ad 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -12,8 +12,7 @@ #include "tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { u64 val; @@ -56,8 +55,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h index 841ef400c8ec..25dba94d3f51 100644 --- a/arch/arm64/kvm/hyp/tlb.h +++ b/arch/arm64/kvm/hyp/tlb.h @@ -19,13 +19,10 @@ struct tlb_inv_context { u64 sctlr; }; -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt); -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt); +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt); +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt); -static inline void __hyp_text -__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +static inline void __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { struct tlb_inv_context cxt; @@ -79,7 +76,7 @@ __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +static inline void __tlb_flush_vmid(struct kvm *kvm) { struct tlb_inv_context cxt; @@ -96,7 +93,7 @@ static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +static inline void __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); struct tlb_inv_context cxt; @@ -111,7 +108,7 @@ static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vm_context(void) +static inline void 
__tlb_flush_vm_context(void) { dsb(ishst); __tlbi(alle1is); diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 4f3a087e36d5..bd1bab551d48 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -13,7 +13,7 @@ #include #include -static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) +static bool __is_be(struct kvm_vcpu *vcpu) { if (vcpu_mode_is_32bit(vcpu)) return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT); @@ -32,7 +32,7 @@ static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) * 0: Not a GICV access * -1: Illegal GICV access successfully performed */ -int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 10ed539835c1..d31eb6266f2e 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -16,7 +16,7 @@ #define vtr_to_nr_pre_bits(v) ((((u32)(v) >> 26) & 7) + 1) #define vtr_to_nr_apr_regs(v) (1 << (vtr_to_nr_pre_bits(v) - 5)) -static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) +static u64 __gic_v3_get_lr(unsigned int lr) { switch (lr & 0xf) { case 0: @@ -56,7 +56,7 @@ static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) unreachable(); } -static void __hyp_text __gic_v3_set_lr(u64 val, int lr) +static void __gic_v3_set_lr(u64 val, int lr) { switch (lr & 0xf) { case 0: @@ -110,7 +110,7 @@ static void __hyp_text __gic_v3_set_lr(u64 val, int lr) } } -static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) +static void __vgic_v3_write_ap0rn(u32 val, int n) { switch (n) { case 0: @@ -128,7 +128,7 @@ static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) } } -static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) +static void __vgic_v3_write_ap1rn(u32 val, int n) { switch (n) { case 0: @@ -146,7 +146,7 @@ static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) } } -static u32 __hyp_text __vgic_v3_read_ap0rn(int n) +static u32 __vgic_v3_read_ap0rn(int n) { u32 val; @@ -170,7 +170,7 @@ static u32 __hyp_text __vgic_v3_read_ap0rn(int n) return val; } -static u32 __hyp_text __vgic_v3_read_ap1rn(int n) +static u32 __vgic_v3_read_ap1rn(int n) { u32 val; @@ -194,7 +194,7 @@ static u32 __hyp_text __vgic_v3_read_ap1rn(int n) return val; } -void __hyp_text __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if) { u64 used_lrs = cpu_if->used_lrs; @@ -229,7 +229,7 @@ void __hyp_text __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if) } } -void __hyp_text __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if) { u64 used_lrs = cpu_if->used_lrs; int i; @@ -255,7 +255,7 @@ void __hyp_text __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if) } } -void __hyp_text __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if) { /* * VFIQEn is RES1 if ICC_SRE_EL1.SRE is 1. 
This causes a @@ -302,7 +302,7 @@ void __hyp_text __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if) write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if) { u64 val; @@ -328,7 +328,7 @@ void __hyp_text __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if) write_gicreg(0, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) { u64 val; u32 nr_pre_bits; @@ -361,7 +361,7 @@ void __hyp_text __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) } } -void __hyp_text __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) +void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) { u64 val; u32 nr_pre_bits; @@ -394,7 +394,7 @@ void __hyp_text __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) } } -void __hyp_text __vgic_v3_init_lrs(void) +void __vgic_v3_init_lrs(void) { int max_lr_idx = vtr_to_max_lr_idx(read_gicreg(ICH_VTR_EL2)); int i; @@ -403,28 +403,28 @@ void __hyp_text __vgic_v3_init_lrs(void) __gic_v3_set_lr(0, i); } -u64 __hyp_text __vgic_v3_get_ich_vtr_el2(void) +u64 __vgic_v3_get_ich_vtr_el2(void) { return read_gicreg(ICH_VTR_EL2); } -u64 __hyp_text __vgic_v3_read_vmcr(void) +u64 __vgic_v3_read_vmcr(void) { return read_gicreg(ICH_VMCR_EL2); } -void __hyp_text __vgic_v3_write_vmcr(u32 vmcr) +void __vgic_v3_write_vmcr(u32 vmcr) { write_gicreg(vmcr, ICH_VMCR_EL2); } -static int __hyp_text __vgic_v3_bpr_min(void) +static int __vgic_v3_bpr_min(void) { /* See Pseudocode for VPriorityGroup */ return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2)); } -static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) +static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { u32 esr = kvm_vcpu_get_hsr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -434,9 +434,8 @@ static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) #define GICv3_IDLE_PRIORITY 0xff -static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, - u32 vmcr, - u64 *lr_val) +static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs; u8 priority = GICv3_IDLE_PRIORITY; @@ -474,8 +473,8 @@ static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, return lr; } -static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, - int intid, u64 *lr_val) +static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs; int i; @@ -494,7 +493,7 @@ static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, return -1; } -static int __hyp_text __vgic_v3_get_highest_active_priority(void) +static int __vgic_v3_get_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -526,12 +525,12 @@ static int __hyp_text __vgic_v3_get_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static unsigned int __hyp_text __vgic_v3_get_bpr0(u32 vmcr) +static unsigned int __vgic_v3_get_bpr0(u32 vmcr) { return (vmcr & ICH_VMCR_BPR0_MASK) >> ICH_VMCR_BPR0_SHIFT; } -static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 vmcr) +static unsigned int __vgic_v3_get_bpr1(u32 vmcr) { unsigned int bpr; @@ -550,7 +549,7 @@ static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 vmcr) * Convert a priority to a preemption level, 
taking the relevant BPR * into account by zeroing the sub-priority bits. */ -static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) +static u8 __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) { unsigned int bpr; @@ -568,7 +567,7 @@ static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) * matter what the guest does with its BPR, we can always set/get the * same value of a priority. */ -static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) +static void __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) { u8 pre, ap; u32 val; @@ -587,7 +586,7 @@ static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) } } -static int __hyp_text __vgic_v3_clear_highest_active_priority(void) +static int __vgic_v3_clear_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -625,7 +624,7 @@ static int __hyp_text __vgic_v3_clear_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; u8 lr_prio, pmr; @@ -661,7 +660,7 @@ static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int r vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS); } -static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) +static void __vgic_v3_clear_active_lr(int lr, u64 lr_val) { lr_val &= ~ICH_LR_ACTIVE_BIT; if (lr_val & ICH_LR_HW) { @@ -674,7 +673,7 @@ static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) __gic_v3_set_lr(lr_val, lr); } -static void __hyp_text __vgic_v3_bump_eoicount(void) +static void __vgic_v3_bump_eoicount(void) { u32 hcr; @@ -683,8 +682,7 @@ static void __hyp_text __vgic_v3_bump_eoicount(void) write_gicreg(hcr, ICH_HCR_EL2); } -static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -707,7 +705,7 @@ static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -744,17 +742,17 @@ static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } -static void __hyp_text __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } -static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ -766,7 +764,7 @@ static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ 
-778,17 +776,17 @@ static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr)); } -static void __hyp_text __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr)); } -static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -805,7 +803,7 @@ static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min(); @@ -825,7 +823,7 @@ static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val; @@ -837,7 +835,7 @@ static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val = vcpu_get_reg(vcpu, rt); @@ -847,56 +845,49 @@ static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int __vgic_v3_write_ap1rn(val, n); } -static void __hyp_text __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 3); } -static void __hyp_text __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 3); } -static void __hyp_text __vgic_v3_read_hppir(struct 
kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; int lr, lr_grp, grp; @@ -915,16 +906,14 @@ static void __hyp_text __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } -static void __hyp_text __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; vcpu_set_reg(vcpu, rt, vmcr); } -static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -936,15 +925,13 @@ static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -static void __hyp_text __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = __vgic_v3_get_highest_active_priority(); vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vtr, val; @@ -965,8 +952,7 @@ static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -983,7 +969,7 @@ static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { int rt; u32 esr;

From patchwork Thu Jun 18 12:25:37 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11611973
From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Subject: [PATCH v3 15/15] arm64: kvm: Lift instrumentation restrictions on VHE
Date: Thu, 18 Jun 2020 13:25:37 +0100
Message-Id: <20200618122537.9625-16-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

With VHE and nVHE executable code completely separated, remove the build config that disabled GCOV/KASAN/UBSAN/KCOV instrumentation for VHE: that code now executes under the same memory mappings as the rest of the kernel, so instrumentation is safe to re-enable. No violations are currently reported by either KASAN or UBSAN.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/Makefile | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 5f4f217532e0..cd0c3936d266 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -11,11 +11,3 @@ obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o
 
 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
-
-# KVM code is run at a different exception code with a different map, so
-# compiler instrumentation that inserts callbacks or checks into the code may
-# cause crashes. Just disable it.
-GCOV_PROFILE := n
-KASAN_SANITIZE := n
-UBSAN_SANITIZE := n
-KCOV_INSTRUMENT := n
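
[Editor's illustration] As a closing sketch of the build-rule mechanism the series relies on (the HYPCOPY rule added to arch/arm64/kvm/hyp/nvhe/Makefile earlier in this archive), the commands below reproduce the objcopy step stand-alone. Only the two objcopy flags are taken from the patch; the aarch64-linux-gnu- toolchain prefix, the file names, and the demo function are illustrative assumptions, not part of the kernel build.

  # demo.c stands in for any nVHE source file, e.g.:
  #   int answer(void) { return 42; }

  # 1) Compile normally. The nVHE Makefile achieves the old 'notrace' and
  #    '__noscs' per-function effect globally, by filtering CC_FLAGS_FTRACE
  #    and CC_FLAGS_SCS out of KBUILD_CFLAGS.
  aarch64-linux-gnu-gcc -c demo.c -o demo.hyp.tmp.o

  # 2) HYPCOPY: prefix every symbol and move the code into .hyp.text,
  #    replacing the per-function __hyp_text section annotation.
  aarch64-linux-gnu-objcopy --prefix-symbols=__kvm_nvhe_ \
          --rename-section=.text=.hyp.text \
          demo.hyp.tmp.o demo.hyp.o

  # 3) Inspect the result: the section table now lists .hyp.text and the
  #    symbol table shows __kvm_nvhe_answer.
  aarch64-linux-gnu-objdump -h demo.hyp.o
  aarch64-linux-gnu-nm demo.hyp.o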
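
[Editor's illustration] A hedged post-build sanity check for this final patch: with instrumentation re-enabled for VHE only, sanitizer references should appear, if at all, only in the VHE objects. The object paths follow the tree layout used by this series, and the __asan_* helper names assume a generic-KASAN build (CONFIG_KASAN=y); treat this as a sketch rather than an exact recipe.

  # VHE objects are now built like the rest of the kernel and may
  # reference KASAN helpers:
  nm arch/arm64/kvm/hyp/switch.o | grep __asan

  # nVHE objects keep instrumentation disabled, so this should print
  # nothing:
  nm arch/arm64/kvm/hyp/nvhe/switch.hyp.o | grep __asan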