From patchwork Thu May 19 13:40:36 2022
X-Patchwork-Id: 12854971
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 01/89] KVM: arm64: Handle all ID registers trapped for a protected VM
Date: Thu, 19 May 2022 14:40:36 +0100
Message-Id: <20220519134204.5379-2-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
From: Marc Zyngier

A protected VM accessing ID_AA64ISAR2_EL1 gets punished with an UNDEF,
while it really should only get a zero back if the register is not
handled by the hypervisor emulation (as mandated by the architecture).

Introduce all the missing ID registers (including the unallocated ones),
and have them return 0.

Reported-by: Will Deacon
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/sys_regs.c | 42 ++++++++++++++++++++++++------
 1 file changed, 34 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 33f5181af330..188fed1c174b 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -246,15 +246,9 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 	case SYS_ID_AA64MMFR2_EL1:
 		return get_pvm_id_aa64mmfr2(vcpu);
 	default:
-		/*
-		 * Should never happen because all cases are covered in
-		 * pvm_sys_reg_descs[].
-		 */
-		WARN_ON(1);
-		break;
+		/* Unhandled ID register, RAZ */
+		return 0;
 	}
-
-	return 0;
 }
 
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
@@ -335,6 +329,16 @@ static bool pvm_gic_read_sre(struct kvm_vcpu *vcpu,
 /* Mark the specified system register as an AArch64 feature id register. */
 #define AARCH64(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch64 }
 
+/*
+ * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
+ * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
+ * (1 <= crm < 8, 0 <= Op2 < 8).
+ */
+#define ID_UNALLOCATED(crm, op2) {			\
+	Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2),	\
+	.access = pvm_access_id_aarch64,		\
+}
+
 /* Mark the specified system register as Read-As-Zero/Write-Ignored */
 #define RAZ_WI(REG) { SYS_DESC(REG), .access = pvm_access_raz_wi }
 
@@ -378,24 +382,46 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = {
 	AARCH32(SYS_MVFR0_EL1),
 	AARCH32(SYS_MVFR1_EL1),
 	AARCH32(SYS_MVFR2_EL1),
+	ID_UNALLOCATED(3,3),
 	AARCH32(SYS_ID_PFR2_EL1),
 	AARCH32(SYS_ID_DFR1_EL1),
 	AARCH32(SYS_ID_MMFR5_EL1),
+	ID_UNALLOCATED(3,7),
 
 	/* AArch64 ID registers */
 	/* CRm=4 */
 	AARCH64(SYS_ID_AA64PFR0_EL1),
 	AARCH64(SYS_ID_AA64PFR1_EL1),
+	ID_UNALLOCATED(4,2),
+	ID_UNALLOCATED(4,3),
 	AARCH64(SYS_ID_AA64ZFR0_EL1),
+	ID_UNALLOCATED(4,5),
+	ID_UNALLOCATED(4,6),
+	ID_UNALLOCATED(4,7),
 	AARCH64(SYS_ID_AA64DFR0_EL1),
 	AARCH64(SYS_ID_AA64DFR1_EL1),
+	ID_UNALLOCATED(5,2),
+	ID_UNALLOCATED(5,3),
 	AARCH64(SYS_ID_AA64AFR0_EL1),
 	AARCH64(SYS_ID_AA64AFR1_EL1),
+	ID_UNALLOCATED(5,6),
+	ID_UNALLOCATED(5,7),
 	AARCH64(SYS_ID_AA64ISAR0_EL1),
 	AARCH64(SYS_ID_AA64ISAR1_EL1),
+	AARCH64(SYS_ID_AA64ISAR2_EL1),
+	ID_UNALLOCATED(6,3),
+	ID_UNALLOCATED(6,4),
+	ID_UNALLOCATED(6,5),
+	ID_UNALLOCATED(6,6),
+	ID_UNALLOCATED(6,7),
 	AARCH64(SYS_ID_AA64MMFR0_EL1),
 	AARCH64(SYS_ID_AA64MMFR1_EL1),
 	AARCH64(SYS_ID_AA64MMFR2_EL1),
+	ID_UNALLOCATED(7,3),
+	ID_UNALLOCATED(7,4),
+	ID_UNALLOCATED(7,5),
+	ID_UNALLOCATED(7,6),
+	ID_UNALLOCATED(7,7),
 
 	/* Scalable Vector Registers are restricted. */
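
For illustration (not part of the diff above): ID_UNALLOCATED(4,2) describes
the architecturally unallocated encoding sitting between ID_AA64PFR1_EL1 and
ID_AA64ZFR0_EL1, and expands to a sys_reg_desc along these lines:

	/* Sketch of the ID_UNALLOCATED(4,2) expansion. */
	{
		Op0(3), Op1(0), CRn(0), CRm(4), Op2(2),
		.access = pvm_access_id_aarch64,	/* reads back as zero */
	}

Together with the new RAZ default in pvm_read_id_reg(), every ID-space read
from a protected guest now resolves to either an emulated value or zero,
never an UNDEF.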
From patchwork Thu May 19 13:40:37 2022
X-Patchwork-Id: 12854970
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 02/89] KVM: arm64: Remove redundant hyp_assert_lock_held() assertions
Date: Thu, 19 May 2022 14:40:37 +0100
Message-Id: <20220519134204.5379-3-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
host_stage2_try() asserts that the KVM host lock is held, so there's
no need to duplicate the assertion in its wrappers.

Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 78edf077fa3b..1e78acf9662e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -314,15 +314,11 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 			     enum kvm_pgtable_prot prot)
 {
-	hyp_assert_lock_held(&host_kvm.lock);
-
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	hyp_assert_lock_held(&host_kvm.lock);
-
 	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_kvm.pgt,
 			       addr, size, &host_s2_pool, owner_id);
 }
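
For context (paraphrased, not shown in the diff): host_stage2_try() is a
macro in the same file that performs the assertion once before invoking the
page-table operation, retrying after reclaim on -ENOMEM. A rough sketch of
its shape, under that assumption:

	#define host_stage2_try(fn, ...)			\
	({							\
		int __ret;					\
		hyp_assert_lock_held(&host_kvm.lock);		\
		__ret = fn(__VA_ARGS__);			\
		/* ... -ENOMEM retry handling elided ... */	\
		__ret;						\
	})

Since every wrapper funnels through it, the per-wrapper assertions added
nothing.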
From patchwork Thu May 19 13:40:38 2022
X-Patchwork-Id: 12854972
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 03/89] KVM: arm64: Return error from kvm_arch_init_vm() on allocation failure
Date: Thu, 19 May 2022 14:40:38 +0100
Message-Id: <20220519134204.5379-4-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

If we fail to allocate the 'supported_cpus' cpumask in kvm_arch_init_vm()
then be sure to return -ENOMEM instead of success (0) on the failure path.
Signed-off-by: Will Deacon
Reviewed-by: Alexandru Elisei
---
 arch/arm64/kvm/arm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 523bc934fe2f..775b52871b51 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -146,8 +146,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out_free_stage2_pgd;
 
-	if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL))
+	if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) {
+		ret = -ENOMEM;
 		goto out_free_stage2_pgd;
+	}
 	cpumask_copy(kvm->arch.supported_cpus, cpu_possible_mask);
 
 	kvm_vgic_early_init(kvm);
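
The bug class being fixed is worth spelling out. A standalone sketch with
hypothetical names: 'ret' still holds 0 from the preceding success when the
allocation fails, so jumping to the error label silently reports success.

	/* Illustration only; setup_foo()/alloc_bar() are made-up names. */
	int init_vm_like(struct vm *vm)
	{
		int ret;

		ret = setup_foo(vm);	/* leaves ret == 0 on success */
		if (ret)
			goto out_free;

		if (!alloc_bar(vm)) {
			ret = -ENOMEM;	/* the fix: record the failure */
			goto out_free;
		}
		return 0;

	out_free:
		teardown_foo(vm);
		return ret;		/* without the fix: returns 0 */
	}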
From patchwork Thu May 19 13:40:39 2022
X-Patchwork-Id: 12854973
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    David Brazdil
Subject: [PATCH 04/89] KVM: arm64: Ignore 'kvm-arm.mode=protected' when using VHE
Date: Thu, 19 May 2022 14:40:39 +0100
Message-Id: <20220519134204.5379-5-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

Ignore 'kvm-arm.mode=protected' when using VHE so that kvm_get_mode()
only returns KVM_MODE_PROTECTED on systems where the feature is available.

Cc: David Brazdil
Acked-by: Mark Rutland
Signed-off-by: Will Deacon
---
 Documentation/admin-guide/kernel-parameters.txt |  1 -
 arch/arm64/kernel/cpufeature.c                  | 10 +---------
 arch/arm64/kvm/arm.c                            |  6 +++++-
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3f1cc5e317ed..63a764ec7fec 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2438,7 +2438,6 @@
 			protected: nVHE-based mode with support for guests whose
 				   state is kept private from the host.
-				   Not valid if the kernel is running in EL2.
 
 			Defaults to VHE/nVHE based on hardware support. Setting
 			mode to "protected" will disable kexec and hibernation
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d72c4b4d389c..1bbb7cfc76df 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1925,15 +1925,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 #ifdef CONFIG_KVM
 static bool is_kvm_protected_mode(const struct arm64_cpu_capabilities *entry, int __unused)
 {
-	if (kvm_get_mode() != KVM_MODE_PROTECTED)
-		return false;
-
-	if (is_kernel_in_hyp_mode()) {
-		pr_warn("Protected KVM not available with VHE\n");
-		return false;
-	}
-
-	return true;
+	return kvm_get_mode() == KVM_MODE_PROTECTED;
 }
 #endif /* CONFIG_KVM */
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 775b52871b51..7f8731306c2a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2167,7 +2167,11 @@ static int __init early_kvm_mode_cfg(char *arg)
 		return -EINVAL;
 
 	if (strcmp(arg, "protected") == 0) {
-		kvm_mode = KVM_MODE_PROTECTED;
+		if (!is_kernel_in_hyp_mode())
+			kvm_mode = KVM_MODE_PROTECTED;
+		else
+			pr_warn_once("Protected KVM not available with VHE\n");
+
 		return 0;
 	}
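
For context (an assumption drawn from the surrounding file, not shown in
the diff): early_kvm_mode_cfg() is registered as an early parameter, so the
VHE check now runs at command-line parsing time rather than during
capability finalisation:

	/* Registration sketch; believed to match arch/arm64/kvm/arm.c. */
	early_param("kvm-arm.mode", early_kvm_mode_cfg);

The net effect is that booting a VHE kernel with kvm-arm.mode=protected
keeps the default mode and warns once, instead of leaving kvm_get_mode()
claiming a protected configuration that was never enabled.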
From patchwork Thu May 19 13:40:40 2022
X-Patchwork-Id: 12854974
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    David Brazdil
Subject: [PATCH 05/89] KVM: arm64: Extend comment in has_vhe()
Date: Thu, 19 May 2022 14:40:40 +0100
Message-Id: <20220519134204.5379-6-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

has_vhe() expands to a compile-time constant when evaluated from the VHE
or nVHE code, alternatively checking a static key when called from
elsewhere in the kernel. At face value, this looks like a case of
premature optimization, but in fact this allows symbol references on
VHE-specific code paths to be dropped from the nVHE object.

Expand the comment in has_vhe() to make this clearer, hopefully
discouraging anybody from simplifying the code.

Cc: David Brazdil
Acked-by: Mark Rutland
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/virt.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 3c8af033a997..0e80db4327b6 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -113,6 +113,9 @@ static __always_inline bool has_vhe(void)
 	/*
 	 * Code only run in VHE/NVHE hyp context can assume VHE is present or
 	 * absent. Otherwise fall back to caps.
+	 * This allows the compiler to discard VHE-specific code from the
+	 * nVHE object, reducing the number of external symbol references
+	 * needed to link.
 	 */
 	if (is_vhe_hyp_code())
 		return true;
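
A minimal standalone illustration of the linker-facing trick the comment
documents (hypothetical names throughout): when the predicate folds to a
compile-time constant, dead-code elimination removes the call and, with it,
the undefined-symbol reference from the object.

	/* Sketch: BUILDING_NVHE and vhe_only_helper() are made up. */
	static __always_inline bool picked_vhe(void)
	{
	#ifdef BUILDING_NVHE
		return false;		/* constant: branch below is dead */
	#else
		return check_caps();	/* runtime fallback elsewhere */
	#endif
	}

	void entry(void)
	{
		if (picked_vhe())
			vhe_only_helper();	/* no relocation emitted in nVHE */
	}

If has_vhe() were simplified to always take the runtime path, the nVHE
object would suddenly need definitions for every such helper.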
From patchwork Thu May 19 13:40:41 2022
X-Patchwork-Id: 12854975
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 06/89] KVM: arm64: Drop stale comment
Date: Thu, 19 May 2022 14:40:41 +0100
Message-Id: <20220519134204.5379-7-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
From: Marc Zyngier

The layout of 'struct kvm_vcpu_arch' has evolved significantly since
the initial port of KVM/arm64, so remove the stale comment suggesting
that a prefix of the structure is used exclusively from assembly code.

Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e3b25dc6c367..14ed7c7ad797 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -339,11 +339,6 @@ struct kvm_vcpu_arch {
 	struct arch_timer_cpu timer_cpu;
 	struct kvm_pmu pmu;
 
-	/*
-	 * Anything that is not used directly from assembly code goes
-	 * here.
-	 */
-
 	/*
 	 * Guest registers we preserve during guest debugging.
 	 *
From patchwork Thu May 19 13:40:42 2022
X-Patchwork-Id: 12854976
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 07/89] KVM: arm64: Move hyp refcount manipulation helpers
Date: Thu, 19 May 2022 14:40:42 +0100
Message-Id: <20220519134204.5379-8-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

We will soon need to manipulate struct hyp_page refcounts from outside
page_alloc.c, so move the helpers to a header file.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 18 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 19 -------------------
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 592b7edb3edb..418b66a82a50 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -45,4 +45,22 @@ static inline int hyp_page_count(void *addr)
 	return p->refcount;
 }
 
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+	BUG_ON(p->refcount == USHRT_MAX);
+	p->refcount++;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	BUG_ON(!p->refcount);
+	p->refcount--;
+	return (p->refcount == 0);
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+	BUG_ON(p->refcount);
+	p->refcount = 1;
+}
 #endif /* __KVM_HYP_MEMORY_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index d40f0b30b534..1ded09fc9b10 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -144,25 +144,6 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }
 
-static inline void hyp_page_ref_inc(struct hyp_page *p)
-{
-	BUG_ON(p->refcount == USHRT_MAX);
-	p->refcount++;
-}
-
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
-{
-	BUG_ON(!p->refcount);
-	p->refcount--;
-	return (p->refcount == 0);
-}
-
-static inline void hyp_set_page_refcounted(struct hyp_page *p)
-{
-	BUG_ON(p->refcount);
-	p->refcount = 1;
-}
-
 static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
 {
 	if (hyp_page_ref_dec_and_test(p))
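
As an example of how these helpers are consumed (the callee on the second
line is paraphrased from the allocator rather than shown in the hunk
above), the buddy-allocator put path in page_alloc.c keys off the
dec-and-test primitive that now lives in memory.h:

	static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
	{
		if (hyp_page_ref_dec_and_test(p))
			__hyp_attach_page(pool, p);	/* return to the pool */
	}

Code outside page_alloc.c (e.g. mem_protect.c later in this series) can now
take and drop references on hyp_page structures with the same primitives,
keeping the BUG_ON() overflow/underflow checks in one place.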
From patchwork Thu May 19 13:40:43 2022
X-Patchwork-Id: 12854977
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 08/89] KVM: arm64: Back hyp_vmemmap for all of memory
Date: Thu, 19 May 2022 14:40:43 +0100
Message-Id: <20220519134204.5379-9-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
From: Quentin Perret

The EL2 vmemmap in nVHE Protected mode is currently very sparse: only
memory pages owned by the hypervisor itself have a matching struct
hyp_page. But since the size of these structs has been reduced
significantly, it appears that we can afford backing the vmemmap for
all of memory.

This will considerably simplify memory tracking, as the hypervisor will
have a place to store metadata (e.g. refcounts) that wouldn't otherwise
fit in the 4 SW bits we have in the host stage-2 page-table, for
instance.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pkvm.h    | 26 +++++++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 14 +------------
 arch/arm64/kvm/hyp/nvhe/mm.c         | 31 ++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/page_alloc.c |  4 +---
 arch/arm64/kvm/hyp/nvhe/setup.c      |  7 +++----
 arch/arm64/kvm/pkvm.c                | 18 ++--------------
 6 files changed, 60 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 9f4ad2a8df59..8f7b8a2314bb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -14,6 +14,32 @@
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);
 
+static inline unsigned long
+hyp_vmemmap_memblock_size(struct memblock_region *reg, size_t vmemmap_entry_size)
+{
+	unsigned long nr_pages = reg->size >> PAGE_SHIFT;
+	unsigned long start, end;
+
+	start = (reg->base >> PAGE_SHIFT) * vmemmap_entry_size;
+	end = start + nr_pages * vmemmap_entry_size;
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+	end = ALIGN(end, PAGE_SIZE);
+
+	return end - start;
+}
+
+static inline unsigned long hyp_vmemmap_pages(size_t vmemmap_entry_size)
+{
+	unsigned long res = 0, i;
+
+	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
+		res += hyp_vmemmap_memblock_size(&kvm_nvhe_sym(hyp_memory)[i],
+						 vmemmap_entry_size);
+	}
+
+	return res >> PAGE_SHIFT;
+}
+
 static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
 {
 	unsigned long total = 0, i;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 2d08510c6cc1..73309ccc192e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -15,23 +15,11 @@ extern hyp_spinlock_t pkvm_pgd_lock;
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
+int hyp_back_vmemmap(phys_addr_t back);
 int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
 int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot);
 
-static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
-				     unsigned long *start, unsigned long *end)
-{
-	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct hyp_page *p = hyp_phys_to_page(phys);
-
-	*start = (unsigned long)p;
-	*end = *start + nr_pages * sizeof(struct hyp_page);
-	*start = ALIGN_DOWN(*start, PAGE_SIZE);
-	*end = ALIGN(*end, PAGE_SIZE);
-}
-
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index cdbe8e246418..168e7fbe9a3c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -105,13 +105,36 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	return ret;
 }
 
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back)
+int hyp_back_vmemmap(phys_addr_t back)
 {
-	unsigned long start, end;
+	unsigned long i, start, size, end = 0;
+	int ret;
 
-	hyp_vmemmap_range(phys, size, &start, &end);
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		start = hyp_memory[i].base;
+		start = ALIGN_DOWN((u64)hyp_phys_to_page(start), PAGE_SIZE);
+		/*
+		 * The begining of the hyp_vmemmap region for the current
+		 * memblock may already be backed by the page backing the end
+		 * the previous region, so avoid mapping it twice.
+		 */
+		start = max(start, end);
+
+		end = hyp_memory[i].base + hyp_memory[i].size;
+		end = PAGE_ALIGN((u64)hyp_phys_to_page(end));
+		if (start >= end)
+			continue;
+
+		size = end - start;
+		ret = __pkvm_create_mappings(start, size, back, PAGE_HYP);
+		if (ret)
+			return ret;
+
+		memset(hyp_phys_to_virt(back), 0, size);
+		back += size;
+	}
 
-	return __pkvm_create_mappings(start, end - start, back, PAGE_HYP);
+	return 0;
 }
 
 static void *__hyp_bp_vect_base;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 1ded09fc9b10..dc87589440b8 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -230,10 +230,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	for (i = 0; i < nr_pages; i++) {
-		p[i].order = 0;
+	for (i = 0; i < nr_pages; i++)
 		hyp_set_page_refcounted(&p[i]);
-	}
 
 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..7d2b325efb50 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -31,12 +31,11 @@ static struct hyp_pool hpool;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
-	unsigned long vstart, vend, nr_pages;
+	unsigned long nr_pages;
 
 	hyp_early_alloc_init(virt, size);
 
-	hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend);
-	nr_pages = (vend - vstart) >> PAGE_SHIFT;
+	nr_pages = hyp_vmemmap_pages(sizeof(struct hyp_page));
 	vmemmap_base = hyp_early_alloc_contig(nr_pages);
 	if (!vmemmap_base)
 		return -ENOMEM;
@@ -78,7 +77,7 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
-	ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base));
+	ret = hyp_back_vmemmap(hyp_virt_to_phys(vmemmap_base));
 	if (ret)
 		return ret;
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index ebecb7c045f4..34229425b25d 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -53,7 +53,7 @@ static int __init register_memblock_regions(void)
 
 void __init kvm_hyp_reserve(void)
 {
-	u64 nr_pages, prev, hyp_mem_pages = 0;
+	u64 hyp_mem_pages = 0;
 	int ret;
 
 	if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
@@ -71,21 +71,7 @@ void __init kvm_hyp_reserve(void)
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
 	hyp_mem_pages += host_s2_pgtable_pages();
-
-	/*
-	 * The hyp_vmemmap needs to be backed by pages, but these pages
-	 * themselves need to be present in the vmemmap, so compute the number
-	 * of pages needed by looking for a fixed point.
-	 */
-	nr_pages = 0;
-	do {
-		prev = nr_pages;
-		nr_pages = hyp_mem_pages + prev;
-		nr_pages = DIV_ROUND_UP(nr_pages * STRUCT_HYP_PAGE_SIZE,
-					PAGE_SIZE);
-		nr_pages += __hyp_pgtable_max_pages(nr_pages);
-	} while (nr_pages != prev);
-	hyp_mem_pages += nr_pages;
+	hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE);
 
 	/*
 	 * Try to allocate a PMD-aligned region to reduce TLB pressure once
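
To get a feel for the cost of backing the whole vmemmap, a worked example
(assuming 4KiB pages and a 4-byte struct hyp_page; both values are
assumptions for illustration): a single page-aligned 1GiB memblock gives

	nr_pages      = 1GiB >> 12  = 262144
	vmemmap bytes = 262144 * 4  = 1MiB
	vmemmap pages = 1MiB >> 12  = 256

i.e. roughly 0.1% of memory, which is why the full backing is affordable.
It also explains why the fixed-point loop removed above is no longer
needed: hyp_vmemmap_pages() depends only on the memblock layout, not on
the number of pages being reserved, so there is nothing to iterate on.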
From patchwork Thu May 19 13:40:44 2022
X-Patchwork-Id: 12854978
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 09/89] KVM: arm64: Unify identifiers used to distinguish host and hypervisor
Date: Thu, 19 May 2022 14:40:44 +0100
Message-Id: <20220519134204.5379-10-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

The 'pkvm_component_id' enum type provides constants to refer to the
host and the hypervisor, yet this information is duplicated by the
'pkvm_hyp_id' constant.

Remove the definition of 'pkvm_hyp_id' and move the 'pkvm_component_id'
type definition to 'mem_protect.h' so that it can be used outside of
the memory protection code.

Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 6 +++++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 8 --------
 arch/arm64/kvm/hyp/nvhe/setup.c               | 2 +-
 3 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 80e99836eac7..f5705a1e972f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -51,7 +51,11 @@ struct host_kvm {
 };
 extern struct host_kvm host_kvm;
 
-extern const u8 pkvm_hyp_id;
+/* This corresponds to page-table locking order */
+enum pkvm_component_id {
+	PKVM_ID_HOST,
+	PKVM_ID_HYP,
+};
 
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 1e78acf9662e..ff86f5bd230f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -26,8 +26,6 @@ struct host_kvm host_kvm;
 
 static struct hyp_pool host_s2_pool;
 
-const u8 pkvm_hyp_id = 1;
-
 static void host_lock_component(void)
 {
 	hyp_spin_lock(&host_kvm.lock);
@@ -380,12 +378,6 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	BUG_ON(ret && ret != -EAGAIN);
 }
 
-/* This corresponds to locking order */
-enum pkvm_component_id {
-	PKVM_ID_HOST,
-	PKVM_ID_HYP,
-};
-
 struct pkvm_mem_transition {
 	u64 nr_pages;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7d2b325efb50..311197a223e6 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -197,7 +197,7 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
 	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
 	switch (state) {
 	case PKVM_PAGE_OWNED:
-		return host_stage2_set_owner_locked(phys, PAGE_SIZE, pkvm_hyp_id);
+		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
 		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
 		break;
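
A note on the "page-table locking order" comment above (the sketch below is
an assumption about intent, not code from this series): encoding the
components in locking order means transition code can derive a canonical
lock-acquisition order from the IDs alone, e.g.:

	/* Hypothetical helper; component_lock() is made up. */
	static void lock_components(enum pkvm_component_id a,
				    enum pkvm_component_id b)
	{
		if (a > b)
			swap(a, b);	/* lower-numbered component first */
		component_lock(a);
		if (a != b)
			component_lock(b);
	}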
From patchwork Thu May 19 13:40:45 2022
X-Patchwork-Id: 12855041
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 10/89] KVM: arm64: Implement do_donate() helper for donating memory
Date: Thu, 19 May 2022 14:40:45 +0100
Message-Id: <20220519134204.5379-11-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Quentin Perret Transferring ownership information of a memory region from one component to another can be achieved using a "donate" operation, which results in the previous owner losing access to the underlying pages entirely. Implement a do_donate() helper, along the same lines as do_{un,}share, and provide this functionality for the host-{to,from}-hyp cases as this will later be used to donate/reclaim memory pages to store VM metadata at EL2. Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 239 ++++++++++++++++++ 2 files changed, 241 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index f5705a1e972f..c87b19b2d468 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -60,6 +60,8 @@ enum pkvm_component_id { int __pkvm_prot_finalize(void); int __pkvm_host_share_hyp(u64 pfn); int __pkvm_host_unshare_hyp(u64 pfn); +int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); +int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index ff86f5bd230f..c30402737548 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -391,6 +391,9 @@ struct pkvm_mem_transition { /* Address in the completer's address space */ u64 completer_addr; } host; + struct { + u64 completer_addr; + } hyp; }; } initiator; @@ -404,6 +407,10 @@ struct pkvm_mem_share { const enum kvm_pgtable_prot completer_prot; }; +struct pkvm_mem_donation { + const struct pkvm_mem_transition tx; +}; + struct check_walk_data { enum pkvm_page_state desired; enum pkvm_page_state (*get_page_state)(kvm_pte_t pte); @@ -503,6 +510,46 @@ static int host_initiate_unshare(u64 *completer_addr, return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED); } +static int host_initiate_donation(u64 *completer_addr, + const struct pkvm_mem_transition *tx) +{ + u8 owner_id = tx->completer.id; + u64 size = tx->nr_pages * PAGE_SIZE; + + *completer_addr = tx->initiator.host.completer_addr; + return host_stage2_set_owner_locked(tx->initiator.addr, size, owner_id); +} + +static bool __host_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx) +{ + return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) || + tx->initiator.id != PKVM_ID_HYP); +} + +static int __host_ack_transition(u64 addr, const struct pkvm_mem_transition *tx, + enum pkvm_page_state state) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + + if (__host_ack_skip_pgtable_check(tx)) + return 0; + + return __host_check_page_state_range(addr, size, state); +} + +static int host_ack_donation(u64 addr, const struct pkvm_mem_transition *tx) +{ + return __host_ack_transition(addr, tx, PKVM_NOPAGE); +} + +static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + u8 host_id = tx->completer.id; + + return host_stage2_set_owner_locked(addr, size, host_id); +} + static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte) { if (!kvm_pte_valid(pte)) @@ -523,6 +570,27 @@ static int __hyp_check_page_state_range(u64 addr, u64 size, return check_page_state_range(&pkvm_pgtable, addr, size, &d); } +static int 
hyp_request_donation(u64 *completer_addr, + const struct pkvm_mem_transition *tx) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + u64 addr = tx->initiator.addr; + + *completer_addr = tx->initiator.hyp.completer_addr; + return __hyp_check_page_state_range(addr, size, PKVM_PAGE_OWNED); +} + +static int hyp_initiate_donation(u64 *completer_addr, + const struct pkvm_mem_transition *tx) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + int ret; + + *completer_addr = tx->initiator.hyp.completer_addr; + ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, tx->initiator.addr, size); + return (ret != size) ? -EFAULT : 0; +} + static bool __hyp_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx) { return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) || @@ -554,6 +622,16 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx) PKVM_PAGE_SHARED_BORROWED); } +static int hyp_ack_donation(u64 addr, const struct pkvm_mem_transition *tx) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + + if (__hyp_ack_skip_pgtable_check(tx)) + return 0; + + return __hyp_check_page_state_range(addr, size, PKVM_NOPAGE); +} + static int hyp_complete_share(u64 addr, const struct pkvm_mem_transition *tx, enum kvm_pgtable_prot perms) { @@ -572,6 +650,15 @@ static int hyp_complete_unshare(u64 addr, const struct pkvm_mem_transition *tx) return (ret != size) ? -EFAULT : 0; } +static int hyp_complete_donation(u64 addr, + const struct pkvm_mem_transition *tx) +{ + void *start = (void *)addr, *end = start + (tx->nr_pages * PAGE_SIZE); + enum kvm_pgtable_prot prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_OWNED); + + return pkvm_create_mappings_locked(start, end, prot); +} + static int check_share(struct pkvm_mem_share *share) { const struct pkvm_mem_transition *tx = &share->tx; @@ -724,6 +811,94 @@ static int do_unshare(struct pkvm_mem_share *share) return WARN_ON(__do_unshare(share)); } +static int check_donation(struct pkvm_mem_donation *donation) +{ + const struct pkvm_mem_transition *tx = &donation->tx; + u64 completer_addr; + int ret; + + switch (tx->initiator.id) { + case PKVM_ID_HOST: + ret = host_request_owned_transition(&completer_addr, tx); + break; + case PKVM_ID_HYP: + ret = hyp_request_donation(&completer_addr, tx); + break; + default: + ret = -EINVAL; + } + + if (ret) + return ret; + + switch (tx->completer.id){ + case PKVM_ID_HOST: + ret = host_ack_donation(completer_addr, tx); + break; + case PKVM_ID_HYP: + ret = hyp_ack_donation(completer_addr, tx); + break; + default: + ret = -EINVAL; + } + + return ret; +} + +static int __do_donate(struct pkvm_mem_donation *donation) +{ + const struct pkvm_mem_transition *tx = &donation->tx; + u64 completer_addr; + int ret; + + switch (tx->initiator.id) { + case PKVM_ID_HOST: + ret = host_initiate_donation(&completer_addr, tx); + break; + case PKVM_ID_HYP: + ret = hyp_initiate_donation(&completer_addr, tx); + break; + default: + ret = -EINVAL; + } + + if (ret) + return ret; + + switch (tx->completer.id){ + case PKVM_ID_HOST: + ret = host_complete_donation(completer_addr, tx); + break; + case PKVM_ID_HYP: + ret = hyp_complete_donation(completer_addr, tx); + break; + default: + ret = -EINVAL; + } + + return ret; +} + +/* + * do_donate(): + * + * The page owner transfers ownership to another component, losing access + * as a consequence. 
+ * + * Initiator: OWNED => NOPAGE + * Completer: NOPAGE => OWNED + */ +static int do_donate(struct pkvm_mem_donation *donation) +{ + int ret; + + ret = check_donation(donation); + if (ret) + return ret; + + return WARN_ON(__do_donate(donation)); +} + int __pkvm_host_share_hyp(u64 pfn) { int ret; @@ -789,3 +964,67 @@ int __pkvm_host_unshare_hyp(u64 pfn) return ret; } + +int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages) +{ + int ret; + u64 host_addr = hyp_pfn_to_phys(pfn); + u64 hyp_addr = (u64)__hyp_va(host_addr); + struct pkvm_mem_donation donation = { + .tx = { + .nr_pages = nr_pages, + .initiator = { + .id = PKVM_ID_HOST, + .addr = host_addr, + .host = { + .completer_addr = hyp_addr, + }, + }, + .completer = { + .id = PKVM_ID_HYP, + }, + }, + }; + + host_lock_component(); + hyp_lock_component(); + + ret = do_donate(&donation); + + hyp_unlock_component(); + host_unlock_component(); + + return ret; +} + +int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages) +{ + int ret; + u64 host_addr = hyp_pfn_to_phys(pfn); + u64 hyp_addr = (u64)__hyp_va(host_addr); + struct pkvm_mem_donation donation = { + .tx = { + .nr_pages = nr_pages, + .initiator = { + .id = PKVM_ID_HYP, + .addr = hyp_addr, + .hyp = { + .completer_addr = host_addr, + }, + }, + .completer = { + .id = PKVM_ID_HOST, + }, + }, + }; + + host_lock_component(); + hyp_lock_component(); + + ret = do_donate(&donation); + + hyp_unlock_component(); + host_unlock_component(); + + return ret; +}
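To make the two-phase shape of do_donate() concrete, here is a minimal stand-alone C sketch of the same check-then-commit pattern; the flat ownership arrays and every name in it are illustrative stand-ins for the real host/hyp page-table state, not kernel code.

#include <stdint.h>
#include <stdio.h>

/* Toy per-page states mirroring the transition documented above. */
enum page_state { NOPAGE, OWNED };

#define NR_PAGES 8
static enum page_state host_pages[NR_PAGES]; /* initiator's view */
static enum page_state hyp_pages[NR_PAGES];  /* completer's view */

/* Phase 1: verify that every page may legally change hands. */
static int check_donation(uint64_t pfn, uint64_t nr_pages)
{
	for (uint64_t i = 0; i < nr_pages; i++) {
		if (host_pages[pfn + i] != OWNED)  /* initiator must own it */
			return -1;
		if (hyp_pages[pfn + i] != NOPAGE)  /* completer must not */
			return -1;
	}
	return 0;
}

/* Phase 2: commit; by construction this cannot fail once checked. */
static void commit_donation(uint64_t pfn, uint64_t nr_pages)
{
	for (uint64_t i = 0; i < nr_pages; i++) {
		host_pages[pfn + i] = NOPAGE; /* initiator: OWNED => NOPAGE */
		hyp_pages[pfn + i] = OWNED;   /* completer: NOPAGE => OWNED */
	}
}

int main(void)
{
	for (int i = 0; i < NR_PAGES; i++)
		host_pages[i] = OWNED;

	if (check_donation(2, 3) == 0)
		commit_donation(2, 3);

	/* Page 3 has left the host and now belongs to the hypervisor. */
	printf("page 3: host=%d hyp=%d\n", host_pages[3], hyp_pages[3]);
	return 0;
}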
From patchwork Thu May 19 13:40:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855042 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CA9FDC433F5 for ; Thu, 19 May 2022 13:48:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wiDqbhx97rAiT3iC9k6wCL234338EWP1mrHF/cHARgI=; b=wJDixMTz86D0Op mxvzSnRtVgTJISqLfOiPw0Hk2Be+3KGIKWnhsxe4L1qvdFCoeaftY154VOi8qb0+O3btJna28hK3C HDKM0YVkUd5KKsv7z7JEYho9IfUAWX6sxglR6NqjGP0LLcuV5DiwCoadOMbp7dkdDl6UfxpGIWVcb EHqa8lC1iOBBblcEE0aDVIzbBhxsvnfwcAYIZsqnGyji67LDX6+SUYHRNR5aDGYpF8ll5oo4I4xpY V1vTYXcv2M6JzgMVHVrgqZn10JWbsxbfPaK61oexRQVwCmkNkYHm5SHkIvfEfAvkB/rpSueT+TH83 yP9WISaTiy+eH5lHTqsA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgUa-00776k-Oz; Thu, 19 May 2022 13:47:05 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgQn-007596-Rn for linux-arm-kernel@lists.infradead.org; Thu, 19 May 2022 13:43:11 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 561896179E; Thu, 19 May 2022 13:43:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 43F89C34116; Thu, 19 May 2022 13:43:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1652967788; bh=oi7nWzoEc9504XVLiQbs3WOiWyACarQzLX8Qdeo+Dpw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ai111Mz0yHXfWZMGyOqaSqwA9EjJszWyeipWxoXH4bKfwJiONtB6AokrT5YO6fxXy 9u6ezzZPGE4TeufbhdNMQF/ZQ12AY4knHRhVLZTDuKlUkXc5DQIyv55G5TRcdaELC9 rM0UsX8PBisyFeOpGQHneOoOtLkNOfmZ/r7tamviaFfjxKWW+RtYgEIxlIS5np+A2D rLM/stmgcRcPBPaBWg80PvCimoaZG+MkEZ4a9lUdNuWV8XVarZpqr4Z+8tSJZvtHn7 0ZUH9opzUbAo1QFK9461OePcWNuWp2JcLZPAoROtYECwKWOfRfTYGkzbPsvotb8l71 9wltMwOLSDvOw== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH 11/89] KVM: arm64: Prevent the donation of no-map pages Date: Thu, 19 May 2022 14:40:46 +0100 Message-Id: <20220519134204.5379-12-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220519134204.5379-1-will@kernel.org> References: <20220519134204.5379-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220519_064310_072237_E7619719 X-CRM114-Status: GOOD ( 15.95 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Quentin Perret Memory regions marked as no-map in DT routinely include TrustZone carveouts and the like. Although donating such pages to the hypervisor may not breach confidentiality, it could be used to corrupt the hypervisor's state in uncontrollable ways. To prevent this, let's block host-initiated memory transitions targeting no-map pages altogether in nVHE protected mode, as there is currently no valid reason to do this. Thankfully, the pKVM EL2 hypervisor has a full copy of the host's list of memblock regions, which makes it easy to check any given region for the MEMBLOCK_NOMAP flag at EL2.
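As a rough stand-alone illustration of the check being added here, the sketch below models the binary search over the memblock list and the NOMAP filter in plain C; the region table, the flag value and the stripped-down types are assumptions made for the example, not the kernel's definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MEMBLOCK_NOMAP 0x4 /* assumed flag value for the sketch */

struct memblock_region {
	uint64_t base;
	uint64_t size;
	unsigned int flags;
};

/* A toy copy of the host's memblock list, sorted by base address. */
static struct memblock_region hyp_memory[] = {
	{ .base = 0x40000000, .size = 0x20000000, .flags = 0 },
	{ .base = 0x60000000, .size = 0x00200000, .flags = MEMBLOCK_NOMAP },
};
static const int hyp_memblock_nr = 2;

/* Binary search for the region covering addr, as find_mem_range() does. */
static struct memblock_region *find_mem_range(uint64_t addr)
{
	int left = 0, right = hyp_memblock_nr;

	while (left < right) {
		int cur = (left + right) / 2;
		struct memblock_region *reg = &hyp_memory[cur];

		if (addr < reg->base)
			right = cur;
		else if (addr >= reg->base + reg->size)
			left = cur + 1;
		else
			return reg;
	}
	return NULL;
}

/* Host-initiated transitions are refused for no-map memory. */
static bool addr_is_allowed_memory(uint64_t addr)
{
	struct memblock_region *reg = find_mem_range(addr);

	return reg && !(reg->flags & MEMBLOCK_NOMAP);
}

int main(void)
{
	printf("0x50000000 allowed: %d\n", addr_is_allowed_memory(0x50000000)); /* 1 */
	printf("0x60001000 allowed: %d\n", addr_is_allowed_memory(0x60001000)); /* 0 */
	return 0;
}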
Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index c30402737548..a7156fd13bc8 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -193,7 +193,7 @@ struct kvm_mem_range { u64 end; }; -static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) +static struct memblock_region *find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) { int cur, left = 0, right = hyp_memblock_nr; struct memblock_region *reg; @@ -216,18 +216,28 @@ static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) } else { range->start = reg->base; range->end = end; - return true; + return reg; } } - return false; + return NULL; } bool addr_is_memory(phys_addr_t phys) { struct kvm_mem_range range; - return find_mem_range(phys, &range); + return !!find_mem_range(phys, &range); +} + +static bool addr_is_allowed_memory(phys_addr_t phys) +{ + struct memblock_region *reg; + struct kvm_mem_range range; + + reg = find_mem_range(phys, &range); + + return reg && !(reg->flags & MEMBLOCK_NOMAP); } static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range) @@ -346,7 +356,7 @@ static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot pr static int host_stage2_idmap(u64 addr) { struct kvm_mem_range range; - bool is_memory = find_mem_range(addr, &range); + bool is_memory = !!find_mem_range(addr, &range); enum kvm_pgtable_prot prot; int ret; @@ -424,7 +434,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level, struct check_walk_data *d = arg; kvm_pte_t pte = *ptep; - if (kvm_pte_valid(pte) && !addr_is_memory(kvm_pte_to_phys(pte))) + if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte))) return -EINVAL; return d->get_page_state(pte) == d->desired ? 
0 : -EPERM; From patchwork Thu May 19 13:40:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855043 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B3A8BC433F5 for ; Thu, 19 May 2022 13:49:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=OOlAkfHdz0EUPhU2ZeLekp+DTrvysHy//fCJJlHilr0=; b=q5Q41on5z8eJdU GfHEIHFHduR6BCFWdy7p+G+4SBZI9WQa/QiJY9aUORkzRVVHtaOSuBOOxtW5+Zyivhz1Rykq4Zq/4 lUXXKMd1gOdEoiLYeIm4zP3OZ9JpIxfsK+S+nugwtNOpdgG39hqnPXLbRgXuKO/BUg2c+Vy06dPqE 0nntbOSkU8BFX7Y6k3h3LFS1d/FLZ2xgGsUxQU0QYIf16q/3qLQKdPKYCxynuaZkbsDIAcZQLjX6X Vpx5dBjsv8k0pv0jqN2pZNPwootVfuhOUrukL/xaAEMoiQGxg0U8vWDTmpJcaAF70Or58DqgwPyWA HqtROuV7Exyy1U6BuDyQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgVV-0077cS-Kw; Thu, 19 May 2022 13:48:02 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgQr-0075Ap-R4 for linux-arm-kernel@lists.infradead.org; Thu, 19 May 2022 13:43:16 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4D7D46179E; Thu, 19 May 2022 13:43:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3C79AC385AA; Thu, 19 May 2022 13:43:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1652967792; bh=rqJsuNHC5J67qTOpa+Ff2jw1DRLb/EfZKopigsrJd9c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fMS1kSil4cRjmHp3YvnJAetCxK+KRh5iaRd1K7nH1X2T+zTdS70HNVDEjUCOlUbGV vNlL4A81+ZUq6sJTzsFiFwV2d5soK56rywnrVjOCaFQdyq9KG1Ip8io8M48G7jn8UJ ferU1dIeB68hZV6+nxWehiF9CWcvMGOvE1tSzOOhIpl4SEWGO78OMZaaKTJHos01jS FukAuSMC9im33tf6D4i8tRq0fo+VoDEjzbRg9mgKf7d7YsMzEfJ5q3uHU6Qix0lqSW B4WS4BsuERmOIdQNioN9PShWZupP8HgJaEJLLOa2Ng2z0LTjwO0rhUoLOD8kVzRf4Y nV7PsfvN8h0wA== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH 12/89] KVM: arm64: Add helpers to pin memory shared with hyp Date: Thu, 19 May 2022 14:40:47 +0100 Message-Id: <20220519134204.5379-13-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220519134204.5379-1-will@kernel.org> References: <20220519134204.5379-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) 
MR-646709E3 X-CRM114-CacheID: sfid-20220519_064314_017965_8CBDBEE4 X-CRM114-Status: GOOD ( 17.48 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Quentin Perret Add helpers allowing the hypervisor to check whether a range of pages is currently shared by the host, and to 'pin' those pages if so by blocking host unshare operations until the memory has been unpinned. This will allow the hypervisor to take references on host-provided data structures (struct kvm and such) and be guaranteed these pages will remain in a stable state until it decides to release them, e.g. during guest teardown. Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 3 ++ arch/arm64/kvm/hyp/include/nvhe/memory.h | 7 ++- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 48 +++++++++++++++++++ 3 files changed, 57 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index c87b19b2d468..998bf165af71 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -69,6 +69,9 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id); int kvm_host_prepare_stage2(void *pgt_pool_base); void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); +int hyp_pin_shared_mem(void *from, void *to); +void hyp_unpin_shared_mem(void *from, void *to); + static __always_inline void __load_host_stage2(void) { if (static_branch_likely(&kvm_protected_mode_initialized)) diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index 418b66a82a50..e8a78b72aabf 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -51,10 +51,15 @@ static inline void hyp_page_ref_inc(struct hyp_page *p) p->refcount++; } -static inline int hyp_page_ref_dec_and_test(struct hyp_page *p) +static inline void hyp_page_ref_dec(struct hyp_page *p) { BUG_ON(!p->refcount); p->refcount--; +} + +static inline int hyp_page_ref_dec_and_test(struct hyp_page *p) +{ + hyp_page_ref_dec(p); return (p->refcount == 0); } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index a7156fd13bc8..1262dbae7f06 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -625,6 +625,9 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx) { u64 size = tx->nr_pages * PAGE_SIZE; + if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr)) + return -EBUSY; + if (__hyp_ack_skip_pgtable_check(tx)) return 0; @@ -1038,3 +1041,48 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages) return ret; } + +int hyp_pin_shared_mem(void *from, void *to) +{ + u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE); + u64 end = PAGE_ALIGN((u64)to); + u64 size = end - start; + int ret; + + host_lock_component(); + hyp_lock_component(); + + ret = __host_check_page_state_range(__hyp_pa(start), size, + PKVM_PAGE_SHARED_OWNED); + if (ret) + goto unlock; + + ret = __hyp_check_page_state_range(start, size, + PKVM_PAGE_SHARED_BORROWED); + if (ret) + goto unlock; + + for (cur = start; cur < end; cur += PAGE_SIZE) + hyp_page_ref_inc(hyp_virt_to_page(cur)); + +unlock: + hyp_unlock_component(); + host_unlock_component();
+ + return ret; +} + +void hyp_unpin_shared_mem(void *from, void *to) +{ + u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE); + u64 end = PAGE_ALIGN((u64)to); + + host_lock_component(); + hyp_lock_component(); + + for (cur = start; cur < end; cur += PAGE_SIZE) + hyp_page_ref_dec(hyp_virt_to_page(cur)); + + hyp_unlock_component(); + host_unlock_component(); +} From patchwork Thu May 19 13:40:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 66BDEC433EF for ; Thu, 19 May 2022 13:50:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=zwVnm9SapdRzPXIjs5Lh8xbNnG0RBKpfKWT1ARTvlaI=; b=p0wCR/JkVQtD6S xczceRWWH3oqkx5M2fDhNhJXoWsav5kQSkFDp+YTGuzewT+3ZsEzwkOCdULMW/ULV0ON8xIqYFDCP aQueeC6eg8bk00N2pF1d5h45d/r8bdxyeM62ET88KF2jP3yqbp+rmUS6tS7SPyGjCOsOMS5gvBJuF JqHQA7VYb/4Tl8cUb6RsGnGYhdX8Yr8xzkDCsK/9ylQHMNxMGS2kOkt9omRtT0gXpvwHfARkrHO2W c8G3tMjDooxgH4Ec3kB9DO3xqXms8/31TLAaSo4OOW+e7hl93yebUAhrkFk59aoixbmtTKB0SC5CQ y2ziz3ywZAnetT3AL85A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgW5-0077zZ-Ma; Thu, 19 May 2022 13:48:38 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgQv-0075C9-Lf for linux-arm-kernel@lists.infradead.org; Thu, 19 May 2022 13:43:18 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4628A6179A; Thu, 19 May 2022 13:43:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 346F0C34116; Thu, 19 May 2022 13:43:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1652967796; bh=6W42BbvuGKpquFZ1XJASbhkp2wtoef3MgNQDEujq2o8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mMqMPD910AAFCVWgdj65tP95AmXft8uCmnN/fqmZapGFskPRmc+WazL9JMpeVV7ZH gyLGAGjAyZL5t/097OCsEa1DZ8P40iucSTM2vu9g5lQxjBbrwKNrrvNo8JQs2ZVOXf ev6BsDvEeosHfwV3eUT/oCTJmUI0zviOHWwrkiWk+ZMomFD/d/gCtE5oMFzLhpTWUn 8tLhsCYUJ0vzm7NVNypXL66TQBRSGtLB3EeuDfJ8sHmwhP/d9osUXgyCsUKnQQ3y1G EX9Rq99h5ZRfKCBLvRi8fJ+ILdeCzq0T7SMqTODWiRixoa7CtHGTAjao7ZGE6vNbxj cvhtSD/gdzn2A== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH 13/89] KVM: arm64: Include 
asm/kvm_mmu.h in nvhe/mem_protect.h Date: Thu, 19 May 2022 14:40:48 +0100 Message-Id: <20220519134204.5379-14-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220519134204.5379-1-will@kernel.org> References: <20220519134204.5379-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220519_064317_800772_E4570CDF X-CRM114-Status: GOOD ( 12.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org nvhe/mem_protect.h refers to __load_stage2() in the definition of __load_host_stage2() but doesn't include the relevant header. Include asm/kvm_mmu.h in nvhe/mem_protect.h so that users of the latter don't have to do this themselves. Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 998bf165af71..3bea816296dc 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -8,6 +8,7 @@ #define __KVM_NVHE_MEM_PROTECT__ #include <linux/kvm_host.h> #include <asm/kvm_hyp.h> +#include <asm/kvm_mmu.h> #include <asm/kvm_pgtable.h> #include <asm/virt.h> #include <nvhe/spinlock.h> From patchwork Thu May 19 13:40:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 19EF5C433F5 for ; Thu, 19 May 2022 13:50:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ZHrYhfH3+ypioDLz1hU3caONBzRweT+4wxcoBwbLwqc=; b=O2CXRXWdnx8pKC EH3prxsUKTFpzVn6SkXlCt6lWYbDFfnRlkejPy5R36AAJoToQuc2onfKbUcyKAi97fM2X5fAsSjIX fZZ60awNs7xTGfstlQvm/M+9E8wQ6QWRryqokqP+I4+/ny+2+ui64Fx/5WbSl/Iji00bMrT9yzMl0 HjOfhKM3qwYdT5l/cYJ+Hz1hxgVJQLqn6otYgcFVD6Nu3M7XBRvS35MK/tPSdYea/DZoOQVG3RyYM nXEqfZSQfZwCDjSSyRUCRERGQLWBwipPjJrkbNfOdqqDljpcl7tYAEYeqAbTZqlMp0S+x/xAC0Wzv 6WmmDUjU6Ap4Z/Uya7zQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgWs-0078OU-A8; Thu, 19 May 2022 13:49:26 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgR1-0075ET-AE for linux-arm-kernel@lists.infradead.org; Thu, 19 May 2022 13:43:25 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id EDB2AB82477; Thu, 19 May 2022 13:43:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2D2A7C3411A; Thu, 19 May 2022 13:43:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1652967800; bh=nNZBdOA5NigbPc1M0gOv4ihVOwnmvSo9vvj+XEN8QEE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j7NoBFU8bwOn6I+AuPUl7vI/GLMz9EE1gyvmRDAPQCAsqxaCXQgvzkWTxTMlEeU4b nKCxfCfk3UAb5Suzl20gX+EIh86HcGnn/OVC84fZm+c7yPrBTo3kkTEYMFXPqY6oFW Yrf5KPq7DWjcBRqtOYodbQ/2zfa6ms8Hj++BSXMMV18576Jr+Ka305u6AiCBu0sH6E q2B6ltG0V+vBJyh0Cgl3M7bU1BaqLxC1QbN+a9R5qehpeFQV1QOk8+LxfmURxMkP7T XI1N9IfmeI2Vjenox9AJWKR12NwgBNPZqsfx/CCNqX0/MBm6hSRr5GzRoWB0ObjjuD 40tH47LOOXvkw== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH 14/89] KVM: arm64: Add hyp_spinlock_t static initializer Date: Thu, 19 May 2022 14:40:49 +0100 Message-Id: <20220519134204.5379-15-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220519134204.5379-1-will@kernel.org> References: <20220519134204.5379-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220519_064323_570224_29CA02E4 X-CRM114-Status: GOOD ( 13.31 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Fuad Tabba Having a static initializer for hyp_spinlock_t simplifies its use when there isn't an initializing function. No functional change intended. 
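As a stand-alone sketch of the pattern (with invented demo_* names rather than the kernel's types), the following shows how such a static initializer lets a file-scope lock be defined without any init call; like the hyp_spinlock_t version below, the DEFINE macro leans on the GNU C extension that allows compound literals in static initializers.

#include <stdint.h>

/* A union-based lock word, loosely modelled on hyp_spinlock_t. */
typedef union demo_spinlock {
	uint32_t __val;
	struct {
		uint16_t owner, next;
	};
} demo_spinlock_t;

#define __DEMO_SPIN_LOCK_INITIALIZER \
	{ .__val = 0 }

#define __DEMO_SPIN_LOCK_UNLOCKED \
	((demo_spinlock_t) __DEMO_SPIN_LOCK_INITIALIZER)

/* Define and statically initialize a lock in one step. */
#define DEFINE_DEMO_SPINLOCK(x) demo_spinlock_t x = __DEMO_SPIN_LOCK_UNLOCKED

/* Usable immediately; no initializing function is ever called. */
static DEFINE_DEMO_SPINLOCK(demo_lock);

/* Runtime (re)initialization reuses the same constant. */
static inline void demo_spin_lock_init(demo_spinlock_t *l)
{
	*l = __DEMO_SPIN_LOCK_UNLOCKED;
}

int main(void)
{
	demo_spinlock_t another;

	demo_spin_lock_init(&another);
	return (int)(demo_lock.__val + another.__val); /* both unlocked: 0 */
}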
Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h index 4652fd04bdbe..7c7ea8c55405 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h +++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h @@ -28,9 +28,17 @@ typedef union hyp_spinlock { }; } hyp_spinlock_t; +#define __HYP_SPIN_LOCK_INITIALIZER \ + { .__val = 0 } + +#define __HYP_SPIN_LOCK_UNLOCKED \ + ((hyp_spinlock_t) __HYP_SPIN_LOCK_INITIALIZER) + +#define DEFINE_HYP_SPINLOCK(x) hyp_spinlock_t x = __HYP_SPIN_LOCK_UNLOCKED + #define hyp_spin_lock_init(l) \ do { \ - *(l) = (hyp_spinlock_t){ .__val = 0 }; \ + *(l) = __HYP_SPIN_LOCK_UNLOCKED; \ } while (0) static inline void hyp_spin_lock(hyp_spinlock_t *lock) From patchwork Thu May 19 13:40:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 72A9DC433FE for ; Thu, 19 May 2022 13:52:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=E7p+NkQc7RoawHphns6hWEKb85k2QxIRRT6BeXVXTTE=; b=iMfxm/bJVfMKu7 c3nJNR3ZNxgrRnryd0b+GaulTUjzEPcT79RwYdDt0LYoP8LkrEuXTAzqlSx5QCPp4U+SJoN+sAsS3 1Z1e5vx8ZJe1z04Pgahec6q+V+BC82UGcHvqFwh0Opzt/Gj7UNo5zuJfBBMb1ZIhCKVRNGl/T/rp4 5a2uqWAyY80DOHlURal42jpd6Sk0Pu5L0PVH0Ae7KbV1FGhH9tvgbu48LwYUuys/PuopV7t/pXIMd GsjZPDifBYE72/FbebYmLrrMM8NV3x9BP+k98y7EMRTTvITms20k1XucBFI0i/QDxhtpBwEurFSLF 6P+lQGqmTpv76JP3zgcw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgYO-00797Z-Pb; Thu, 19 May 2022 13:51:02 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nrgR6-0075H0-R4 for linux-arm-kernel@lists.infradead.org; Thu, 19 May 2022 13:43:32 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 7B36BCE238E; Thu, 19 May 2022 13:43:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2739AC385AA; Thu, 19 May 2022 13:43:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1652967804; bh=4N4bcwyFZJoL+b/ml2TlF/N5TGi04swBT24h71w3TjA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Fe82ql/Zl3yxH8/uCzba9Q/59mAp9VvG4sh1jwT2thAMpfz/qW44LspbnHIqvluxR 8F/ytVRJvJul4abRmQX3Xz5not02ugsKkyMdam9OetWmOiMwwpP59RvaAtH+SItnPF 4eDfW1plyi8HEG6CbKIgF8ZZ18RLFS+A0hbTtfu2cbyxE2PPbRDc9/WpYAJpT4Xrou BZt9+h9hDHXZaOM5wA5P5y10pir0a6Zixj4Kz2nIXU7BUyArkt8MDMUXZU2LnDwaDw 
97YbI3dPdy/fGj7xG91LtAYn3VmsQnCPzCW0w85BDqCEfeGdkOebkU0TdsLh5PiydY Cs7J14af0PAfw== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH 15/89] KVM: arm64: Introduce shadow VM state at EL2 Date: Thu, 19 May 2022 14:40:50 +0100 Message-Id: <20220519134204.5379-16-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220519134204.5379-1-will@kernel.org> References: <20220519134204.5379-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220519_064329_306965_8F9DA094 X-CRM114-Status: GOOD ( 31.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Fuad Tabba Introduce a table of shadow VM structures at EL2 and provide hypercalls to the host for creating and destroying shadow VMs. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 2 + arch/arm64/include/asm/kvm_host.h | 6 + arch/arm64/include/asm/kvm_pgtable.h | 8 + arch/arm64/include/asm/kvm_pkvm.h | 8 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 3 + arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 61 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 + arch/arm64/kvm/hyp/nvhe/pkvm.c | 398 ++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/setup.c | 8 + arch/arm64/kvm/hyp/pgtable.c | 9 + arch/arm64/kvm/pkvm.c | 1 + 12 files changed, 539 insertions(+) create mode 100644 arch/arm64/kvm/hyp/include/nvhe/pkvm.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index d5b0386ef765..f5030e88eb58 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -76,6 +76,8 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs, __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps, + __KVM_HOST_SMCCC_FUNC___pkvm_init_shadow, + __KVM_HOST_SMCCC_FUNC___pkvm_teardown_shadow, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 14ed7c7ad797..9ba721fc1600 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -101,6 +101,10 @@ struct kvm_s2_mmu { struct kvm_arch_memory_slot { }; +struct kvm_protected_vm { + unsigned int shadow_handle; +}; + struct kvm_arch { struct kvm_s2_mmu mmu; @@ -140,6 +144,8 @@ struct kvm_arch { u8 pfr0_csv2; u8 pfr0_csv3; + + struct kvm_protected_vm pkvm; }; struct kvm_vcpu_fault_info { diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 9f339dffbc1a..2d6b5058f7d3 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -288,6 +288,14 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); */ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift); +/* + * kvm_pgtable_stage2_pgd_size() - Helper to compute size of a stage-2 PGD + * @vtcr: Content of the VTCR register. 
+ * + * Return: the size (in bytes) of the stage-2 PGD + */ +size_t kvm_pgtable_stage2_pgd_size(u64 vtcr); + /** * __kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table. * @pgt: Uninitialised page-table structure to initialise. diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index 8f7b8a2314bb..11526e89fe5c 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -9,6 +9,9 @@ #include #include +/* Maximum number of protected VMs that can be created. */ +#define KVM_MAX_PVMS 255 + #define HYP_MEMBLOCK_REGIONS 128 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[]; @@ -40,6 +43,11 @@ static inline unsigned long hyp_vmemmap_pages(size_t vmemmap_entry_size) return res >> PAGE_SHIFT; } +static inline unsigned long hyp_shadow_table_pages(void) +{ + return PAGE_ALIGN(KVM_MAX_PVMS * sizeof(void *)) >> PAGE_SHIFT; +} + static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages) { unsigned long total = 0, i; diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 3bea816296dc..3a0817b5c739 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -11,6 +11,7 @@ #include #include #include +#include #include /* @@ -68,10 +69,12 @@ bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id); int kvm_host_prepare_stage2(void *pgt_pool_base); +int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd); void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); int hyp_pin_shared_mem(void *from, void *to); void hyp_unpin_shared_mem(void *from, void *to); +void reclaim_guest_pages(struct kvm_shadow_vm *vm); static __always_inline void __load_host_stage2(void) { diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h new file mode 100644 index 000000000000..dc06b043bd83 --- /dev/null +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Google LLC + * Author: Fuad Tabba + */ + +#ifndef __ARM64_KVM_NVHE_PKVM_H__ +#define __ARM64_KVM_NVHE_PKVM_H__ + +#include + +/* + * Holds the relevant data for maintaining the vcpu state completely at hyp. + */ +struct kvm_shadow_vcpu_state { + /* The data for the shadow vcpu. */ + struct kvm_vcpu shadow_vcpu; + + /* A pointer to the host's vcpu. */ + struct kvm_vcpu *host_vcpu; + + /* A pointer to the shadow vm. */ + struct kvm_shadow_vm *shadow_vm; +}; + +/* + * Holds the relevant data for running a protected vm. + */ +struct kvm_shadow_vm { + /* The data for the shadow kvm. */ + struct kvm kvm; + + /* The host's kvm structure. */ + struct kvm *host_kvm; + + /* The total size of the donated shadow area. */ + size_t shadow_area_size; + + struct kvm_pgtable pgt; + + /* Array of the shadow state per vcpu. 
*/ + struct kvm_shadow_vcpu_state shadow_vcpu_states[0]; +}; + +static inline struct kvm_shadow_vcpu_state *get_shadow_state(struct kvm_vcpu *shadow_vcpu) +{ + return container_of(shadow_vcpu, struct kvm_shadow_vcpu_state, shadow_vcpu); +} + +static inline struct kvm_shadow_vm *get_shadow_vm(struct kvm_vcpu *shadow_vcpu) +{ + return get_shadow_state(shadow_vcpu)->shadow_vm; +} + +void hyp_shadow_table_init(void *tbl); +int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, + size_t shadow_size, unsigned long pgd_hva); +int __pkvm_teardown_shadow(unsigned int shadow_handle); + +#endif /* __ARM64_KVM_NVHE_PKVM_H__ */ + diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 5e2197db0d32..8e51cdab00b7 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -15,6 +15,7 @@ #include #include +#include #include DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); @@ -175,6 +176,24 @@ static void handle___pkvm_vcpu_init_traps(struct kvm_cpu_context *host_ctxt) __pkvm_vcpu_init_traps(kern_hyp_va(vcpu)); } +static void handle___pkvm_init_shadow(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(struct kvm *, host_kvm, host_ctxt, 1); + DECLARE_REG(unsigned long, host_shadow_va, host_ctxt, 2); + DECLARE_REG(size_t, shadow_size, host_ctxt, 3); + DECLARE_REG(unsigned long, pgd, host_ctxt, 4); + + cpu_reg(host_ctxt, 1) = __pkvm_init_shadow(host_kvm, host_shadow_va, + shadow_size, pgd); +} + +static void handle___pkvm_teardown_shadow(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(unsigned int, shadow_handle, host_ctxt, 1); + + cpu_reg(host_ctxt, 1) = __pkvm_teardown_shadow(shadow_handle); +} + typedef void (*hcall_t)(struct kvm_cpu_context *); #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x @@ -204,6 +223,8 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__vgic_v3_save_aprs), HANDLE_FUNC(__vgic_v3_restore_aprs), HANDLE_FUNC(__pkvm_vcpu_init_traps), + HANDLE_FUNC(__pkvm_init_shadow), + HANDLE_FUNC(__pkvm_teardown_shadow), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 1262dbae7f06..502bc0d04858 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -141,6 +141,20 @@ int kvm_host_prepare_stage2(void *pgt_pool_base) return 0; } +int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) +{ + vm->pgt.pgd = pgd; + return 0; +} + +void reclaim_guest_pages(struct kvm_shadow_vm *vm) +{ + unsigned long nr_pages; + + nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; + WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(vm->pgt.pgd), nr_pages)); +} + int __pkvm_prot_finalize(void) { struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu; diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 99c8d8b73e70..77aeb787670b 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -7,6 +7,9 @@ #include #include #include +#include +#include +#include #include /* @@ -183,3 +186,398 @@ void __pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu) pvm_init_traps_aa64mmfr0(vcpu); pvm_init_traps_aa64mmfr1(vcpu); } + +/* + * Start the shadow table handle at the offset defined instead of at 0. + * Mainly for sanity checking and debugging. 
+ */ +#define HANDLE_OFFSET 0x1000 + +static unsigned int shadow_handle_to_idx(unsigned int shadow_handle) +{ + return shadow_handle - HANDLE_OFFSET; +} + +static unsigned int idx_to_shadow_handle(unsigned int idx) +{ + return idx + HANDLE_OFFSET; +} + +/* + * Spinlock for protecting the shadow table related state. + * Protects writes to shadow_table, nr_shadow_entries, and next_shadow_alloc, + * as well as reads and writes to last_shadow_vcpu_lookup. + */ +static DEFINE_HYP_SPINLOCK(shadow_lock); + +/* + * The table of shadow entries for protected VMs in hyp. + * Allocated at hyp initialization and setup. + */ +static struct kvm_shadow_vm **shadow_table; + +/* Current number of vms in the shadow table. */ +static unsigned int nr_shadow_entries; + +/* The next entry index to try to allocate from. */ +static unsigned int next_shadow_alloc; + +void hyp_shadow_table_init(void *tbl) +{ + WARN_ON(shadow_table); + shadow_table = tbl; +} + +/* + * Return the shadow vm corresponding to the handle. + */ +static struct kvm_shadow_vm *find_shadow_by_handle(unsigned int shadow_handle) +{ + unsigned int shadow_idx = shadow_handle_to_idx(shadow_handle); + + if (unlikely(shadow_idx >= KVM_MAX_PVMS)) + return NULL; + + return shadow_table[shadow_idx]; +} + +static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, + unsigned int nr_vcpus) +{ + int i; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_vcpu *host_vcpu = shadow_vcpu_states[i].host_vcpu; + hyp_unpin_shared_mem(host_vcpu, host_vcpu + 1); + } +} + +static int set_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, + unsigned int nr_vcpus, + struct kvm_vcpu **vcpu_array, + size_t vcpu_array_size) +{ + int i; + + if (vcpu_array_size < sizeof(*vcpu_array) * nr_vcpus) + return -EINVAL; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_vcpu *host_vcpu = kern_hyp_va(vcpu_array[i]); + + if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1)) { + unpin_host_vcpus(shadow_vcpu_states, i); + return -EBUSY; + } + + shadow_vcpu_states[i].host_vcpu = host_vcpu; + } + + return 0; +} + +static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm, + struct kvm_vcpu **vcpu_array, + unsigned int nr_vcpus) +{ + int i; + + vm->host_kvm = kvm; + vm->kvm.created_vcpus = nr_vcpus; + vm->kvm.arch.vtcr = host_kvm.arch.vtcr; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_shadow_vcpu_state *shadow_vcpu_state = &vm->shadow_vcpu_states[i]; + struct kvm_vcpu *shadow_vcpu = &shadow_vcpu_state->shadow_vcpu; + struct kvm_vcpu *host_vcpu = shadow_vcpu_state->host_vcpu; + + shadow_vcpu_state->shadow_vm = vm; + + shadow_vcpu->kvm = &vm->kvm; + shadow_vcpu->vcpu_id = READ_ONCE(host_vcpu->vcpu_id); + shadow_vcpu->vcpu_idx = i; + + shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu; + } + + return 0; +} + +static bool __exists_shadow(struct kvm *host_kvm) +{ + int i; + unsigned int nr_checked = 0; + + for (i = 0; i < KVM_MAX_PVMS && nr_checked < nr_shadow_entries; i++) { + if (!shadow_table[i]) + continue; + + if (unlikely(shadow_table[i]->host_kvm == host_kvm)) + return true; + + nr_checked++; + } + + return false; +} + +/* + * Allocate a shadow table entry and insert a pointer to the shadow vm. + * + * Return a unique handle to the protected VM on success, + * negative error code on failure. 
+ */ +static unsigned int insert_shadow_table(struct kvm *kvm, + struct kvm_shadow_vm *vm, + size_t shadow_size) +{ + struct kvm_s2_mmu *mmu = &vm->kvm.arch.mmu; + unsigned int shadow_handle; + unsigned int vmid; + + hyp_assert_lock_held(&shadow_lock); + + if (unlikely(nr_shadow_entries >= KVM_MAX_PVMS)) + return -ENOMEM; + + /* + * Initializing protected state might have failed, yet a malicious host + * could trigger this function. Thus, ensure that shadow_table exists. + */ + if (unlikely(!shadow_table)) + return -EINVAL; + + /* Check that a shadow hasn't been created before for this host KVM. */ + if (unlikely(__exists_shadow(kvm))) + return -EEXIST; + + /* Find the next free entry in the shadow table. */ + while (shadow_table[next_shadow_alloc]) + next_shadow_alloc = (next_shadow_alloc + 1) % KVM_MAX_PVMS; + shadow_handle = idx_to_shadow_handle(next_shadow_alloc); + + vm->kvm.arch.pkvm.shadow_handle = shadow_handle; + vm->shadow_area_size = shadow_size; + + /* VMID 0 is reserved for the host */ + vmid = next_shadow_alloc + 1; + if (vmid > 0xff) + return -ENOMEM; + + atomic64_set(&mmu->vmid.id, vmid); + mmu->arch = &vm->kvm.arch; + mmu->pgt = &vm->pgt; + + shadow_table[next_shadow_alloc] = vm; + next_shadow_alloc = (next_shadow_alloc + 1) % KVM_MAX_PVMS; + nr_shadow_entries++; + + return shadow_handle; +} + +/* + * Deallocate and remove the shadow table entry corresponding to the handle. + */ +static void remove_shadow_table(unsigned int shadow_handle) +{ + hyp_assert_lock_held(&shadow_lock); + shadow_table[shadow_handle_to_idx(shadow_handle)] = NULL; + nr_shadow_entries--; +} + +static size_t pkvm_get_shadow_size(unsigned int nr_vcpus) +{ + /* Shadow space for the vm struct and all of its vcpu states. */ + return sizeof(struct kvm_shadow_vm) + + sizeof(struct kvm_shadow_vcpu_state) * nr_vcpus; +} + +/* + * Check whether the size of the area donated by the host is sufficient for + * the shadow structures required for nr_vcpus as well as the shadow vm. + */ +static int check_shadow_size(unsigned int nr_vcpus, size_t shadow_size) +{ + if (nr_vcpus < 1 || nr_vcpus > KVM_MAX_VCPUS) + return -EINVAL; + + /* + * Shadow size is rounded up when allocated and donated by the host, + * so it's likely to be larger than the sum of the struct sizes. + */ + if (shadow_size < pkvm_get_shadow_size(nr_vcpus)) + return -ENOMEM; + + return 0; +} + +static void *map_donated_memory_noclear(unsigned long host_va, size_t size) +{ + void *va = (void *)kern_hyp_va(host_va); + + if (!PAGE_ALIGNED(va) || !PAGE_ALIGNED(size)) + return NULL; + + if (__pkvm_host_donate_hyp(hyp_virt_to_pfn(va), size >> PAGE_SHIFT)) + return NULL; + + return va; +} + +static void *map_donated_memory(unsigned long host_va, size_t size) +{ + void *va = map_donated_memory_noclear(host_va, size); + + if (va) + memset(va, 0, size); + + return va; +} + +static void __unmap_donated_memory(void *va, size_t size) +{ + WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va), size >> PAGE_SHIFT)); +} + +static void unmap_donated_memory(void *va, size_t size) +{ + if (!va) + return; + + memset(va, 0, size); + __unmap_donated_memory(va, size); +} + +static void unmap_donated_memory_noclear(void *va, size_t size) +{ + if (!va) + return; + + __unmap_donated_memory(va, size); +} + +/* + * Initialize the shadow copy of the protected VM state using the memory + * donated by the host. + * + * Unmaps the donated memory from the host at stage 2. + * + * kvm: A pointer to the host's struct kvm (host va). 
+ * shadow_hva: The host va of the area being donated for the shadow state. + * Must be page aligned. + * shadow_size: The size of the area being donated for the shadow state. + * Must be a multiple of the page size. + * pgd_hva: The host va of the area being donated for the stage-2 PGD for + * the VM. Must be page aligned. Its size is implied by the VM's + * VTCR. + * Note: An array of pointers to the host's vCPUs (host VA) is passed via + * the pgd, so as not to depend on how the vCPUs are laid out in struct kvm. + * + * Return a unique handle to the protected VM on success, + * negative error code on failure. + */ +int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, + size_t shadow_size, unsigned long pgd_hva) +{ + struct kvm_shadow_vm *vm = NULL; + unsigned int nr_vcpus; + size_t pgd_size = 0; + void *pgd = NULL; + int ret; + + kvm = kern_hyp_va(kvm); + ret = hyp_pin_shared_mem(kvm, kvm + 1); + if (ret) + return ret; + + nr_vcpus = READ_ONCE(kvm->created_vcpus); + ret = check_shadow_size(nr_vcpus, shadow_size); + if (ret) + goto err_unpin_kvm; + + ret = -ENOMEM; + + vm = map_donated_memory(shadow_hva, shadow_size); + if (!vm) + goto err_remove_mappings; + + pgd_size = kvm_pgtable_stage2_pgd_size(host_kvm.arch.vtcr); + pgd = map_donated_memory_noclear(pgd_hva, pgd_size); + if (!pgd) + goto err_remove_mappings; + + ret = set_host_vcpus(vm->shadow_vcpu_states, nr_vcpus, pgd, pgd_size); + if (ret) + goto err_remove_mappings; + + ret = init_shadow_structs(kvm, vm, pgd, nr_vcpus); + if (ret < 0) + goto err_unpin_host_vcpus; + + /* Add the entry to the shadow table. */ + hyp_spin_lock(&shadow_lock); + ret = insert_shadow_table(kvm, vm, shadow_size); + if (ret < 0) + goto err_unlock_unpin_host_vcpus; + + ret = kvm_guest_prepare_stage2(vm, pgd); + if (ret) + goto err_remove_shadow_table; + hyp_spin_unlock(&shadow_lock); + + return vm->kvm.arch.pkvm.shadow_handle; + +err_remove_shadow_table: + remove_shadow_table(vm->kvm.arch.pkvm.shadow_handle); +err_unlock_unpin_host_vcpus: + hyp_spin_unlock(&shadow_lock); +err_unpin_host_vcpus: + unpin_host_vcpus(vm->shadow_vcpu_states, nr_vcpus); +err_remove_mappings: + unmap_donated_memory(vm, shadow_size); + unmap_donated_memory_noclear(pgd, pgd_size); +err_unpin_kvm: + hyp_unpin_shared_mem(kvm, kvm + 1); + return ret; +} + +int __pkvm_teardown_shadow(unsigned int shadow_handle) +{ + struct kvm_shadow_vm *vm; + size_t shadow_size; + int err; + + /* Lookup then remove entry from the shadow table.
*/ + hyp_spin_lock(&shadow_lock); + vm = find_shadow_by_handle(shadow_handle); + if (!vm) { + err = -ENOENT; + goto err_unlock; + } + + if (WARN_ON(hyp_page_count(vm))) { + err = -EBUSY; + goto err_unlock; + } + + /* Ensure the VMID is clean before it can be reallocated */ + __kvm_tlb_flush_vmid(&vm->kvm.arch.mmu); + remove_shadow_table(shadow_handle); + hyp_spin_unlock(&shadow_lock); + + /* Reclaim guest pages (including page-table pages) */ + reclaim_guest_pages(vm); + unpin_host_vcpus(vm->shadow_vcpu_states, vm->kvm.created_vcpus); + + /* Push the metadata pages to the teardown memcache */ + shadow_size = vm->shadow_area_size; + hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1); + + memset(vm, 0, shadow_size); + unmap_donated_memory_noclear(vm, shadow_size); + return 0; + +err_unlock: + hyp_spin_unlock(&shadow_lock); + return err; +} diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index 311197a223e6..e9e146e50254 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -16,6 +16,7 @@ #include #include #include +#include #include unsigned long hyp_nr_cpus; @@ -24,6 +25,7 @@ unsigned long hyp_nr_cpus; (unsigned long)__per_cpu_start) static void *vmemmap_base; +static void *shadow_table_base; static void *hyp_pgt_base; static void *host_s2_pgt_base; static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops; @@ -40,6 +42,11 @@ static int divide_memory_pool(void *virt, unsigned long size) if (!vmemmap_base) return -ENOMEM; + nr_pages = hyp_shadow_table_pages(); + shadow_table_base = hyp_early_alloc_contig(nr_pages); + if (!shadow_table_base) + return -ENOMEM; + nr_pages = hyp_s1_pgtable_pages(); hyp_pgt_base = hyp_early_alloc_contig(nr_pages); if (!hyp_pgt_base) @@ -265,6 +272,7 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; + hyp_shadow_table_init(shadow_table_base); out: /* * We tail-called to here from handle___pkvm_init() and will not return, diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 2cb3867eb7c2..1d300313009d 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1200,6 +1200,15 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, return 0; } +size_t kvm_pgtable_stage2_pgd_size(u64 vtcr) +{ + u32 ia_bits = VTCR_EL2_IPA(vtcr); + u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr); + u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0; + + return kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE; +} + static int stage2_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, enum kvm_pgtable_walk_flags flag, void * const arg) diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 34229425b25d..3947063cc3a1 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -71,6 +71,7 @@ void __init kvm_hyp_reserve(void) hyp_mem_pages += hyp_s1_pgtable_pages(); hyp_mem_pages += host_s2_pgtable_pages(); + hyp_mem_pages += hyp_shadow_table_pages(); hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE); /* From patchwork Thu May 19 13:40:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12855094 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) 
From patchwork Thu May 19 13:40:51 2022
From: Will Deacon
Subject: [PATCH 16/89] KVM: arm64: Instantiate VM shadow data from EL1
Date: Thu, 19 May 2022 14:40:51 +0100
Message-Id: <20220519134204.5379-17-will@kernel.org>

From: Fuad Tabba

Now that EL2 provides calls to create and destroy shadow VM structures,
plumb these into the KVM code at EL1 so that a shadow VM is
created on first vCPU run and destroyed later along with the
'struct kvm' at teardown time.

Signed-off-by: Fuad Tabba

---
 arch/arm64/include/asm/kvm_host.h  |   6 ++
 arch/arm64/include/asm/kvm_pkvm.h  |   4 ++
 arch/arm64/kvm/arm.c               |  14 ++++
 arch/arm64/kvm/hyp/hyp-constants.c |   3 +
 arch/arm64/kvm/pkvm.c              | 112 +++++++++++++++++++++++++++++
 5 files changed, 139 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9ba721fc1600..13967fc9731a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -103,6 +103,12 @@ struct kvm_arch_memory_slot {

 struct kvm_protected_vm {
	unsigned int shadow_handle;
+	struct mutex shadow_lock;
+
+	struct {
+		void *pgd;
+		void *shadow;
+	} hyp_donations;
 };

 struct kvm_arch {
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 11526e89fe5c..1dc7372950b1 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -14,6 +14,10 @@

 #define HYP_MEMBLOCK_REGIONS 128

+int kvm_init_pvm(struct kvm *kvm);
+int kvm_shadow_create(struct kvm *kvm);
+void kvm_shadow_destroy(struct kvm *kvm);
+
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7f8731306c2a..14adfd09e882 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -146,6 +147,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
	if (ret)
		goto out_free_stage2_pgd;

+	ret = kvm_init_pvm(kvm);
+	if (ret)
+		goto out_free_stage2_pgd;
+
	if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) {
		ret = -ENOMEM;
		goto out_free_stage2_pgd;
@@ -182,6 +187,9 @@ void kvm_arch_destroy_vm(struct kvm *kvm)

	kvm_vgic_destroy(kvm);

+	if (is_protected_kvm_enabled())
+		kvm_shadow_destroy(kvm);
+
	kvm_destroy_vcpus(kvm);

	kvm_unshare_hyp(kvm, kvm + 1);
@@ -545,6 +553,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
	if (ret)
		return ret;

+	if (is_protected_kvm_enabled()) {
+		ret = kvm_shadow_create(kvm);
+		if (ret)
+			return ret;
+	}
+
	if (!irqchip_in_kernel(kvm)) {
		/*
		 * Tell the rest of the code that there are userspace irqchip
diff --git a/arch/arm64/kvm/hyp/hyp-constants.c b/arch/arm64/kvm/hyp/hyp-constants.c
index b3742a6691e8..eee79527f901 100644
--- a/arch/arm64/kvm/hyp/hyp-constants.c
+++ b/arch/arm64/kvm/hyp/hyp-constants.c
@@ -2,9 +2,12 @@

 #include
 #include
+#include

 int main(void)
 {
	DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page));
+	DEFINE(KVM_SHADOW_VM_SIZE, sizeof(struct kvm_shadow_vm));
+	DEFINE(KVM_SHADOW_VCPU_STATE_SIZE, sizeof(struct kvm_shadow_vcpu_state));
	return 0;
 }
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 3947063cc3a1..b4466b31d7c8 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -6,6 +6,7 @@

 #include
 #include
+#include
 #include
 #include
@@ -94,3 +95,114 @@ void __init kvm_hyp_reserve(void)
	kvm_info("Reserved %lld MiB at 0x%llx\n", hyp_mem_size >> 20,
		 hyp_mem_base);
 }
+
+/*
+ * Allocates and donates memory for EL2 shadow structs.
+ *
+ * Allocates space for the shadow state, which includes the shadow VM as
+ * well as the shadow vCPU states.
+ *
+ * Stores an opaque handle in the kvm struct for future reference.
+ *
+ * Return 0 on success, negative error code on failure.
+ */
+static int __kvm_shadow_create(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu, **vcpu_array;
+	unsigned int shadow_handle;
+	size_t pgd_sz, shadow_sz;
+	void *pgd, *shadow_addr;
+	unsigned long idx;
+	int ret;
+
+	if (kvm->created_vcpus < 1)
+		return -EINVAL;
+
+	pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr);
+
+	/*
+	 * The PGD pages will be reclaimed using a hyp_memcache which implies
+	 * page granularity. So, use alloc_pages_exact() to get individual
+	 * refcounts.
+	 */
+	pgd = alloc_pages_exact(pgd_sz, GFP_KERNEL_ACCOUNT);
+	if (!pgd)
+		return -ENOMEM;
+
+	/* Allocate memory to donate to hyp for the kvm and vcpu state. */
+	shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE +
+			       KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus);
+	shadow_addr = alloc_pages_exact(shadow_sz, GFP_KERNEL_ACCOUNT);
+	if (!shadow_addr) {
+		ret = -ENOMEM;
+		goto free_pgd;
+	}
+
+	/* Stash the vcpu pointers into the PGD */
+	BUILD_BUG_ON(KVM_MAX_VCPUS > (PAGE_SIZE / sizeof(u64)));
+	vcpu_array = pgd;
+	kvm_for_each_vcpu(idx, vcpu, kvm) {
+		/* Indexing of the vcpus must be sequential, starting at 0. */
+		if (WARN_ON(vcpu->vcpu_idx != idx)) {
+			ret = -EINVAL;
+			goto free_shadow;
+		}
+
+		vcpu_array[idx] = vcpu;
+	}
+
+	/* Donate the shadow memory to hyp and let hyp initialize it. */
+	ret = kvm_call_hyp_nvhe(__pkvm_init_shadow, kvm, shadow_addr, shadow_sz,
+				pgd);
+	if (ret < 0)
+		goto free_shadow;
+
+	shadow_handle = ret;
+
+	/* Store the shadow handle given by hyp for future call reference. */
+	kvm->arch.pkvm.shadow_handle = shadow_handle;
+	kvm->arch.pkvm.hyp_donations.pgd = pgd;
+	kvm->arch.pkvm.hyp_donations.shadow = shadow_addr;
+	return 0;
+
+free_shadow:
+	free_pages_exact(shadow_addr, shadow_sz);
+free_pgd:
+	free_pages_exact(pgd, pgd_sz);
+	return ret;
+}
+
+int kvm_shadow_create(struct kvm *kvm)
+{
+	int ret = 0;
+
+	mutex_lock(&kvm->arch.pkvm.shadow_lock);
+	if (!kvm->arch.pkvm.shadow_handle)
+		ret = __kvm_shadow_create(kvm);
+	mutex_unlock(&kvm->arch.pkvm.shadow_lock);
+
+	return ret;
+}
+
+void kvm_shadow_destroy(struct kvm *kvm)
+{
+	size_t pgd_sz, shadow_sz;
+
+	if (kvm->arch.pkvm.shadow_handle)
+		WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow,
+					  kvm->arch.pkvm.shadow_handle));
+
+	kvm->arch.pkvm.shadow_handle = 0;
+
+	shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE +
+			       KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus);
+	pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr);
+
+	free_pages_exact(kvm->arch.pkvm.hyp_donations.shadow, shadow_sz);
+	free_pages_exact(kvm->arch.pkvm.hyp_donations.pgd, pgd_sz);
+}
+
+int kvm_init_pvm(struct kvm *kvm)
+{
+	mutex_init(&kvm->arch.pkvm.shadow_lock);
+	return 0;
+}
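For illustration, the donation size that __kvm_shadow_create() and kvm_shadow_destroy() above must compute identically boils down to the following arithmetic. This is a standalone sketch with made-up structure sizes; the real values are emitted as constants by hyp-constants.c:

    #include <stdio.h>

    #define PAGE_SIZE	4096UL
    #define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    /* Placeholder sizes, NOT the real KVM_SHADOW_* constants. */
    #define KVM_SHADOW_VM_SIZE		3200UL
    #define KVM_SHADOW_VCPU_STATE_SIZE	8448UL

    int main(void)
    {
    	unsigned long nr_vcpus = 4;
    	/* Same formula at both create and destroy time. */
    	unsigned long shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE +
    					     KVM_SHADOW_VCPU_STATE_SIZE * nr_vcpus);

    	printf("donating %lu bytes (%lu pages)\n",
    	       shadow_sz, shadow_sz / PAGE_SIZE);
    	return 0;
    }

Because the size is derived rather than stored, both sides of the lifetime must agree on the formula; that is also why the constants are exported via hyp-constants.c rather than duplicated by hand.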
From patchwork Thu May 19 13:40:52 2022
From: Will Deacon
Subject: [PATCH 17/89] KVM: arm64: Make hyp stage-1 refcnt correct on the whole range
Date: Thu, 19 May 2022 14:40:52 +0100
Message-Id: <20220519134204.5379-18-will@kernel.org>

From: Quentin Perret

We currently fix up the hypervisor stage-1 refcount only for specific
portions of the hyp stage-1 VA space. In order to allow unmapping pages
outside of these ranges, let's fix up the refcount for the entire hyp
VA space.
Signed-off-by: Quentin Perret

---
 arch/arm64/kvm/hyp/nvhe/setup.c | 62 +++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e9e146e50254..b306da2b5dae 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -167,12 +167,11 @@ static void hpool_put_page(void *addr)
	hyp_put_page(&hpool, addr);
 }

-static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
-					 kvm_pte_t *ptep,
-					 enum kvm_pgtable_walk_flags flag,
-					 void * const arg)
+static int fix_host_ownership_walker(u64 addr, u64 end, u32 level,
+				     kvm_pte_t *ptep,
+				     enum kvm_pgtable_walk_flags flag,
+				     void * const arg)
 {
-	struct kvm_pgtable_mm_ops *mm_ops = arg;
	enum kvm_pgtable_prot prot;
	enum pkvm_page_state state;
	kvm_pte_t pte = *ptep;
@@ -181,15 +180,6 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
	if (!kvm_pte_valid(pte))
		return 0;

-	/*
-	 * Fix-up the refcount for the page-table pages as the early allocator
-	 * was unable to access the hyp_vmemmap and so the buddy allocator has
-	 * initialised the refcount to '1'.
-	 */
-	mm_ops->get_page(ptep);
-	if (flag != KVM_PGTABLE_WALK_LEAF)
-		return 0;
-
	if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
		return -EINVAL;

@@ -218,12 +208,30 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
 }

-static int finalize_host_mappings(void)
+static int fix_hyp_pgtable_refcnt_walker(u64 addr, u64 end, u32 level,
+					 kvm_pte_t *ptep,
+					 enum kvm_pgtable_walk_flags flag,
+					 void * const arg)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
+	kvm_pte_t pte = *ptep;
+
+	/*
+	 * Fix-up the refcount for the page-table pages as the early allocator
+	 * was unable to access the hyp_vmemmap and so the buddy allocator has
+	 * initialised the refcount to '1'.
+	 */
+	if (kvm_pte_valid(pte))
+		mm_ops->get_page(ptep);
+
+	return 0;
+}
+
+static int fix_host_ownership(void)
 {
	struct kvm_pgtable_walker walker = {
-		.cb	= finalize_host_mappings_walker,
-		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
-		.arg	= pkvm_pgtable.mm_ops,
+		.cb	= fix_host_ownership_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
	};
	int i, ret;

@@ -239,6 +247,18 @@ static int finalize_host_mappings(void)
	return 0;
 }

+static int fix_hyp_pgtable_refcnt(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fix_hyp_pgtable_refcnt_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pkvm_pgtable.mm_ops,
+	};
+
+	return kvm_pgtable_walk(&pkvm_pgtable, 0, BIT(pkvm_pgtable.ia_bits),
+				&walker);
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -268,7 +288,11 @@ void __noreturn __pkvm_init_finalise(void)
	};
	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;

-	ret = finalize_host_mappings();
+	ret = fix_host_ownership();
+	if (ret)
+		goto out;
+
+	ret = fix_hyp_pgtable_refcnt();
	if (ret)
		goto out;
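As a rough analogy for what the refcount fixup walk achieves, the sketch below flattens a page-table walk into an array scan and takes a reference for every valid entry it visits; the toy types and the 8-entries-per-page layout are invented for the example and bear no relation to the real arm64 descriptor format:

    #include <stdio.h>

    /* Toy stand-ins; real code uses kvm_pte_t and struct kvm_pgtable_mm_ops. */
    struct toy_page { int refcount; };

    static int pte_valid(unsigned long pte)
    {
    	return pte & 1;	/* pretend bit 0 is the valid bit */
    }

    /*
    * Visit every entry (leaf or table) and take a reference on the page
    * holding it, which is what a LEAF | TABLE_POST walk accomplishes.
    */
    static void fixup_refcounts(unsigned long *ptes, struct toy_page *pages,
    			    unsigned int nr)
    {
    	for (unsigned int i = 0; i < nr; i++) {
    		if (pte_valid(ptes[i]))
    			pages[i / 8].refcount++;	/* 8 entries per toy page */
    	}
    }

    int main(void)
    {
    	unsigned long ptes[16] = { [0] = 1, [3] = 1, [9] = 1 };
    	struct toy_page pages[2] = { { 0 }, { 0 } };

    	fixup_refcounts(ptes, pages, 16);
    	printf("page0=%d page1=%d\n", pages[0].refcount, pages[1].refcount);
    	return 0;
    }

Splitting the ownership fixup (leaf-only) from the refcount fixup (every table level) lets each walk visit exactly the entries it cares about, which is the point of this refactoring.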
From patchwork Thu May 19 13:40:53 2022
From: Will Deacon
Subject: [PATCH 18/89] KVM: arm64: Factor out private range VA allocation
Date: Thu, 19 May 2022 14:40:53 +0100
Message-Id: <20220519134204.5379-19-will@kernel.org>

From: Quentin Perret

__pkvm_create_private_mapping() is currently responsible for allocating
VA space in the hypervisor's "private" range and creating stage-1
mappings. In order to allow reusing the VA space allocation logic from
other places, let's factor it out into a standalone function.

Signed-off-by: Quentin Perret

---
 arch/arm64/kvm/hyp/nvhe/mm.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 168e7fbe9a3c..4377b067dc0e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -37,6 +37,22 @@ static int __pkvm_create_mappings(unsigned long start, unsigned long size,
	return err;
 }

+static unsigned long hyp_alloc_private_va_range(size_t size)
+{
+	unsigned long addr = __io_map_base;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+	__io_map_base += PAGE_ALIGN(size);
+
+	/* Are we overflowing on the vmemmap ? */
+	if (__io_map_base > __hyp_vmemmap) {
+		__io_map_base = addr;
+		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	}
+
+	return addr;
+}
+
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
					    enum kvm_pgtable_prot prot)
 {
@@ -45,16 +61,10 @@ unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,

	hyp_spin_lock(&pkvm_pgd_lock);

-	size = PAGE_ALIGN(size + offset_in_page(phys));
-	addr = __io_map_base;
-	__io_map_base += size;
-
-	/* Are we overflowing on the vmemmap ? */
-	if (__io_map_base > __hyp_vmemmap) {
-		__io_map_base -= size;
-		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	size = size + offset_in_page(phys);
+	addr = hyp_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
		goto out;
-	}

	err = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
	if (err) {
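The allocation pattern factored out here is a simple bump allocator with rollback on overflow. A hypothetical userspace analog, with made-up bounds standing in for __io_map_base and __hyp_vmemmap, might look like this:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE	4096UL
    #define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    /* Toy bounds standing in for __io_map_base and __hyp_vmemmap. */
    static uintptr_t io_map_base = 0x100000;
    static const uintptr_t vmemmap_start = 0x110000;

    /* Bump-allocate VA space; roll back and fail if we would hit the vmemmap. */
    static uintptr_t alloc_private_va_range(size_t size)
    {
    	uintptr_t addr = io_map_base;

    	io_map_base += PAGE_ALIGN(size);
    	if (io_map_base > vmemmap_start) {
    		io_map_base = addr;	/* undo the bump */
    		return 0;		/* stands in for ERR_PTR(-ENOMEM) */
    	}
    	return addr;
    }

    int main(void)
    {
    	printf("first:  %#lx\n", (unsigned long)alloc_private_va_range(8192));
    	printf("second: %#lx\n", (unsigned long)alloc_private_va_range(1 << 20));
    	return 0;
    }

Note that the caller is expected to hold the PGD lock in the real code (hyp_assert_lock_held() above); the bump-and-rollback is only safe because allocation and mapping happen under the same lock.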
From patchwork Thu May 19 13:40:54 2022
From: Will Deacon
Subject: [PATCH 19/89] KVM: arm64: Add pcpu fixmap infrastructure at EL2
Date: Thu, 19 May 2022 14:40:54 +0100
Message-Id: <20220519134204.5379-20-will@kernel.org>

From: Quentin Perret

We will soon need to temporarily map pages into the hypervisor stage-1
in nVHE protected mode. To do this efficiently, let's introduce a
per-cpu fixmap that allows mapping a single page without needing to
take any lock or to allocate memory.

Signed-off-by: Quentin Perret

---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/include/nvhe/mm.h          |  4 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  1 -
 arch/arm64/kvm/hyp/nvhe/mm.c                  | 85 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c               |  4 +
 5 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 3a0817b5c739..d11d9d68a680 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -59,6 +59,8 @@ enum pkvm_component_id {
	PKVM_ID_HYP,
 };

+extern unsigned long hyp_nr_cpus;
+
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
 int __pkvm_host_unshare_hyp(u64 pfn);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 73309ccc192e..45b04bfee171 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,6 +13,10 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;

+int hyp_create_pcpu_fixmap(void);
+void *hyp_fixmap_map(phys_addr_t phys);
+int hyp_fixmap_unmap(void);
+
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
 int hyp_back_vmemmap(phys_addr_t back);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 502bc0d04858..707bd832145f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -21,7 +21,6 @@

 #define KVM_HOST_S2_FLAGS (KVM_PGTABLE_S2_NOFWB | KVM_PGTABLE_S2_IDMAP)

-extern unsigned long hyp_nr_cpus;
 struct host_kvm host_kvm;

 static struct hyp_pool host_s2_pool;
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 4377b067dc0e..bdb39897343f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -24,6 +25,7 @@ struct memblock_region hyp_memory[HYP_MEMBLOCK_REGIONS];
 unsigned int hyp_memblock_nr;

 static u64 __io_map_base;
+static DEFINE_PER_CPU(void *, hyp_fixmap_base);

 static int __pkvm_create_mappings(unsigned long start, unsigned long size,
				  unsigned long phys, enum kvm_pgtable_prot prot)
@@ -198,6 +200,89 @@ int hyp_map_vectors(void)
	return 0;
 }

+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	void *addr = *this_cpu_ptr(&hyp_fixmap_base);
+	int ret = kvm_pgtable_hyp_map(&pkvm_pgtable, (u64)addr, PAGE_SIZE,
+				      phys, PAGE_HYP);
+
+	return ret ? NULL : addr;
+}
+
+int hyp_fixmap_unmap(void)
+{
+	void *addr = *this_cpu_ptr(&hyp_fixmap_base);
+	int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)addr, PAGE_SIZE);
+
+	return (ret != PAGE_SIZE) ? -EINVAL : 0;
+}
+
+static int __pin_pgtable_cb(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	if (!kvm_pte_valid(*ptep) || level != KVM_PGTABLE_MAX_LEVELS - 1)
+		return -EINVAL;
+	hyp_page_ref_inc(hyp_virt_to_page(ptep));
+
+	return 0;
+}
+
+static int hyp_pin_pgtable_pages(u64 addr)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __pin_pgtable_cb,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+	};
+
+	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
+}
+
+int hyp_create_pcpu_fixmap(void)
+{
+	unsigned long i;
+	int ret = 0;
+	u64 addr;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+
+	for (i = 0; i < hyp_nr_cpus; i++) {
+		addr = hyp_alloc_private_va_range(PAGE_SIZE);
+		if (IS_ERR((void *)addr)) {
+			ret = -ENOMEM;
+			goto unlock;
+		}
+
+		/*
+		 * Create a dummy mapping, to get the intermediate page-table
+		 * pages allocated, then take a reference on the last level
+		 * page to keep it around at all times.
+		 */
+		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PAGE_SIZE,
+					  __hyp_pa(__hyp_bss_start), PAGE_HYP);
+		if (ret) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+
+		ret = hyp_pin_pgtable_pages(addr);
+		if (ret)
+			goto unlock;
+
+		ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, addr, PAGE_SIZE);
+		if (ret != PAGE_SIZE) {
+			ret = -EINVAL;
+			goto unlock;
+		} else {
+			ret = 0;
+		}
+
+		*per_cpu_ptr(&hyp_fixmap_base, i) = (void *)addr;
+	}
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+}
+
 int hyp_create_idmap(u32 hyp_va_bits)
 {
	unsigned long start, end;
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index b306da2b5dae..59a478dde533 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -296,6 +296,10 @@ void __noreturn __pkvm_init_finalise(void)
	if (ret)
		goto out;

+	ret = hyp_create_pcpu_fixmap();
+	if (ret)
+		goto out;
+
	hyp_shadow_table_init(shadow_table_base);
 out:
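The fixmap gives each CPU one pre-plumbed VA slot, so a map is just a last-level PTE install and an unmap is a PTE clear. A toy model of that contract (no real page tables; the slot array merely stands in for the per-cpu reserved VA) might look like this:

    #include <stdio.h>

    #define NR_CPUS	4

    /* One reserved slot per CPU; a toy stand-in for hyp_fixmap_base. */
    static unsigned long fixmap_slot[NR_CPUS];	/* currently mapped phys, 0 = empty */

    /* Map one page into this CPU's private slot: no lock, no allocation. */
    static void *fixmap_map(int cpu, unsigned long phys)
    {
    	if (fixmap_slot[cpu])
    		return NULL;		/* slot busy: map/unmap must be paired */
    	fixmap_slot[cpu] = phys;
    	return &fixmap_slot[cpu];	/* pretend this is the mapped VA */
    }

    static void fixmap_unmap(int cpu)
    {
    	fixmap_slot[cpu] = 0;
    }

    int main(void)
    {
    	void *va = fixmap_map(0, 0x40000000UL);

    	printf("cpu0 mapped phys 0x40000000 at %p\n", va);
    	fixmap_unmap(0);
    	return 0;
    }

The dummy map/pin/unmap dance in hyp_create_pcpu_fixmap() exists precisely so that the intermediate table pages can never be freed later, which is what makes the lock-free, allocation-free fast path safe.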
From patchwork Thu May 19 13:40:55 2022
From: Will Deacon
Subject: [PATCH 20/89] KVM: arm64: Provide I-cache invalidation by VA at EL2
Date: Thu, 19 May 2022 14:40:55 +0100
Message-Id: <20220519134204.5379-21-will@kernel.org>

In preparation for handling cache maintenance of guest pages at EL2,
introduce an EL2 copy of icache_inval_pou() which will later be plumbed
into the stage-2 page-table cache maintenance callbacks.
Signed-off-by: Will Deacon

---
 arch/arm64/include/asm/kvm_hyp.h |  1 +
 arch/arm64/kernel/image-vars.h   |  3 ---
 arch/arm64/kvm/arm.c             |  1 +
 arch/arm64/kvm/hyp/nvhe/cache.S  | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c   |  3 +++
 5 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index aa7fa2a08f06..fd99cf09972d 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -123,4 +123,5 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);

+extern unsigned long kvm_nvhe_sym(__icache_flags);
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 241c86b67d01..4e3b6d618ac1 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -80,9 +80,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
 /* Vectors installed by hyp-init on reset HVC. */
 KVM_NVHE_ALIAS(__hyp_stub_vectors);

-/* Kernel symbol used by icache_is_vpipt(). */
-KVM_NVHE_ALIAS(__icache_flags);
-
 /* VMID bits set by the KVM VMID allocator */
 KVM_NVHE_ALIAS(kvm_arm_vmid_bits);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14adfd09e882..6a32eaf768e5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1832,6 +1832,7 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
+	kvm_nvhe_sym(__icache_flags) = __icache_flags;

	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
	if (ret)
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 0c367eb5f4e2..85936c17ae40 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -12,3 +12,14 @@ SYM_FUNC_START(__pi_dcache_clean_inval_poc)
	ret
 SYM_FUNC_END(__pi_dcache_clean_inval_poc)
 SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
+
+SYM_FUNC_START(__pi_icache_inval_pou)
+alternative_if ARM64_HAS_CACHE_DIC
+	isb
+	ret
+alternative_else_nop_endif
+
+	invalidate_icache_by_line x0, x1, x2, x3
+	ret
+SYM_FUNC_END(__pi_icache_inval_pou)
+SYM_FUNC_ALIAS(icache_inval_pou, __pi_icache_inval_pou)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 77aeb787670b..114c5565de7d 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -12,6 +12,9 @@
 #include
 #include

+/* Used by icache_is_vpipt(). */
+unsigned long __icache_flags;
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
 */
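The `alternative_if ARM64_HAS_CACHE_DIC` fast path in the cache.S hunk above exists because CTR_EL0.DIC advertises that instruction fetch is coherent with data writes to the Point of Unification, so no by-line invalidation is needed. A toy C model of that decision (printing instead of issuing real maintenance instructions; the 64-byte line size is an assumption for the example) could look like:

    #include <stdio.h>
    #include <stdint.h>

    #define CTR_DIC	(1UL << 29)	/* CTR_EL0.DIC bit */

    /* Toy model of the decision icache_inval_pou() makes. */
    static void icache_inval_pou(uint64_t ctr_el0, uintptr_t start, uintptr_t end)
    {
    	if (ctr_el0 & CTR_DIC) {
    		/* I-side is coherent at the PoU: a sync barrier suffices. */
    		printf("DIC set: isb only\n");
    		return;
    	}

    	/* Otherwise, invalidate line by line (assume 64-byte lines). */
    	for (uintptr_t va = start; va < end; va += 64)
    		printf("ic ivau, %#lx\n", (unsigned long)va);
    }

    int main(void)
    {
    	icache_inval_pou(0, 0x1000, 0x1100);		/* needs by-line invalidation */
    	icache_inval_pou(CTR_DIC, 0x1000, 0x1100);	/* can skip it */
    	return 0;
    }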
From patchwork Thu May 19 13:40:56 2022
From: Will Deacon
Subject: [PATCH 21/89] KVM: arm64: Allow non-coalescable pages in a hyp_pool
Date: Thu, 19 May 2022 14:40:56 +0100
Message-Id: <20220519134204.5379-22-will@kernel.org>

From: Quentin Perret

All the contiguous pages used to initialize a hyp_pool are considered
coalescable, which means that the hyp page allocator will actively try
to merge them with their buddies on the hyp_put_page() path. However,
using hyp_put_page() on a page that is not part of the initial memory
range given to a hyp_pool is currently unsupported.

In order to allow dynamically extending hyp pools at run-time, add a
check to __hyp_attach_page() to allow inserting 'external' pages into
the free-list of order 0. This will be necessary to allow lazy donation
of pages from the host to the hypervisor when allocating guest stage-2
page-table pages at EL2.

Signed-off-by: Quentin Perret

---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index dc87589440b8..7804da89e55d 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,11 +93,15 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
			      struct hyp_page *p)
 {
+	phys_addr_t phys = hyp_page_to_phys(p);
	unsigned short order = p->order;
	struct hyp_page *buddy;

	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);

+	if (phys < pool->range_start || phys >= pool->range_end)
+		goto insert;
+
	/*
	 * Only the first struct hyp_page of a high-order page (otherwise known
	 * as the 'head') should have p->order set.
	 * The non-head pages should
@@ -116,6 +120,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
		p = min(p, buddy);
	}

+insert:
	/* Mark the new head, and insert it */
	p->order = order;
	page_add_to_list(p, &pool->free_area[order]);
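The new check is a pure range test: only pages inside the pool's original memory range may take part in buddy merging, while 'external' (donated) pages drop straight onto the order-0 free list. A minimal sketch of just that insertion decision, with invented types and addresses:

    #include <stdio.h>

    struct toy_pool {
    	unsigned long range_start, range_end;	/* PA range the pool was built from */
    };

    /*
    * Decide how a freed page joins the pool: in-range pages are eligible
    * for buddy merging at their order; external pages always go to order 0
    * so the allocator never tries to find a buddy for them.
    */
    static int attach_order(struct toy_pool *pool, unsigned long phys, int order)
    {
    	if (phys < pool->range_start || phys >= pool->range_end)
    		return 0;	/* never coalesce donated/external pages */

    	return order;
    }

    int main(void)
    {
    	struct toy_pool pool = {
    		.range_start = 0x80000000UL,
    		.range_end   = 0x80400000UL,
    	};

    	printf("in-range page: order %d\n", attach_order(&pool, 0x80001000UL, 2));
    	printf("external page: order %d\n", attach_order(&pool, 0x90000000UL, 2));
    	return 0;
    }

Coalescing an external page would be unsafe because its buddy might not belong to the pool at all, which is why such pages are pinned to order 0.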
From patchwork Thu May 19 13:40:57 2022
From: Will Deacon
Subject: [PATCH 22/89] KVM: arm64: Add generic hyp_memcache helpers
Date: Thu, 19 May 2022 14:40:57 +0100
Message-Id: <20220519134204.5379-23-will@kernel.org>

From: Quentin Perret

The host and hypervisor will need to dynamically exchange memory pages
soon. Indeed, the hypervisor will rely on the host to donate memory
pages it can use to create guest stage-2 page-tables and to store
metadata. In order to ease this process, introduce a struct
hyp_memcache which is essentially a linked list of available pages,
indexed by physical addresses.

Signed-off-by: Quentin Perret

---
 arch/arm64/include/asm/kvm_host.h             | 57 +++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/mm.c                  | 33 +++++++++++
 arch/arm64/kvm/mmu.c                          | 26 +++++++++
 4 files changed, 118 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 13967fc9731a..f4272ce76084 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -72,6 +72,63 @@ u32 __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);

+struct kvm_hyp_memcache {
+	phys_addr_t head;
+	unsigned long nr_pages;
+};
+
+static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
+				     phys_addr_t *p,
+				     phys_addr_t (*to_pa)(void *virt))
+{
+	*p = mc->head;
+	mc->head = to_pa(p);
+	mc->nr_pages++;
+}
+
+static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc,
+				     void *(*to_va)(phys_addr_t phys))
+{
+	phys_addr_t *p = to_va(mc->head);
+
+	if (!mc->nr_pages)
+		return NULL;
+
+	mc->head = *p;
+	mc->nr_pages--;
+
+	return p;
+}
+
+static inline int __topup_hyp_memcache(struct kvm_hyp_memcache *mc,
+				       unsigned long min_pages,
+				       void *(*alloc_fn)(void *arg),
+				       phys_addr_t (*to_pa)(void *virt),
+				       void *arg)
+{
+	while (mc->nr_pages < min_pages) {
+		phys_addr_t *p = alloc_fn(arg);
+
+		if (!p)
+			return -ENOMEM;
+		push_hyp_memcache(mc, p, to_pa);
+	}
+
+	return 0;
+}
+
+static inline void __free_hyp_memcache(struct kvm_hyp_memcache *mc,
+				       void (*free_fn)(void *virt, void *arg),
+				       void *(*to_va)(phys_addr_t phys),
+				       void *arg)
+{
+	while (mc->nr_pages)
+		free_fn(pop_hyp_memcache(mc, to_va), arg);
+}
+
+void free_hyp_memcache(struct kvm_hyp_memcache *mc);
+int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
+
 struct kvm_vmid {
	atomic64_t id;
 };
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d11d9d68a680..36eea31a1c5f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -77,6 +77,8 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 int hyp_pin_shared_mem(void *from, void *to);
 void hyp_unpin_shared_mem(void *from, void *to);
 void reclaim_guest_pages(struct kvm_shadow_vm *vm);
+int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
+		    struct kvm_hyp_memcache *host_mc);

 static __always_inline void __load_host_stage2(void)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index bdb39897343f..4e86a2123c05 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -307,3 +307,36 @@ int hyp_create_idmap(u32 hyp_va_bits)

	return __pkvm_create_mappings(start, end - start, start, PAGE_HYP_EXEC);
 }
+
+static void *admit_host_page(void *arg)
+{
+	struct kvm_hyp_memcache *host_mc = arg;
+
+	if (!host_mc->nr_pages)
+		return NULL;
+
+	/*
+	 * The host still owns the pages in its memcache, so we need to go
+	 * through a full host-to-hyp donation cycle to change it. Fortunately,
+	 * __pkvm_host_donate_hyp() takes care of races for us, so if it
+	 * succeeds we're good to go.
+	 */
+	if (__pkvm_host_donate_hyp(hyp_phys_to_pfn(host_mc->head), 1))
+		return NULL;
+
+	return pop_hyp_memcache(host_mc, hyp_phys_to_virt);
+}
+
+/* Refill our local memcache by popping pages from the one provided by the host. */
+int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
+		    struct kvm_hyp_memcache *host_mc)
+{
+	struct kvm_hyp_memcache tmp = *host_mc;
+	int ret;
+
+	ret = __topup_hyp_memcache(mc, min_pages, admit_host_page,
+				   hyp_virt_to_phys, &tmp);
+	*host_mc = tmp;
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0d19259454d8..e77686d54e9f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -750,6 +750,32 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
	}
 }

+static void hyp_mc_free_fn(void *addr, void *unused)
+{
+	free_page((unsigned long)addr);
+}
+
+static void *hyp_mc_alloc_fn(void *unused)
+{
+	return (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
+}
+
+void free_hyp_memcache(struct kvm_hyp_memcache *mc)
+{
+	if (is_protected_kvm_enabled())
+		__free_hyp_memcache(mc, hyp_mc_free_fn,
+				    kvm_host_va, NULL);
+}
+
+int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
+{
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
+				    kvm_host_pa, NULL);
+}
+
 /**
  * kvm_phys_addr_ioremap - map a device range to guest IPA
  *
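The trick behind struct kvm_hyp_memcache is that the "next" link of each free page is stored in the page itself, as a physical address, so the same list can be walked from either the host or the hypervisor via its own to_va()/to_pa() conversions. A self-contained userspace analog, where "physical" addresses are simply the virtual ones (identity conversion) and pages come from malloc(), looks like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Userspace analog of struct kvm_hyp_memcache: "phys" is just the VA here. */
    struct memcache {
    	uintptr_t head;
    	unsigned long nr_pages;
    };

    /* The next-"physical-address" link lives in the free page itself. */
    static void push(struct memcache *mc, uintptr_t *page)
    {
    	*page = mc->head;
    	mc->head = (uintptr_t)page;
    	mc->nr_pages++;
    }

    static uintptr_t *pop(struct memcache *mc)
    {
    	uintptr_t *page = (uintptr_t *)mc->head;

    	if (!mc->nr_pages)
    		return NULL;
    	mc->head = *page;
    	mc->nr_pages--;
    	return page;
    }

    int main(void)
    {
    	struct memcache mc = { 0, 0 };

    	for (int i = 0; i < 3; i++)
    		push(&mc, malloc(4096));

    	printf("cached %lu pages\n", mc.nr_pages);
    	free(pop(&mc));
    	printf("after pop: %lu pages\n", mc.nr_pages);
    	/* remaining pages are leaked for brevity */
    	return 0;
    }

Storing links by physical address is what lets refill_memcache() above pop a page from the host's list, donate it, and push it onto the hypervisor's list without any shared virtual mappings.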
From patchwork Thu May 19 13:40:58 2022
From: Will Deacon
Subject: [PATCH 23/89] KVM: arm64: Instantiate guest stage-2 page-tables at EL2
Date: Thu, 19 May 2022 14:40:58 +0100
Message-Id: <20220519134204.5379-24-will@kernel.org>

From: Quentin Perret

Extend the shadow initialisation at EL2 so that we instantiate a memory
pool and a full 'struct kvm_s2_mmu' structure for each VM, with a
stage-2 page-table entirely independent from the one managed by the
host at EL1. For now, the new page-table is unused as there is no way
for the host to map anything into it. Yet.

Signed-off-by: Quentin Perret

---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |   6 ++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c  | 127 ++++++++++++++++++++++++-
 2 files changed, 130 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index dc06b043bd83..f841e2b252cd 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -9,6 +9,9 @@

 #include

+#include
+#include
+
 /*
  * Holds the relevant data for maintaining the vcpu state completely at hyp.
  */
@@ -37,6 +40,9 @@ struct kvm_shadow_vm {
	size_t shadow_area_size;

	struct kvm_pgtable pgt;
+	struct kvm_pgtable_mm_ops mm_ops;
+	struct hyp_pool pool;
+	hyp_spinlock_t lock;

	/* Array of the shadow state per vcpu.
*/ struct kvm_shadow_vcpu_state shadow_vcpu_states[0]; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 707bd832145f..992ef4b668b4 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -25,6 +25,21 @@ struct host_kvm host_kvm; static struct hyp_pool host_s2_pool; +static DEFINE_PER_CPU(struct kvm_shadow_vm *, __current_vm); +#define current_vm (*this_cpu_ptr(&__current_vm)) + +static void guest_lock_component(struct kvm_shadow_vm *vm) +{ + hyp_spin_lock(&vm->lock); + current_vm = vm; +} + +static void guest_unlock_component(struct kvm_shadow_vm *vm) +{ + current_vm = NULL; + hyp_spin_unlock(&vm->lock); +} + static void host_lock_component(void) { hyp_spin_lock(&host_kvm.lock); @@ -140,18 +155,124 @@ int kvm_host_prepare_stage2(void *pgt_pool_base) return 0; } +static bool guest_stage2_force_pte_cb(u64 addr, u64 end, + enum kvm_pgtable_prot prot) +{ + return true; +} + +static void *guest_s2_zalloc_pages_exact(size_t size) +{ + void *addr = hyp_alloc_pages(&current_vm->pool, get_order(size)); + + WARN_ON(size != (PAGE_SIZE << get_order(size))); + hyp_split_page(hyp_virt_to_page(addr)); + + return addr; +} + +static void guest_s2_free_pages_exact(void *addr, unsigned long size) +{ + u8 order = get_order(size); + unsigned int i; + + for (i = 0; i < (1 << order); i++) + hyp_put_page(&current_vm->pool, addr + (i * PAGE_SIZE)); +} + +static void *guest_s2_zalloc_page(void *mc) +{ + struct hyp_page *p; + void *addr; + + addr = hyp_alloc_pages(&current_vm->pool, 0); + if (addr) + return addr; + + addr = pop_hyp_memcache(mc, hyp_phys_to_virt); + if (!addr) + return addr; + + memset(addr, 0, PAGE_SIZE); + p = hyp_virt_to_page(addr); + memset(p, 0, sizeof(*p)); + p->refcount = 1; + + return addr; +} + +static void guest_s2_get_page(void *addr) +{ + hyp_get_page(&current_vm->pool, addr); +} + +static void guest_s2_put_page(void *addr) +{ + hyp_put_page(&current_vm->pool, addr); +} + +static void clean_dcache_guest_page(void *va, size_t size) +{ + __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size); + hyp_fixmap_unmap(); +} + +static void invalidate_icache_guest_page(void *va, size_t size) +{ + __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size); + hyp_fixmap_unmap(); +} + int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) { - vm->pgt.pgd = pgd; + struct kvm_s2_mmu *mmu = &vm->kvm.arch.mmu; + unsigned long nr_pages; + int ret; + + nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; + ret = hyp_pool_init(&vm->pool, hyp_virt_to_pfn(pgd), nr_pages, 0); + if (ret) + return ret; + + hyp_spin_lock_init(&vm->lock); + vm->mm_ops = (struct kvm_pgtable_mm_ops) { + .zalloc_pages_exact = guest_s2_zalloc_pages_exact, + .free_pages_exact = guest_s2_free_pages_exact, + .zalloc_page = guest_s2_zalloc_page, + .phys_to_virt = hyp_phys_to_virt, + .virt_to_phys = hyp_virt_to_phys, + .page_count = hyp_page_count, + .get_page = guest_s2_get_page, + .put_page = guest_s2_put_page, + .dcache_clean_inval_poc = clean_dcache_guest_page, + .icache_inval_pou = invalidate_icache_guest_page, + }; + + guest_lock_component(vm); + ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0, + guest_stage2_force_pte_cb); + guest_unlock_component(vm); + if (ret) + return ret; + + vm->kvm.arch.mmu.pgd_phys = __hyp_pa(vm->pgt.pgd); + return 0; } void reclaim_guest_pages(struct kvm_shadow_vm *vm) { - unsigned long nr_pages; + unsigned long nr_pages, pfn; nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >>
PAGE_SHIFT; - WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(vm->pgt.pgd), nr_pages)); + pfn = hyp_virt_to_pfn(vm->pgt.pgd); + + guest_lock_component(vm); + kvm_pgtable_stage2_destroy(&vm->pgt); + vm->kvm.arch.mmu.pgd_phys = 0ULL; + guest_unlock_component(vm); + + WARN_ON(__pkvm_hyp_donate_host(pfn, nr_pages)); } int __pkvm_prot_finalize(void)
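In diff form, the allocation policy wired into the mm_ops above is easy to miss. A minimal standalone sketch of the two-level scheme, with stand-in helpers (pool_alloc() plays hyp_alloc_pages(), mc_pop() is the memcache pop from the earlier sketch): page-table pages come from the VM's private pool first, then fall back to pages the host donated through the vCPU memcache, and donated pages must be zeroed before the page-table walker may use them.

#include <string.h>

#define GUEST_PAGE_SIZE 4096			/* stand-in for PAGE_SIZE */

struct hyp_pool_sketch;				/* opaque stand-in */
void *pool_alloc(struct hyp_pool_sketch *pool);	/* stand-in allocator */
void *mc_pop(struct kvm_hyp_memcache *mc);	/* from the earlier sketch */

static void *guest_pt_page_alloc(struct hyp_pool_sketch *pool,
				 struct kvm_hyp_memcache *mc)
{
	void *addr = pool_alloc(pool);	/* 1. try the VM-owned pool */

	if (!addr) {
		addr = mc_pop(mc);	/* 2. fall back to host donations */
		if (addr)
			memset(addr, 0, GUEST_PAGE_SIZE); /* donations are dirty */
	}
	return addr;
}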
From patchwork Thu May 19 13:40:59 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855109
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 24/89] KVM: arm64: Return guest memory from EL2 via dedicated teardown memcache
Date: Thu, 19 May 2022 14:40:59 +0100
Message-Id: <20220519134204.5379-25-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

Rather than relying on the host to free the shadow VM pages explicitly on teardown, introduce a dedicated teardown memcache which allows the host to reclaim guest memory resources without having to keep track of all of the allocations made by EL2.

Signed-off-by: Quentin Perret
---
arch/arm64/include/asm/kvm_host.h | 6 +----- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 +- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 17 +++++++++++------ arch/arm64/kvm/hyp/nvhe/pkvm.c | 8 +++++++- arch/arm64/kvm/pkvm.c | 12 +----------- 5 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f4272ce76084..32ac88e60e6b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -161,11 +161,7 @@ struct kvm_arch_memory_slot { struct kvm_protected_vm { unsigned int shadow_handle; struct mutex shadow_lock; - - struct { - void *pgd; - void *shadow; - } hyp_donations; + struct kvm_hyp_memcache teardown_mc; }; struct kvm_arch { diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 36eea31a1c5f..663019992b67 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -76,7 +76,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); int hyp_pin_shared_mem(void *from, void *to); void hyp_unpin_shared_mem(void *from, void *to); -void reclaim_guest_pages(struct kvm_shadow_vm *vm); +void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc); int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages, struct kvm_hyp_memcache *host_mc); diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 992ef4b668b4..bcf84e157d4b 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -260,19 +260,24 @@ int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) return 0; } -void reclaim_guest_pages(struct kvm_shadow_vm *vm) +void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc) { - unsigned long nr_pages, pfn; - - nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; - pfn = hyp_virt_to_pfn(vm->pgt.pgd); + void *addr; + /* Dump all pgtable pages in the hyp_pool */ guest_lock_component(vm); kvm_pgtable_stage2_destroy(&vm->pgt); vm->kvm.arch.mmu.pgd_phys = 0ULL; guest_unlock_component(vm); - WARN_ON(__pkvm_hyp_donate_host(pfn, nr_pages)); + /* Drain the hyp_pool into the memcache */ + addr = hyp_alloc_pages(&vm->pool, 0); + while (addr) { + memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page)); + push_hyp_memcache(mc, addr, hyp_virt_to_phys); +
WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1)); + addr = hyp_alloc_pages(&vm->pool, 0); + } } int __pkvm_prot_finalize(void) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 114c5565de7d..a4a518b2a43b 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -546,8 +546,10 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, int __pkvm_teardown_shadow(unsigned int shadow_handle) { + struct kvm_hyp_memcache *mc; struct kvm_shadow_vm *vm; size_t shadow_size; + void *addr; int err; /* Lookup then remove entry from the shadow table. */ @@ -569,7 +571,8 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle) hyp_spin_unlock(&shadow_lock); /* Reclaim guest pages (including page-table pages) */ - reclaim_guest_pages(vm); + mc = &vm->host_kvm->arch.pkvm.teardown_mc; + reclaim_guest_pages(vm, mc); unpin_host_vcpus(vm->shadow_vcpu_states, vm->kvm.created_vcpus); /* Push the metadata pages to the teardown memcache */ @@ -577,6 +580,9 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle) hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1); memset(vm, 0, shadow_size); + for (addr = vm; addr < (void *)vm + shadow_size; addr += PAGE_SIZE) + push_hyp_memcache(mc, addr, hyp_virt_to_phys); + unmap_donated_memory_noclear(vm, shadow_size); return 0; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index b4466b31d7c8..b174d6dfde36 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -160,8 +160,6 @@ static int __kvm_shadow_create(struct kvm *kvm) /* Store the shadow handle given by hyp for future call reference. */ kvm->arch.pkvm.shadow_handle = shadow_handle; - kvm->arch.pkvm.hyp_donations.pgd = pgd; - kvm->arch.pkvm.hyp_donations.shadow = shadow_addr; return 0; free_shadow: @@ -185,20 +183,12 @@ int kvm_shadow_create(struct kvm *kvm) void kvm_shadow_destroy(struct kvm *kvm) { - size_t pgd_sz, shadow_sz; - if (kvm->arch.pkvm.shadow_handle) WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow, kvm->arch.pkvm.shadow_handle)); kvm->arch.pkvm.shadow_handle = 0; - - shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE + - KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus); - pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr); - - free_pages_exact(kvm->arch.pkvm.hyp_donations.shadow, shadow_sz); - free_pages_exact(kvm->arch.pkvm.hyp_donations.pgd, pgd_sz); + free_hyp_memcache(&kvm->arch.pkvm.teardown_mc); } int kvm_init_pvm(struct kvm *kvm)
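On the host side, the contract this patch establishes can be summarised with a short sketch that reuses mc_pop() from the earlier memcache sketch: after __pkvm_teardown_shadow() returns, every page EL2 used for the VM (page-table pages and metadata alike) is linked into the teardown memcache, so the host frees them with one walk instead of tracking individual donations. free_one_page() is a stand-in for the kernel's free_page() on a host virtual address.

void *mc_pop(struct kvm_hyp_memcache *mc);	/* from the earlier sketch */

static void free_teardown_memcache(struct kvm_hyp_memcache *mc,
				   void (*free_one_page)(void *va))
{
	void *page;

	/* Every EL2 allocation for the VM ends up on this one list. */
	while ((page = mc_pop(mc)) != NULL)
		free_one_page(page);
}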
From patchwork Thu May 19 13:41:00 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855110
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 25/89] KVM: arm64: Add flags to struct hyp_page
Date: Thu, 19 May 2022 14:41:00 +0100
Message-Id: <20220519134204.5379-26-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

Add a 'flags' field to struct hyp_page, and reduce the size of the order field to u8 to avoid growing the struct size.
Signed-off-by: Quentin Perret
---
arch/arm64/kvm/hyp/include/nvhe/gfp.h | 6 +++--- arch/arm64/kvm/hyp/include/nvhe/memory.h | 3 ++- arch/arm64/kvm/hyp/nvhe/page_alloc.c | 14 +++++++------- 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h index 0a048dc06a7d..9330b13075f8 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h +++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h @@ -7,7 +7,7 @@ #include #include -#define HYP_NO_ORDER USHRT_MAX +#define HYP_NO_ORDER 0xff struct hyp_pool { /* @@ -19,11 +19,11 @@ struct hyp_pool { struct list_head free_area[MAX_ORDER]; phys_addr_t range_start; phys_addr_t range_end; - unsigned short max_order; + u8 max_order; }; /* Allocation */ -void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order); +void *hyp_alloc_pages(struct hyp_pool *pool, u8 order); void hyp_split_page(struct hyp_page *page); void hyp_get_page(struct hyp_pool *pool, void *addr); void hyp_put_page(struct hyp_pool *pool, void *addr); diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index e8a78b72aabf..29f2ebe306bc 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -9,7 +9,8 @@ struct hyp_page { unsigned short refcount; - unsigned short order; + u8 order; + u8 flags; }; extern u64 __hyp_vmemmap; diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c index 7804da89e55d..01976a58d850 100644 --- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c +++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c @@ -32,7 +32,7 @@ u64 __hyp_vmemmap; */ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool, struct hyp_page *p, - unsigned short order) + u8 order) { phys_addr_t addr = hyp_page_to_phys(p); @@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool, /* Find a buddy page currently available for allocation */ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool, struct hyp_page *p, - unsigned short order) + u8 order) { struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order); @@ -94,8 +94,8 @@ static void __hyp_attach_page(struct hyp_pool *pool, struct hyp_page *p) { phys_addr_t phys = hyp_page_to_phys(p); - unsigned short order = p->order; struct hyp_page *buddy; + u8 order = p->order; memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order); @@ -128,7 +128,7 @@ static void __hyp_attach_page(struct hyp_pool *pool, static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool, struct hyp_page *p, - unsigned short order) + u8 order) { struct hyp_page *buddy; @@ -182,7 +182,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr) void hyp_split_page(struct hyp_page *p) { - unsigned short order = p->order; + u8 order = p->order; unsigned int i; p->order = 0; @@ -194,10 +194,10 @@ void hyp_split_page(struct hyp_page *p) } } -void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order) +void *hyp_alloc_pages(struct hyp_pool *pool, u8 order) { - unsigned short i = order; struct hyp_page *p; + u8 i = order; hyp_spin_lock(&pool->lock);
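The size argument in the commit message can be checked mechanically: one struct hyp_page exists per page of memory in the hyp vmemmap, so keeping it at 4 bytes matters. A standalone illustration (field widths from the diff, the before/after struct names are invented for the comparison):

#include <stdint.h>
#include <assert.h>

struct hyp_page_before {	/* layout prior to this patch */
	uint16_t refcount;
	uint16_t order;
};				/* 4 bytes, no room for flags */

struct hyp_page_after {		/* layout after this patch */
	uint16_t refcount;
	uint8_t order;		/* HYP_NO_ORDER shrinks to 0xff */
	uint8_t flags;		/* gained without growing the struct */
};				/* still 4 bytes */

static_assert(sizeof(struct hyp_page_after) == sizeof(struct hyp_page_before),
	      "adding flags must not grow the hyp vmemmap entry");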
From patchwork Thu May 19 13:41:01 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855117
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 26/89] KVM: arm64: Provide a hypercall for the host to reclaim guest memory
Date: Thu, 19 May 2022 14:41:01 +0100
Message-Id: <20220519134204.5379-27-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

Implement a new
hypercall, __pkvm_host_reclaim_page(), so that the host at EL1 can reclaim pages that were previously donated to EL2. This allows EL2 to defer clearing of guest memory on teardown and allows preemption in the host after reclaiming each page. Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/include/nvhe/memory.h | 7 ++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 8 ++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 91 ++++++++++++++++++- 5 files changed, 107 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index f5030e88eb58..a68381699c40 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -64,6 +64,7 @@ enum __kvm_host_smccc_func { /* Hypercalls available after pKVM finalisation */ __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, + __KVM_HOST_SMCCC_FUNC___pkvm_host_reclaim_page, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 663019992b67..ecedc545e608 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -64,6 +64,7 @@ extern unsigned long hyp_nr_cpus; int __pkvm_prot_finalize(void); int __pkvm_host_share_hyp(u64 pfn); int __pkvm_host_unshare_hyp(u64 pfn); +int __pkvm_host_reclaim_page(u64 pfn); int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index 29f2ebe306bc..15b719fefc86 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -7,6 +7,13 @@ #include +/* + * Accesses to struct hyp_page flags are serialized by the host stage-2 + * page-table lock. 
+ */ +#define HOST_PAGE_NEED_POISONING BIT(0) +#define HOST_PAGE_PENDING_RECLAIM BIT(1) + struct hyp_page { unsigned short refcount; u8 order; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 8e51cdab00b7..629d306c91c0 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -155,6 +155,13 @@ static void handle___pkvm_host_unshare_hyp(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = __pkvm_host_unshare_hyp(pfn); } +static void handle___pkvm_host_reclaim_page(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(u64, pfn, host_ctxt, 1); + + cpu_reg(host_ctxt, 1) = __pkvm_host_reclaim_page(pfn); +} + static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(phys_addr_t, phys, host_ctxt, 1); @@ -211,6 +218,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_share_hyp), HANDLE_FUNC(__pkvm_host_unshare_hyp), + HANDLE_FUNC(__pkvm_host_reclaim_page), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index bcf84e157d4b..adb6a880c684 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -260,15 +260,51 @@ int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) return 0; } +static int reclaim_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, + enum kvm_pgtable_walk_flags flag, + void * const arg) +{ + kvm_pte_t pte = *ptep; + struct hyp_page *page; + + if (!kvm_pte_valid(pte)) + return 0; + + page = hyp_phys_to_page(kvm_pte_to_phys(pte)); + switch (pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte))) { + case PKVM_PAGE_OWNED: + page->flags |= HOST_PAGE_NEED_POISONING; + fallthrough; + case PKVM_PAGE_SHARED_BORROWED: + case PKVM_PAGE_SHARED_OWNED: + page->flags |= HOST_PAGE_PENDING_RECLAIM; + break; + default: + return -EPERM; + } + + return 0; +} + void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc) { + + struct kvm_pgtable_walker walker = { + .cb = reclaim_walker, + .flags = KVM_PGTABLE_WALK_LEAF + }; void *addr; - /* Dump all pgtable pages in the hyp_pool */ + host_lock_component(); guest_lock_component(vm); + + /* Reclaim all guest pages and dump all pgtable pages in the hyp_pool */ + BUG_ON(kvm_pgtable_walk(&vm->pgt, 0, BIT(vm->pgt.ia_bits), &walker)); kvm_pgtable_stage2_destroy(&vm->pgt); vm->kvm.arch.mmu.pgd_phys = 0ULL; + guest_unlock_component(vm); + host_unlock_component(); /* Drain the hyp_pool into the memcache */ addr = hyp_alloc_pages(&vm->pool, 0); @@ -1225,3 +1261,56 @@ void hyp_unpin_shared_mem(void *from, void *to) hyp_unlock_component(); host_unlock_component(); } + +static int hyp_zero_page(phys_addr_t phys) +{ + void *addr; + + addr = hyp_fixmap_map(phys); + if (!addr) + return -EINVAL; + memset(addr, 0, PAGE_SIZE); + __clean_dcache_guest_page(addr, PAGE_SIZE); + + return hyp_fixmap_unmap(); +} + +int __pkvm_host_reclaim_page(u64 pfn) +{ + u64 addr = hyp_pfn_to_phys(pfn); + struct hyp_page *page; + kvm_pte_t pte; + int ret; + + host_lock_component(); + + ret = kvm_pgtable_get_leaf(&host_kvm.pgt, addr, &pte, NULL); + if (ret) + goto unlock; + + if (host_get_page_state(pte) == PKVM_PAGE_OWNED) + goto unlock; + + page = hyp_phys_to_page(addr); + if (!(page->flags & HOST_PAGE_PENDING_RECLAIM)) { + ret = -EPERM; + goto unlock; + } + + if (page->flags & HOST_PAGE_NEED_POISONING) { + ret = hyp_zero_page(addr); + if (ret) + goto 
unlock; + page->flags &= ~HOST_PAGE_NEED_POISONING; + } + + ret = host_stage2_set_owner_locked(addr, PAGE_SIZE, PKVM_ID_HOST); + if (ret) + goto unlock; + page->flags &= ~HOST_PAGE_PENDING_RECLAIM; + +unlock: + host_unlock_component(); + + return ret; +}
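Taken together with the previous patch's flags field, the reclaim path forms a small per-page state machine: the teardown walker sets a pending-reclaim bit on every guest-mapped page and additionally a poisoning bit on pages the guest owned outright, then the host reclaims one page per hypercall, which keeps each EL2 critical section short and allows host preemption between pages. A standalone sketch with stand-in types and error values:

#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define NEED_POISONING	(1u << 0)	/* mirrors HOST_PAGE_NEED_POISONING */
#define PENDING_RECLAIM	(1u << 1)	/* mirrors HOST_PAGE_PENDING_RECLAIM */

static int reclaim_one_page(uint8_t *flags, void *page, size_t page_size)
{
	if (!(*flags & PENDING_RECLAIM))
		return -1;	/* -EPERM in the kernel: nothing to reclaim */

	if (*flags & NEED_POISONING) {
		memset(page, 0, page_size);	/* scrub old guest contents */
		*flags &= ~NEED_POISONING;
	}

	/* Ownership goes back to the host, then the flag is cleared. */
	*flags &= ~PENDING_RECLAIM;
	return 0;
}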
From patchwork Thu May 19 13:41:02 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855118
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 27/89] KVM: arm64: Extend memory sharing to allow host-to-guest transitions
Date: Thu, 19 May 2022 14:41:02 +0100
Message-Id: <20220519134204.5379-28-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

In preparation for handling guest stage-2 mappings at EL2, extend our memory protection mechanisms to support sharing of pages from the host to a specific guest.

Signed-off-by: Will Deacon
---
arch/arm64/include/asm/kvm_host.h | 8 +- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 100 ++++++++++++++++++ 3 files changed, 108 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 32ac88e60e6b..264b1d2c4eb6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -421,8 +421,12 @@ struct kvm_vcpu_arch { /* Don't run the guest (internal implementation need) */ bool pause; - /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; + union { + /* Cache some mmu pages needed inside spinlock regions */ + struct kvm_mmu_memory_cache mmu_page_cache; + /* Pages to be donated to pkvm/EL2 if it runs out */ + struct kvm_hyp_memcache pkvm_memcache; + }; /* Target CPU and feature flags */ int target; diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index ecedc545e608..364432276fe0 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -57,6 +57,7 @@ extern struct host_kvm host_kvm; enum pkvm_component_id { PKVM_ID_HOST, PKVM_ID_HYP, + PKVM_ID_GUEST, }; extern unsigned long hyp_nr_cpus; @@ -67,6 +68,7 @@ int __pkvm_host_unshare_hyp(u64 pfn); int __pkvm_host_reclaim_page(u64 pfn); int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index adb6a880c684..2e92be8bb463 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -579,11 +579,21 @@ struct pkvm_mem_transition { struct { u64 completer_addr; } hyp; + struct { + struct kvm_vcpu *vcpu; + } guest; }; } initiator; struct { enum pkvm_component_id id; + + union { + struct { + struct kvm_vcpu *vcpu; + phys_addr_t phys; + } guest; + }; } completer; }; @@ -847,6 +857,52 @@ static int hyp_complete_donation(u64 addr, return pkvm_create_mappings_locked(start, end, prot); } +static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte) +{ + if (!kvm_pte_valid(pte)) + return PKVM_NOPAGE; + + return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); +} + +static int __guest_check_page_state_range(struct kvm_vcpu *vcpu, u64 addr, + u64 size, enum pkvm_page_state state) +{ + struct kvm_shadow_vm *vm = get_shadow_vm(vcpu); + struct check_walk_data d = {
.desired = state, + .get_page_state = guest_get_page_state, + }; + + hyp_assert_lock_held(&vm->lock); + return check_page_state_range(&vm->pgt, addr, size, &d); +} + +static int guest_ack_share(u64 addr, const struct pkvm_mem_transition *tx, + enum kvm_pgtable_prot perms) +{ + u64 size = tx->nr_pages * PAGE_SIZE; + + if (perms != KVM_PGTABLE_PROT_RWX) + return -EPERM; + + return __guest_check_page_state_range(tx->completer.guest.vcpu, addr, + size, PKVM_NOPAGE); +} + +static int guest_complete_share(u64 addr, const struct pkvm_mem_transition *tx, + enum kvm_pgtable_prot perms) +{ + struct kvm_vcpu *vcpu = tx->completer.guest.vcpu; + struct kvm_shadow_vm *vm = get_shadow_vm(vcpu); + u64 size = tx->nr_pages * PAGE_SIZE; + enum kvm_pgtable_prot prot; + + prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED); + return kvm_pgtable_stage2_map(&vm->pgt, addr, size, tx->completer.guest.phys, + prot, &vcpu->arch.pkvm_memcache); +} + static int check_share(struct pkvm_mem_share *share) { const struct pkvm_mem_transition *tx = &share->tx; @@ -868,6 +924,9 @@ static int check_share(struct pkvm_mem_share *share) case PKVM_ID_HYP: ret = hyp_ack_share(completer_addr, tx, share->completer_prot); break; + case PKVM_ID_GUEST: + ret = guest_ack_share(completer_addr, tx, share->completer_prot); + break; default: ret = -EINVAL; } @@ -896,6 +955,9 @@ static int __do_share(struct pkvm_mem_share *share) case PKVM_ID_HYP: ret = hyp_complete_share(completer_addr, tx, share->completer_prot); break; + case PKVM_ID_GUEST: + ret = guest_complete_share(completer_addr, tx, share->completer_prot); + break; default: ret = -EINVAL; } @@ -1262,6 +1324,44 @@ void hyp_unpin_shared_mem(void *from, void *to) host_unlock_component(); } +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu) +{ + int ret; + u64 host_addr = hyp_pfn_to_phys(pfn); + u64 guest_addr = hyp_pfn_to_phys(gfn); + struct kvm_shadow_vm *vm = get_shadow_vm(vcpu); + struct pkvm_mem_share share = { + .tx = { + .nr_pages = 1, + .initiator = { + .id = PKVM_ID_HOST, + .addr = host_addr, + .host = { + .completer_addr = guest_addr, + }, + }, + .completer = { + .id = PKVM_ID_GUEST, + .guest = { + .vcpu = vcpu, + .phys = host_addr, + }, + }, + }, + .completer_prot = KVM_PGTABLE_PROT_RWX, + }; + + host_lock_component(); + guest_lock_component(vm); + + ret = do_share(&share); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} + static int hyp_zero_page(phys_addr_t phys) { void *addr;
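The new PKVM_ID_GUEST completer boils down to one legal page-state transition, checked on both sides before anything is committed. In isolation (standalone sketch; the kernel encodes these states in software bits of the stage-2 PTEs rather than a plain enum):

enum pkvm_page_state {
	PKVM_NOPAGE,			/* no mapping at all */
	PKVM_PAGE_OWNED,		/* exclusively owned */
	PKVM_PAGE_SHARED_OWNED,		/* shared, this side is the owner */
	PKVM_PAGE_SHARED_BORROWED,	/* shared, this side is the borrower */
};

/* Sharing is only legal from exactly one combination of states. */
static int share_host_page_with_guest(enum pkvm_page_state *host,
				      enum pkvm_page_state *guest)
{
	if (*host != PKVM_PAGE_OWNED || *guest != PKVM_NOPAGE)
		return -1;	/* -EPERM in the kernel; nothing is modified */

	*host = PKVM_PAGE_SHARED_OWNED;
	*guest = PKVM_PAGE_SHARED_BORROWED;
	return 0;
}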
From patchwork Thu May 19 13:41:03 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855119
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 28/89] KVM: arm64: Consolidate stage-2 init in one function
Date: Thu, 19 May 2022 14:41:03 +0100
Message-Id: <20220519134204.5379-29-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

The initialization of stage-2 page-tables of guests is currently split across two functions: kvm_init_stage2_mmu() and kvm_arm_setup_stage2(). That is presumably for historical reasons as kvm_arm_setup_stage2() originates from the (now defunct) KVM port for 32bit Arm. Simplify this code path by merging both functions into one, and while at it make sure to map the kvm struct into the hypervisor stage-1 early on to simplify the failure path.
Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_arm.h | 2 +- arch/arm64/include/asm/kvm_host.h | 2 -- arch/arm64/include/asm/kvm_mmu.h | 2 +- arch/arm64/kvm/arm.c | 27 +++++++++++++-------------- arch/arm64/kvm/mmu.c | 27 ++++++++++++++++++++++++++- arch/arm64/kvm/reset.c | 29 ----------------------------- 6 files changed, 41 insertions(+), 48 deletions(-) diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index 1767ded83888..98b60fa86853 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -134,7 +134,7 @@ * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are * not known to exist and will break with this configuration. * - * The VTCR_EL2 is configured per VM and is initialised in kvm_arm_setup_stage2(). + * The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu. * * Note that when using 4K pages, we concatenate two first level page tables * together. With 16K pages, we concatenate 16 first level page tables. diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 264b1d2c4eb6..3c6ed1f3887d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -865,8 +865,6 @@ int kvm_set_ipa_limit(void); #define __KVM_HAVE_ARCH_VM_ALLOC struct kvm *kvm_arch_alloc_vm(void); -int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type); - static inline bool kvm_vm_is_protected(struct kvm *kvm) { return false; diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 74735a864eee..eaac8dec97de 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -162,7 +162,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size, void free_hyp_pgds(void); void stage2_unmap_vm(struct kvm *kvm); -int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu); +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type); void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu); int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, phys_addr_t pa, unsigned long size, bool writable); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 6a32eaf768e5..694ba3792e9d 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -135,28 +135,24 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) { int ret; - ret = kvm_arm_setup_stage2(kvm, type); - if (ret) - return ret; - - ret = kvm_init_stage2_mmu(kvm, &kvm->arch.mmu); - if (ret) - return ret; - ret = kvm_share_hyp(kvm, kvm + 1); if (ret) - goto out_free_stage2_pgd; + return ret; ret = kvm_init_pvm(kvm); if (ret) - goto out_free_stage2_pgd; + goto err_unshare_kvm; if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) { ret = -ENOMEM; - goto out_free_stage2_pgd; + goto err_unshare_kvm; } cpumask_copy(kvm->arch.supported_cpus, cpu_possible_mask); + ret = kvm_init_stage2_mmu(kvm, &kvm->arch.mmu, type); + if (ret) + goto err_free_cpumask; + kvm_vgic_early_init(kvm); /* The maximum number of VCPUs is limited by the host's GIC model */ @@ -164,9 +160,12 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) set_default_spectre(kvm); - return ret; -out_free_stage2_pgd: - kvm_free_stage2_pgd(&kvm->arch.mmu); + return 0; + +err_free_cpumask: + free_cpumask_var(kvm->arch.supported_cpus); +err_unshare_kvm: + kvm_unshare_hyp(kvm, kvm + 1); return ret; } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index e77686d54e9f..0071f035dde8 100644 --- 
a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -618,15 +618,40 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = { * kvm_init_stage2_mmu - Initialise a S2 MMU structure * @kvm: The pointer to the KVM structure * @mmu: The pointer to the s2 MMU structure + * @type: The machine type of the virtual machine * * Allocates only the stage-2 HW PGD level table(s). * Note we don't need locking here as this is only called when the VM is * created, which can only be done once. */ -int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu) +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type) { + u32 kvm_ipa_limit = get_kvm_ipa_limit(); int cpu, err; struct kvm_pgtable *pgt; + u64 mmfr0, mmfr1; + u32 phys_shift; + + if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK) + return -EINVAL; + + phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type); + if (phys_shift) { + if (phys_shift > kvm_ipa_limit || + phys_shift < ARM64_MIN_PARANGE_BITS) + return -EINVAL; + } else { + phys_shift = KVM_PHYS_SHIFT; + if (phys_shift > kvm_ipa_limit) { + pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n", + current->comm); + return -EINVAL; + } + } + + mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); + mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); if (mmu->pgt != NULL) { kvm_err("kvm_arch already initialized?\n"); diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ecc40c8cd6f6..c07265ea72fd 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -370,32 +370,3 @@ int kvm_set_ipa_limit(void) return 0; } - -int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type) -{ - u64 mmfr0, mmfr1; - u32 phys_shift; - - if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK) - return -EINVAL; - - phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type); - if (phys_shift) { - if (phys_shift > kvm_ipa_limit || - phys_shift < ARM64_MIN_PARANGE_BITS) - return -EINVAL; - } else { - phys_shift = KVM_PHYS_SHIFT; - if (phys_shift > kvm_ipa_limit) { - pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n", - current->comm); - return -EINVAL; - } - } - - mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); - mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); - kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); - - return 0; -}
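The IPA-size validation that moves into kvm_init_stage2_mmu() is self-contained enough to show on its own: the low bits of the KVM_CREATE_VM 'type' argument encode the requested physical address size, which must fit between the architectural minimum and what the host CPU supports. A standalone sketch with stand-in constants (0xff, 32 and 40 mirror KVM_VM_TYPE_ARM_IPA_SIZE_MASK, ARM64_MIN_PARANGE_BITS and KVM_PHYS_SHIFT):

#define IPA_SIZE_MASK		0xffUL
#define MIN_PARANGE_BITS	32
#define DEFAULT_PHYS_SHIFT	40

static int validate_ipa_size(unsigned long type, unsigned int ipa_limit,
			     unsigned int *phys_shift)
{
	if (type & ~IPA_SIZE_MASK)
		return -1;	/* -EINVAL: unknown type bits set */

	*phys_shift = type & IPA_SIZE_MASK;
	if (*phys_shift) {
		if (*phys_shift > ipa_limit || *phys_shift < MIN_PARANGE_BITS)
			return -1;
	} else {
		/* Legacy VMMs pass 0 and get the 40-bit default. */
		*phys_shift = DEFAULT_PHYS_SHIFT;
		if (*phys_shift > ipa_limit)
			return -1;
	}
	return 0;
}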
From patchwork Thu May 19 13:41:04 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855120
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 29/89] KVM: arm64: Check for PTE validity when checking for executable/cacheable
Date: Thu, 19 May 2022 14:41:04 +0100
Message-Id: <20220519134204.5379-30-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

Don't blindly assume that the PTE is valid when checking whether it describes an executable or cacheable mapping. This makes sure that we don't issue CMOs for invalid mappings.
Suggested-by: Will Deacon
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/hyp/pgtable.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 1d300313009d..a6676fd14cf9 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -700,12 +700,12 @@ static void stage2_put_pte(kvm_pte_t *ptep, struct kvm_s2_mmu *mmu, u64 addr, static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte) { u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR; - return memattr == KVM_S2_MEMATTR(pgt, NORMAL); + return kvm_pte_valid(pte) && memattr == KVM_S2_MEMATTR(pgt, NORMAL); } static bool stage2_pte_executable(kvm_pte_t pte) { - return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN); + return kvm_pte_valid(pte) && !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN); } static bool stage2_leaf_mapping_allowed(u64 addr, u64 end, u32 level, @@ -750,8 +750,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level, /* Perform CMOs before installation of the guest stage-2 PTE */ if (mm_ops->dcache_clean_inval_poc && stage2_pte_cacheable(pgt, new)) mm_ops->dcache_clean_inval_poc(kvm_pte_follow(new, mm_ops), - granule); - + granule); if (mm_ops->icache_inval_pou && stage2_pte_executable(new)) mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule); @@ -1148,7 +1147,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops; kvm_pte_t pte = *ptep; - if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte)) + if (!stage2_pte_cacheable(pgt, pte)) return 0; if (mm_ops->dcache_clean_inval_poc)
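The pattern in this patch is worth naming: fold the validity test into the predicate itself so that every caller inherits it, rather than trusting each call site to remember a separate !kvm_pte_valid() check. A standalone sketch with stand-in bit positions (arm64's real PTE layout differs; only the shape of the check is the point):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t kvm_pte_t;
#define PTE_VALID	(1ULL << 0)	/* matches KVM_PTE_VALID */
#define PTE_XN		(1ULL << 54)	/* stand-in execute-never bit */

static bool pte_valid(kvm_pte_t pte)
{
	return pte & PTE_VALID;
}

static bool pte_executable(kvm_pte_t pte)
{
	/* An invalid PTE maps nothing, so it is never executable. */
	return pte_valid(pte) && !(pte & PTE_XN);
}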
From patchwork Thu May 19 13:41:05 2022
Subject: [PATCH 30/89] KVM: arm64: Do not allow memslot changes after first VM run under pKVM
From: Will Deacon
Date: Thu, 19 May 2022 14:41:05 +0100
Message-Id: <20220519134204.5379-31-will@kernel.org>

From: Fuad Tabba

As the guest stage-2 page-tables will soon be managed entirely by EL2
when pKVM is enabled, guest memory will be pinned and the MMU notifiers
in the host will be unable to reconfigure mappings at EL2 other than
destroying the guest and reclaiming all of the memory.

Forbid memslot move/delete operations for VMs that have run under pKVM,
returning -EPERM to userspace if such an operation is requested.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0071f035dde8..67cac3340d49 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1679,6 +1679,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	hva_t hva, reg_end;
 	int ret = 0;
 
+	/* In protected mode, cannot modify memslots once a VM has run. */
+	if (is_protected_kvm_enabled() &&
+	    (change == KVM_MR_DELETE || change == KVM_MR_MOVE) &&
+	    kvm->arch.pkvm.shadow_handle) {
+		return -EPERM;
+	}
+
 	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
 	    change != KVM_MR_FLAGS_ONLY)
 		return 0;
@@ -1755,6 +1762,10 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 	gpa_t gpa = slot->base_gfn << PAGE_SHIFT;
 	phys_addr_t size = slot->npages << PAGE_SHIFT;
 
+	/* Stage-2 is managed by hyp in protected mode. */
+	if (is_protected_kvm_enabled())
+		return;
+
 	write_lock(&kvm->mmu_lock);
 	unmap_stage2_range(&kvm->arch.mmu, gpa, size);
 	write_unlock(&kvm->mmu_lock);
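From the VMM's point of view the restriction surfaces as a failing
KVM_SET_USER_MEMORY_REGION ioctl. A hedged sketch of what userspace
would observe once the VM has entered the guest (error handling
trimmed; vm_fd is assumed to refer to a protected VM):

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Deleting a memslot is requested by setting memory_size to 0. */
	static int try_delete_memslot(int vm_fd, unsigned int slot)
	{
		struct kvm_userspace_memory_region region;

		memset(&region, 0, sizeof(region));
		region.slot = slot;

		if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
			if (errno == EPERM)
				fprintf(stderr, "memslot %u frozen after first run\n", slot);
			return -errno;
		}
		return 0;
	}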
From patchwork Thu May 19 13:41:06 2022
Subject: [PATCH 31/89] KVM: arm64: Disallow dirty logging and RO memslots with pKVM
From: Will Deacon
Date: Thu, 19 May 2022 14:41:06 +0100
Message-Id: <20220519134204.5379-32-will@kernel.org>
From: Quentin Perret

The current implementation of pKVM doesn't support dirty logging or
read-only memslots. Although support for these features is desirable,
it will require future work, so let's cleanly report the limitations
to userspace by failing the ioctls until then.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/mmu.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 67cac3340d49..df92b5f7ac63 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1679,11 +1679,17 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	hva_t hva, reg_end;
 	int ret = 0;
 
-	/* In protected mode, cannot modify memslots once a VM has run. */
-	if (is_protected_kvm_enabled() &&
-	    (change == KVM_MR_DELETE || change == KVM_MR_MOVE) &&
-	    kvm->arch.pkvm.shadow_handle) {
-		return -EPERM;
+	if (is_protected_kvm_enabled()) {
+		/* In protected mode, cannot modify memslots once a VM has run. */
+		if ((change == KVM_MR_DELETE || change == KVM_MR_MOVE) &&
+		    kvm->arch.pkvm.shadow_handle) {
+			return -EPERM;
+		}
+
+		if (new &&
+		    new->flags & (KVM_MEM_LOG_DIRTY_PAGES | KVM_MEM_READONLY)) {
+			return -EPERM;
+		}
 	}
 
 	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
From patchwork Thu May 19 13:41:07 2022
Subject: [PATCH 32/89] KVM: arm64: Use the shadow vCPU structure in handle___kvm_vcpu_run()
From: Will Deacon
Date: Thu, 19 May 2022 14:41:07 +0100
Message-Id: <20220519134204.5379-33-will@kernel.org>

As a stepping stone towards deprivileging the host's access to the
guest's vCPU structures, introduce some naive flush/sync routines to
copy most of the host vCPU into the shadow vCPU on vCPU run and back
again on return to EL1.

This allows us to run using the shadow structure when KVM is
initialised in protected mode.
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  4 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 82 +++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 27 +++++++++
 3 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index f841e2b252cd..e600dc4965c4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -63,5 +63,9 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva,
 		       size_t shadow_size, unsigned long pgd_hva);
 int __pkvm_teardown_shadow(unsigned int shadow_handle);
 
+struct kvm_shadow_vcpu_state *
+pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx);
+void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 629d306c91c0..7a0d95e28e00 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -22,11 +22,89 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
+static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
+	struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu;
+
+	shadow_vcpu->arch.ctxt = host_vcpu->arch.ctxt;
+
+	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
+	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+
+	shadow_vcpu->arch.hw_mmu = host_vcpu->arch.hw_mmu;
+
+	shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2;
+	shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
+	shadow_vcpu->arch.cptr_el2 = host_vcpu->arch.cptr_el2;
+
+	shadow_vcpu->arch.flags = host_vcpu->arch.flags;
+
+	shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
+	shadow_vcpu->arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state;
+
+	shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
+
+	shadow_vcpu->arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3;
+}
+
+static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
+	struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu;
+	struct vgic_v3_cpu_if *shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3;
+	struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
+	unsigned int i;
+
+	host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt;
+
+	host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2;
+	host_vcpu->arch.cptr_el2 = shadow_vcpu->arch.cptr_el2;
+
+	host_vcpu->arch.fault = shadow_vcpu->arch.fault;
+
+	host_vcpu->arch.flags = shadow_vcpu->arch.flags;
+
+	host_cpu_if->vgic_hcr = shadow_cpu_if->vgic_hcr;
+	for (i = 0; i < shadow_cpu_if->used_lrs; ++i)
+		host_cpu_if->vgic_lr[i] = shadow_cpu_if->vgic_lr[i];
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
-	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
+	int ret;
+
+	host_vcpu = kern_hyp_va(host_vcpu);
+
+	if (unlikely(is_protected_kvm_enabled())) {
+		struct kvm_shadow_vcpu_state *shadow_state;
+		struct kvm_vcpu *shadow_vcpu;
+		struct kvm *host_kvm;
+		unsigned int handle;
+
+		host_kvm = kern_hyp_va(host_vcpu->kvm);
+		handle = host_kvm->arch.pkvm.shadow_handle;
+		shadow_state = pkvm_load_shadow_vcpu_state(handle,
+							   host_vcpu->vcpu_idx);
+		if (!shadow_state) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		shadow_vcpu = &shadow_state->shadow_vcpu;
+		flush_shadow_state(shadow_state);
+
+		ret = __kvm_vcpu_run(shadow_vcpu);
+
+		sync_shadow_state(shadow_state);
+		pkvm_put_shadow_vcpu_state(shadow_state);
+	} else {
+		ret = __kvm_vcpu_run(host_vcpu);
+	}
 
-	cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
+out:
+	cpu_reg(host_ctxt, 1) = ret;
 }
 
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index a4a518b2a43b..9cd2bf75ed88 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -244,6 +244,33 @@ static struct kvm_shadow_vm *find_shadow_by_handle(unsigned int shadow_handle)
 	return shadow_table[shadow_idx];
 }
 
+struct kvm_shadow_vcpu_state *
+pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx)
+{
+	struct kvm_shadow_vcpu_state *shadow_state = NULL;
+	struct kvm_shadow_vm *vm;
+
+	hyp_spin_lock(&shadow_lock);
+	vm = find_shadow_by_handle(shadow_handle);
+	if (!vm || vm->kvm.created_vcpus <= vcpu_idx)
+		goto unlock;
+
+	shadow_state = &vm->shadow_vcpu_states[vcpu_idx];
+	hyp_page_ref_inc(hyp_virt_to_page(vm));
+unlock:
+	hyp_spin_unlock(&shadow_lock);
+	return shadow_state;
+}
+
+void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_shadow_vm *vm = shadow_state->shadow_vm;
+
+	hyp_spin_lock(&shadow_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(vm));
+	hyp_spin_unlock(&shadow_lock);
+}
+
 static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 			     unsigned int nr_vcpus)
 {
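The load/put pair above is a classic lookup-under-lock with a reference
taken before the lock is dropped, so the VM cannot be torn down while a
vCPU state is in use. A hedged, stand-alone sketch of the same shape,
using illustrative types and plain C11 atomics rather than the
hypervisor's page refcounts (lookup_vm, lock and unlock are assumed to
exist elsewhere):

	#include <stdatomic.h>
	#include <stddef.h>

	struct vcpu_state { int id; };

	struct vm {
		atomic_int refcount;
		struct vcpu_state *vcpu_states;
		unsigned int nr_vcpus;
	};

	extern struct vm *lookup_vm(unsigned int handle);
	extern void lock(void);
	extern void unlock(void);

	static struct vcpu_state *load_vcpu_state(unsigned int handle,
						  unsigned int idx)
	{
		struct vcpu_state *state = NULL;
		struct vm *vm;

		lock();
		vm = lookup_vm(handle);
		if (vm && idx < vm->nr_vcpus) {
			/* Pin the VM before publishing a pointer into it. */
			atomic_fetch_add(&vm->refcount, 1);
			state = &vm->vcpu_states[idx];
		}
		unlock();
		return state;
	}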
From patchwork Thu May 19 13:41:08 2022
Subject: [PATCH 33/89] KVM: arm64: Handle guest stage-2 page-tables entirely at EL2
From: Will Deacon
Date: Thu, 19 May 2022 14:41:08 +0100
Message-Id: <20220519134204.5379-34-will@kernel.org>

Now that EL2 is able to manage guest stage-2 page-tables, avoid
allocating a separate MMU structure in the host and instead introduce
a new fault handler which responds to guest stage-2 faults by sharing
GUP-pinned pages with the guest via a hypercall. These pages are
recovered (and unpinned) on guest teardown via the page reclaim
hypercall.
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_asm.h   |   1 +
 arch/arm64/include/asm/kvm_host.h  |   6 ++
 arch/arm64/kvm/arm.c               |  10 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  49 ++++++++++-
 arch/arm64/kvm/mmu.c               | 137 +++++++++++++++++++++++++++--
 arch/arm64/kvm/pkvm.c              |  17 ++++
 6 files changed, 212 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a68381699c40..931a351da3f2 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -65,6 +65,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_reclaim_page,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_map_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3c6ed1f3887d..9252841850e4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -158,10 +158,16 @@ struct kvm_s2_mmu {
 struct kvm_arch_memory_slot {
 };
 
+struct kvm_pinned_page {
+	struct list_head link;
+	struct page *page;
+};
+
 struct kvm_protected_vm {
 	unsigned int shadow_handle;
 	struct mutex shadow_lock;
 	struct kvm_hyp_memcache teardown_mc;
+	struct list_head pinned_pages;
 };
 
 struct kvm_arch {

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 694ba3792e9d..5b41551a978b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -358,7 +358,11 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);
 
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (is_protected_kvm_enabled())
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
+	else
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
@@ -385,6 +389,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	mmu = vcpu->arch.hw_mmu;
 	last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
 
@@ -402,6 +409,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_id;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 7a0d95e28e00..245d267064b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -32,8 +32,6 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
 	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
 
-	shadow_vcpu->arch.hw_mmu = host_vcpu->arch.hw_mmu;
-
 	shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2;
 	shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
 	shadow_vcpu->arch.cptr_el2 = host_vcpu->arch.cptr_el2;
@@ -107,6 +105,52 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static int pkvm_refill_memcache(struct kvm_vcpu *shadow_vcpu,
+				struct kvm_vcpu *host_vcpu)
+{
+	struct kvm_shadow_vcpu_state *shadow_vcpu_state = get_shadow_state(shadow_vcpu);
+	u64 nr_pages = VTCR_EL2_LVLS(shadow_vcpu_state->shadow_vm->kvm.arch.vtcr) - 1;
+
+	return refill_memcache(&shadow_vcpu->arch.pkvm_memcache, nr_pages,
+			       &host_vcpu->arch.pkvm_memcache);
+}
+static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 3);
+	struct kvm_shadow_vcpu_state *shadow_state;
+	struct kvm_vcpu *shadow_vcpu;
+	struct kvm *host_kvm;
+	unsigned int handle;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	host_vcpu = kern_hyp_va(host_vcpu);
+	host_kvm = kern_hyp_va(host_vcpu->kvm);
+	handle = host_kvm->arch.pkvm.shadow_handle;
+	shadow_state = pkvm_load_shadow_vcpu_state(handle, host_vcpu->vcpu_idx);
+	if (!shadow_state)
+		goto out;
+
+	host_vcpu = shadow_state->host_vcpu;
+	shadow_vcpu = &shadow_state->shadow_vcpu;
+
+	/* Topup shadow memcache with the host's */
+	ret = pkvm_refill_memcache(shadow_vcpu, host_vcpu);
+	if (ret)
+		goto out_put_state;
+
+	ret = __pkvm_host_share_guest(pfn, gfn, shadow_vcpu);
+out_put_state:
+	pkvm_put_shadow_vcpu_state(shadow_state);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -297,6 +341,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_reclaim_page),
+	HANDLE_FUNC(__pkvm_host_map_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index df92b5f7ac63..c74c431588a3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -190,6 +190,22 @@ static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
 	__unmap_stage2_range(mmu, start, size, true);
 }
 
+static void pkvm_stage2_flush(struct kvm *kvm)
+{
+	struct kvm_pinned_page *ppage;
+
+	/*
+	 * Contrary to stage2_apply_range(), we don't need to check
+	 * whether the VM is being torn down, as this is always called
+	 * from a vcpu thread, and the list is only ever freed on VM
+	 * destroy (which only occurs when all vcpu are gone).
+	 */
+	list_for_each_entry(ppage, &kvm->arch.pkvm.pinned_pages, link) {
+		__clean_dcache_guest_page(page_address(ppage->page), PAGE_SIZE);
+		cond_resched_rwlock_write(&kvm->mmu_lock);
+	}
+}
+
 static void stage2_flush_memslot(struct kvm *kvm,
 				 struct kvm_memory_slot *memslot)
 {
@@ -215,9 +231,13 @@ static void stage2_flush_vm(struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 	write_lock(&kvm->mmu_lock);
 
-	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, bkt, slots)
-		stage2_flush_memslot(kvm, memslot);
+	if (!is_protected_kvm_enabled()) {
+		slots = kvm_memslots(kvm);
+		kvm_for_each_memslot(memslot, bkt, slots)
+			stage2_flush_memslot(kvm, memslot);
+	} else if (!kvm_vm_is_protected(kvm)) {
+		pkvm_stage2_flush(kvm);
+	}
 
 	write_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
@@ -636,7 +656,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
 		return -EINVAL;
 
 	phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
-	if (phys_shift) {
+	if (is_protected_kvm_enabled()) {
+		phys_shift = kvm_ipa_limit;
+	} else if (phys_shift) {
 		if (phys_shift > kvm_ipa_limit ||
 		    phys_shift < ARM64_MIN_PARANGE_BITS)
 			return -EINVAL;
@@ -652,6 +674,11 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 	kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
+	INIT_LIST_HEAD(&kvm->arch.pkvm.pinned_pages);
+	mmu->arch = &kvm->arch;
+
+	if (is_protected_kvm_enabled())
+		return 0;
 
 	if (mmu->pgt != NULL) {
 		kvm_err("kvm_arch already initialized?\n");
@@ -760,6 +787,9 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 	struct kvm_pgtable *pgt = NULL;
 
+	if (is_protected_kvm_enabled())
+		return;
+
 	write_lock(&kvm->mmu_lock);
 	pgt = mmu->pgt;
 	if (pgt) {
@@ -1113,6 +1143,99 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	return 0;
 }
 
+static int pkvm_host_map_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu)
+{
+	int ret = kvm_call_hyp_nvhe(__pkvm_host_map_guest, pfn, gfn, vcpu);
+
+	/*
+	 * Getting -EPERM at this point implies that the pfn has already been
+	 * mapped. This should only ever happen when two vCPUs faulted on the
+	 * same page, and the current one lost the race to do the mapping.
+	 */
+	return (ret == -EPERM) ? -EAGAIN : ret;
+}
+
+static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			  unsigned long hva)
+{
+	struct kvm_hyp_memcache *hyp_memcache = &vcpu->arch.pkvm_memcache;
+	struct mm_struct *mm = current->mm;
+	unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE;
+	struct kvm_pinned_page *ppage;
+	struct kvm *kvm = vcpu->kvm;
+	struct page *page;
+	u64 pfn;
+	int ret;
+
+	ret = topup_hyp_memcache(hyp_memcache, kvm_mmu_cache_min_pages(kvm));
+	if (ret)
+		return -ENOMEM;
+
+	ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT);
+	if (!ppage)
+		return -ENOMEM;
+
+	ret = account_locked_vm(mm, 1, true);
+	if (ret)
+		goto free_ppage;
+
+	mmap_read_lock(mm);
+	ret = pin_user_pages(hva, 1, flags, &page, NULL);
+	mmap_read_unlock(mm);
+
+	if (ret == -EHWPOISON) {
+		kvm_send_hwpoison_signal(hva, PAGE_SHIFT);
+		ret = 0;
+		goto dec_account;
+	} else if (ret != 1) {
+		ret = -EFAULT;
+		goto dec_account;
+	} else if (!PageSwapBacked(page)) {
+		/*
+		 * We really can't deal with page-cache pages returned by GUP
+		 * because (a) we may trigger writeback of a page for which we
+		 * no longer have access and (b) page_mkclean() won't find the
+		 * stage-2 mapping in the rmap so we can get out-of-whack with
+		 * the filesystem when marking the page dirty during unpinning
+		 * (see cc5095747edf ("ext4: don't BUG if someone dirty pages
+		 * without asking ext4 first")).
+		 *
+		 * Ideally we'd just restrict ourselves to anonymous pages, but
+		 * we also want to allow memfd (i.e. shmem) pages, so check for
+		 * pages backed by swap in the knowledge that the GUP pin will
+		 * prevent try_to_unmap() from succeeding.
+		 */
+		ret = -EIO;
+		goto dec_account;
+	}
+
+	write_lock(&kvm->mmu_lock);
+	pfn = page_to_pfn(page);
+	ret = pkvm_host_map_guest(pfn, fault_ipa >> PAGE_SHIFT, vcpu);
+	if (ret) {
+		if (ret == -EAGAIN)
+			ret = 0;
+		goto unpin;
+	}
+
+	ppage->page = page;
+	INIT_LIST_HEAD(&ppage->link);
+	list_add(&ppage->link, &kvm->arch.pkvm.pinned_pages);
+	write_unlock(&kvm->mmu_lock);
+
+	return 0;
+
+unpin:
+	write_unlock(&kvm->mmu_lock);
+	unpin_user_pages(&page, 1);
dec_account:
+	account_locked_vm(mm, 1, false);
free_ppage:
+	kfree(ppage);
+
+	return ret;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
			  struct kvm_memory_slot *memslot, unsigned long hva,
			  unsigned long fault_status)
@@ -1470,7 +1593,11 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 	}
 
-	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
+	if (is_protected_kvm_enabled())
+		ret = pkvm_mem_abort(vcpu, fault_ipa, hva);
+	else
+		ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
+
 	if (ret == 0)
 		ret = 1;
 out:

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b174d6dfde36..40e5490ef453 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 #include
 #include
@@ -183,12 +184,28 @@ int kvm_shadow_create(struct kvm *kvm)
 
 void kvm_shadow_destroy(struct kvm *kvm)
 {
+	struct kvm_pinned_page *ppage, *tmp;
+	struct mm_struct *mm = current->mm;
+	struct list_head *ppages;
+
 	if (kvm->arch.pkvm.shadow_handle)
 		WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow,
 					  kvm->arch.pkvm.shadow_handle));
 
 	kvm->arch.pkvm.shadow_handle = 0;
 	free_hyp_memcache(&kvm->arch.pkvm.teardown_mc);
+
+	ppages = &kvm->arch.pkvm.pinned_pages;
+	list_for_each_entry_safe(ppage, tmp, ppages, link) {
+		WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_reclaim_page,
+					  page_to_pfn(ppage->page)));
+		cond_resched();
+
+		account_locked_vm(mm, 1, false);
+		unpin_user_pages_dirty_lock(&ppage->page, 1, true);
+		list_del(&ppage->link);
+		kfree(ppage);
+	}
 }
 
 int kvm_init_pvm(struct kvm *kvm)
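The fault path in this patch follows a common pin-then-publish shape:
charge the locked-memory accounting, take a long-term GUP pin, and only
then ask the hypervisor to map the page, unwinding in reverse order on
failure. A hedged sketch of that ordering in isolation, with
hypothetical helper names standing in for the hypercall and accounting
primitives:

	struct page;

	/* Hypothetical helpers standing in for the real primitives. */
	extern int charge_locked_memory(void);
	extern void uncharge_locked_memory(void);
	extern int pin_page(unsigned long hva, struct page **page);
	extern void unpin_page(struct page *page);
	extern int hyp_share_page(struct page *page, unsigned long gfn);

	static int share_one_page(unsigned long hva, unsigned long gfn)
	{
		struct page *page;
		int ret;

		ret = charge_locked_memory();
		if (ret)
			return ret;

		ret = pin_page(hva, &page);	/* long-term pin */
		if (ret)
			goto uncharge;

		ret = hyp_share_page(page, gfn);	/* publish last */
		if (ret)
			goto unpin;

		return 0;

	unpin:
		unpin_page(page);
	uncharge:
		uncharge_locked_memory();
		return ret;
	}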
From patchwork Thu May 19 13:41:09 2022
Subject: [PATCH 34/89] KVM: arm64: Don't access kvm_arm_hyp_percpu_base at EL1
From: Will Deacon
Date: Thu, 19 May 2022 14:41:09 +0100
Message-Id: <20220519134204.5379-35-will@kernel.org>

From: Quentin Perret

The host KVM PMU code can currently index kvm_arm_hyp_percpu_base[]
through this_cpu_ptr_hyp_sym(), but will not actually dereference that
pointer when protected KVM is enabled. In preparation for making
kvm_arm_hyp_percpu_base[] inaccessible to the host, let's make sure
the indexing into hyp per-cpu pages is also done after the static key
check to avoid spurious accesses to EL2-private data from EL1.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/pmu.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 03a6c1f4a09a..a8878fd8b696 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -31,9 +31,13 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
  */
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_host_data *ctx;
 
-	if (!kvm_arm_support_pmu_v3() || !ctx || !kvm_pmu_switch_needed(attr))
+	if (!kvm_arm_support_pmu_v3())
+		return;
+
+	ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	if (!ctx || !kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
@@ -47,9 +51,13 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
  */
 void kvm_clr_pmu_events(u32 clr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	struct kvm_host_data *ctx;
+
+	if (!kvm_arm_support_pmu_v3())
+		return;
 
-	if (!kvm_arm_support_pmu_v3() || !ctx)
+	ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
+	if (!ctx)
 		return;
 
 	ctx->pmu_events.events_host &= ~clr;
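The point of the reshuffle is subtle: merely computing the per-CPU
pointer reads the base array, so the feature check has to come first.
A minimal illustration of the hazard and the fix, with invented names
rather than the kernel's API:

	extern unsigned long secret_base[];	/* may become unmapped */
	extern int feature_enabled(void);

	/* Racy: the load from secret_base[] happens before the guard. */
	static unsigned long bad_lookup(int cpu)
	{
		unsigned long base = secret_base[cpu];	/* faults if unmapped */

		if (!feature_enabled())
			return 0;
		return base;
	}

	/* Safe: no access to the protected array unless the feature is on. */
	static unsigned long good_lookup(int cpu)
	{
		if (!feature_enabled())
			return 0;
		return secret_base[cpu];
	}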
From patchwork Thu May 19 13:41:10 2022
Subject: [PATCH 35/89] KVM: arm64: Unmap kvm_arm_hyp_percpu_base from the host
From: Will Deacon
Date: Thu, 19 May 2022 14:41:10 +0100
Message-Id: <20220519134204.5379-36-will@kernel.org>

From: Quentin Perret

In pKVM mode, we can't trust the host not to mess with the hypervisor
per-cpu offsets, so let's move the array containing them to the nVHE
code.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h  | 4 ++--
 arch/arm64/kernel/image-vars.h    | 3 ---
 arch/arm64/kvm/arm.c              | 9 ++++-----
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++
 4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 931a351da3f2..35b9d590bb74 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -110,7 +110,7 @@ enum __kvm_host_smccc_func {
 #define per_cpu_ptr_nvhe_sym(sym, cpu)					\
 	({								\
 		unsigned long base, off;				\
-		base = kvm_arm_hyp_percpu_base[cpu];			\
+		base = kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu];	\
 		off = (unsigned long)&CHOOSE_NVHE_SYM(sym) -		\
 		      (unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start);	\
 		base ? (typeof(CHOOSE_NVHE_SYM(sym))*)(base + off) : NULL; \
@@ -198,7 +198,7 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
 #define __kvm_hyp_init		CHOOSE_NVHE_SYM(__kvm_hyp_init)
 #define __kvm_hyp_vector	CHOOSE_HYP_SYM(__kvm_hyp_vector)
 
-extern unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
+extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[];
 DECLARE_KVM_NVHE_SYM(__per_cpu_start);
 DECLARE_KVM_NVHE_SYM(__per_cpu_end);

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 4e3b6d618ac1..37a2d833851a 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -102,9 +102,6 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities);
 KVM_NVHE_ALIAS(__start___kvm_ex_table);
 KVM_NVHE_ALIAS(__stop___kvm_ex_table);
 
-/* Array containing bases of nVHE per-CPU memory regions. */
-KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
-
 /* PMU available static key */
 #ifdef CONFIG_HW_PERF_EVENTS
 KVM_NVHE_ALIAS(kvm_arm_pmu_available);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5b41551a978b..c7b362db692f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -51,7 +51,6 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
-unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 static bool vgic_present;
@@ -1800,13 +1799,13 @@ static void teardown_hyp_mode(void)
 	free_hyp_pgds();
 	for_each_possible_cpu(cpu) {
 		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
-		free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order());
+		free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
 	}
 }
 
 static int do_pkvm_init(u32 hyp_va_bits)
 {
-	void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);
+	void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
 	int ret;
 
 	preempt_disable();
@@ -1907,7 +1906,7 @@ static int init_hyp_mode(void)
 		page_addr = page_address(page);
 		memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start),
 		       nvhe_percpu_size());
-		kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
+		kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu] = (unsigned long)page_addr;
 	}
 
 	/*
@@ -1968,7 +1967,7 @@ static int init_hyp_mode(void)
 	}
 
 	for_each_possible_cpu(cpu) {
-		char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
+		char *percpu_begin = (char *)kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu];
 		char *percpu_end = percpu_begin + nvhe_percpu_size();
 
 		/* Map Hyp percpu pages */

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 9f54833af400..04d194583f1e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -23,6 +23,8 @@ u64 cpu_logical_map(unsigned int cpu)
 	return hyp_cpu_logical_map[cpu];
 }
 
+unsigned long __ro_after_init kvm_arm_hyp_percpu_base[NR_CPUS];
+
 unsigned long __hyp_per_cpu_offset(unsigned int cpu)
 {
 	unsigned long *cpu_base_array;
From patchwork Thu May 19 13:41:11 2022
Subject: [PATCH 36/89] KVM: arm64: Maintain a copy of 'kvm_arm_vmid_bits' at EL2
From: Will Deacon
Date: Thu, 19 May 2022 14:41:11 +0100
Message-Id: <20220519134204.5379-37-will@kernel.org>

Sharing 'kvm_arm_vmid_bits' between EL1 and EL2 allows the host to
modify the variable arbitrarily, potentially leading to all sorts of
shenanigans as this
is used to configure the VTTBR register for the guest stage-2.

In preparation for unmapping host sections entirely from EL2, maintain
a copy of 'kvm_arm_vmid_bits' and initialise it from the host value
while it is still trusted.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_hyp.h | 2 ++
 arch/arm64/kernel/image-vars.h   | 3 ---
 arch/arm64/kvm/arm.c             | 1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c   | 3 +++
 4 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index fd99cf09972d..6797eafe7890 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -124,4 +124,6 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
 extern unsigned long kvm_nvhe_sym(__icache_flags);
 
+extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
+
 #endif /* __ARM64_KVM_HYP_H__ */

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 37a2d833851a..3e2489d23ff0 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -80,9 +80,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
 /* Vectors installed by hyp-init on reset HVC. */
 KVM_NVHE_ALIAS(__hyp_stub_vectors);
 
-/* VMID bits set by the KVM VMID allocator */
-KVM_NVHE_ALIAS(kvm_arm_vmid_bits);
-
 /* Kernel symbols needed for cpus_have_final/const_caps checks. */
 KVM_NVHE_ALIAS(arm64_const_caps_ready);
 KVM_NVHE_ALIAS(cpu_hwcap_keys);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c7b362db692f..07d2ac6a5aff 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1839,6 +1839,7 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
 	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
 	kvm_nvhe_sym(__icache_flags) = __icache_flags;
+	kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
 
 	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
 	if (ret)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 9cd2bf75ed88..b29142a09e36 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -15,6 +15,9 @@
 /* Used by icache_is_vpipt(). */
 unsigned long __icache_flags;
 
+/* Used by kvm_get_vttbr(). */
+unsigned int kvm_arm_vmid_bits;
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
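Patches 35 and 36 share a pattern: give EL2 its own definition of a
datum and seed it exactly once, on the trusted init path, after which
the host copy is irrelevant. A hedged sketch of the idea, with invented
names (nothing here is the kernel's API):

	/* EL2-private copy; the host has no mapping of this after init. */
	static unsigned int el2_vmid_bits;
	static int el2_initialised;

	/* Called once on the trusted init path, before EL1 is deprivileged. */
	void el2_seed_vmid_bits(unsigned int host_value)
	{
		if (!el2_initialised) {
			el2_vmid_bits = host_value;
			el2_initialised = 1;
		}
	}

	/* All later EL2 consumers read the private copy, never host memory. */
	unsigned int el2_get_vmid_bits(void)
	{
		return el2_vmid_bits;
	}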
From patchwork Thu May 19 13:41:12 2022
Subject: [PATCH 37/89] KVM: arm64: Explicitly map kvm_vgic_global_state at EL2
From: Will Deacon
Date: Thu, 19 May 2022 14:41:12 +0100
Message-Id: <20220519134204.5379-38-will@kernel.org>
From: Quentin Perret

The pkvm hypervisor may need to read the kvm_vgic_global_state variable
at EL2. Make sure to explicitly map it in its stage-1 page-table rather
than relying on mapping all of the host .rodata section.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 59a478dde533..a851de624074 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -136,6 +136,11 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
+	ret = pkvm_create_mappings(&kvm_vgic_global_state,
+				   &kvm_vgic_global_state + 1, prot);
+	if (ret)
+		return ret;
+
 	return 0;
 }
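The `&kvm_vgic_global_state, &kvm_vgic_global_state + 1` arguments are the
usual C idiom for describing a single object as a half-open address range.
As a minimal userspace sketch (not the kernel's implementation, and assuming
4KiB pages), a mapping helper would round such a range out to page
granularity before installing it:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct vgic_global { long vals[8]; };	/* stand-in for kvm_vgic_global_state */
static struct vgic_global state;

int main(void)
{
	/* The patch maps [&state, &state + 1): one object, not a whole section. */
	uintptr_t from = (uintptr_t)&state;
	uintptr_t to = (uintptr_t)(&state + 1);

	/* A mapping helper would round the range out to page granularity. */
	printf("object [%#lx, %#lx) -> pages [%#lx, %#lx)\n",
	       (unsigned long)from, (unsigned long)to,
	       ALIGN_DOWN((unsigned long)from, PAGE_SIZE),
	       ALIGN_UP((unsigned long)to, PAGE_SIZE));
	return 0;
}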
From patchwork Thu May 19 13:41:13 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 38/89] KVM: arm64: Don't map host sections in pkvm
Date: Thu, 19 May 2022 14:41:13 +0100
Message-Id: <20220519134204.5379-39-will@kernel.org>

From: Quentin Perret

We no longer need to map the host's .rodata and .bss sections in the
pkvm hypervisor, so let's remove those mappings. This will avoid
creating dependencies at EL2 on host-controlled data structures.

Signed-off-by: Quentin Perret
---
 arch/arm64/kernel/image-vars.h  |  6 ------
 arch/arm64/kvm/hyp/nvhe/setup.c | 14 +++-----------
 2 files changed, 3 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 3e2489d23ff0..2d4d6836ff47 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -115,12 +115,6 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
 KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
 #endif
 
-/* Kernel memory sections */
-KVM_NVHE_ALIAS(__start_rodata);
-KVM_NVHE_ALIAS(__end_rodata);
-KVM_NVHE_ALIAS(__bss_start);
-KVM_NVHE_ALIAS(__bss_stop);
-
 /* Hyp memory sections */
 KVM_NVHE_ALIAS(__hyp_idmap_text_start);
 KVM_NVHE_ALIAS(__hyp_idmap_text_end);

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index a851de624074..c55661976f64 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -119,23 +119,15 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	}
 
 	/*
-	 * Map the host's .bss and .rodata sections RO in the hypervisor, but
-	 * transfer the ownership from the host to the hypervisor itself to
-	 * make sure it can't be donated or shared with another entity.
+	 * Map the host sections RO in the hypervisor, but transfer the
+	 * ownership from the host to the hypervisor itself to make sure they
+	 * can't be donated or shared with another entity.
 	 *
 	 * The ownership transition requires matching changes in the host
 	 * stage-2. This will be done later (see finalize_host_mappings()) once
 	 * the hyp_vmemmap is addressable.
	 */
 	prot = pkvm_mkstate(PAGE_HYP_RO, PKVM_PAGE_SHARED_OWNED);
-	ret = pkvm_create_mappings(__start_rodata, __end_rodata, prot);
-	if (ret)
-		return ret;
-
-	ret = pkvm_create_mappings(__hyp_bss_end, __bss_stop, prot);
-	if (ret)
-		return ret;
-
 	ret = pkvm_create_mappings(&kvm_vgic_global_state,
 				   &kvm_vgic_global_state + 1, prot);
 	if (ret)
 		return ret;
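The pkvm_mkstate() call above folds a page-ownership state into the
protection value, so a single quantity travels through the map call. A
hedged sketch of the idea, using an invented bit layout (the real pKVM
encoding lives in the stage-1 PTE software bits):

#include <stdio.h>

/* Illustrative bit layout only; real pKVM packs state into PTE software bits. */
#define PAGE_HYP_RO		(1UL << 0)	/* hypothetical read-only prot */
#define PKVM_PAGE_SHARED_OWNED	(1UL << 55)	/* hypothetical state bit      */

static unsigned long pkvm_mkstate_sketch(unsigned long prot, unsigned long state)
{
	/* Fold the ownership state into spare bits of the prot value. */
	return prot | state;
}

int main(void)
{
	unsigned long prot = pkvm_mkstate_sketch(PAGE_HYP_RO, PKVM_PAGE_SHARED_OWNED);

	printf("prot=%#lx (read-only mapping annotated as shared+owned)\n", prot);
	return 0;
}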
From patchwork Thu May 19 13:41:14 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 39/89] KVM: arm64: Extend memory donation to allow host-to-guest transitions
Date: Thu, 19 May 2022 14:41:14 +0100
Message-Id: <20220519134204.5379-40-will@kernel.org>

In preparation for supporting protected guests, where guest memory
defaults to being inaccessible to the host, extend our memory protection
mechanisms to support donation of pages from the host to a specific
guest.

Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 62 +++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c                  |  2 +-
 3 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 364432276fe0..b01b5cdb38de 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -69,6 +69,7 @@ int __pkvm_host_reclaim_page(u64 pfn);
 int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
+int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2e92be8bb463..d0544259eb01 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -890,6 +890,14 @@ static int guest_ack_share(u64 addr, const struct pkvm_mem_transition *tx,
 				size, PKVM_NOPAGE);
 }
 
+static int guest_ack_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return __guest_check_page_state_range(tx->completer.guest.vcpu, addr,
+					      size, PKVM_NOPAGE);
+}
+
 static int guest_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 				enum kvm_pgtable_prot perms)
 {
@@ -903,6 +911,17 @@ static int guest_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 			      prot, &vcpu->arch.pkvm_memcache);
 }
 
+static int guest_complete_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	enum kvm_pgtable_prot prot = pkvm_mkstate(KVM_PGTABLE_PROT_RWX, PKVM_PAGE_OWNED);
+	struct kvm_vcpu *vcpu = tx->completer.guest.vcpu;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return kvm_pgtable_stage2_map(&vm->pgt, addr, size, tx->completer.guest.phys,
+				      prot, &vcpu->arch.pkvm_memcache);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1088,6 +1107,9 @@ static int check_donation(struct pkvm_mem_donation *donation)
 	case PKVM_ID_HYP:
 		ret = hyp_ack_donation(completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_ack_donation(completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1122,6 +1144,9 @@ static int __do_donate(struct pkvm_mem_donation *donation)
 	case PKVM_ID_HYP:
 		ret = hyp_complete_donation(completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_complete_donation(completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1362,6 +1387,43 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 guest_addr = hyp_pfn_to_phys(gfn);
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	struct pkvm_mem_donation donation = {
+		.tx = {
+			.nr_pages = 1,
+			.initiator = {
+				.id = PKVM_ID_HOST,
+				.addr = host_addr,
+				.host = {
+					.completer_addr = guest_addr,
+				},
+			},
+			.completer = {
+				.id = PKVM_ID_GUEST,
+				.guest = {
+					.vcpu = vcpu,
+					.phys = host_addr,
+				},
+			},
+		},
+	};
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = do_donate(&donation);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
+
 static int hyp_zero_page(phys_addr_t phys)
 {
 	void *addr;

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a6676fd14cf9..2069e6833831 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -47,7 +47,7 @@
 			 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
 #define KVM_INVALID_PTE_OWNER_MASK	GENMASK(9, 2)
-#define KVM_MAX_OWNER_ID		1
+#define KVM_MAX_OWNER_ID		FIELD_MAX(KVM_INVALID_PTE_OWNER_MASK)
 
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable *pgt;
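The pgtable.c change is easy to sanity-check: with the owner ID occupying
bits 9:2 of an invalid PTE, FIELD_MAX() yields 255 rather than the previous
hard-coded limit of 1, which is what makes per-guest page ownership
possible. A standalone re-implementation of the arithmetic, using userspace
stand-ins for the kernel macros:

#include <stdio.h>

/* Userspace re-implementations of the kernel helpers used by the patch. */
#define GENMASK(h, l)	(((~0UL) << (l)) & (~0UL >> (63 - (h))))
#define FIELD_MAX(mask)	((mask) >> __builtin_ctzl(mask))

int main(void)
{
	unsigned long owner_mask = GENMASK(9, 2);	/* KVM_INVALID_PTE_OWNER_MASK */

	/* The old limit was 1 (host or hyp); the patch raises it to the field max. */
	printf("owner mask %#lx -> max owner id %lu\n",
	       owner_mask, FIELD_MAX(owner_mask));	/* prints 0x3fc -> 255 */
	return 0;
}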
From patchwork Thu May 19 13:41:15 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 40/89] KVM: arm64: Split up nvhe/fixed_config.h
Date: Thu, 19 May 2022 14:41:15 +0100
Message-Id: <20220519134204.5379-41-will@kernel.org>

In preparation for using some of the pKVM fixed configuration register
definitions to filter the available VM CAPs in the host, split the
nvhe/fixed_config.h header so that the definitions can be shared with
the host, while keeping the hypervisor function prototypes in the nvhe/
namespace.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_pkvm.h              | 190 ++++++++++++++++
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h  | 205 ------------------
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h         |   6 +-
 arch/arm64/kvm/hyp/nvhe/pkvm.c                 |   1 -
 arch/arm64/kvm/hyp/nvhe/setup.c                |   1 -
 arch/arm64/kvm/hyp/nvhe/switch.c               |   2 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c             |   2 +-
 7 files changed, 197 insertions(+), 210 deletions(-)
 delete mode 100644 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 1dc7372950b1..b92440cfb5b4 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -2,12 +2,14 @@
 /*
  * Copyright (C) 2020 - Google LLC
  * Author: Quentin Perret
+ * Author: Fuad Tabba
  */
 
 #ifndef __ARM64_KVM_PKVM_H__
 #define __ARM64_KVM_PKVM_H__
 
 #include
 #include
+#include
 
 /* Maximum number of protected VMs that can be created. */
 #define KVM_MAX_PVMS 255
 
@@ -18,6 +20,194 @@ int kvm_init_pvm(struct kvm *kvm);
 int kvm_shadow_create(struct kvm *kvm);
 void kvm_shadow_destroy(struct kvm *kvm);
 
+/*
+ * Definitions for features to be allowed or restricted for guest virtual
+ * machines, depending on the mode KVM is running in and on the type of guest
+ * that is running.
+ *
+ * The ALLOW masks represent a bitmask of feature fields that are allowed
+ * without any restrictions as long as they are supported by the system.
+ *
+ * The RESTRICT_UNSIGNED masks, if present, represent unsigned fields for
+ * features that are restricted to support at most the specified feature.
+ *
+ * If a feature field is not present in either, then it is not supported.
+ *
+ * The approach taken for protected VMs is to allow features that are:
+ * - Needed by common Linux distributions (e.g., floating point)
+ * - Trivial to support, e.g., supporting the feature does not introduce or
+ *   require tracking of additional state in KVM
+ * - Cannot be trapped or prevent the guest from using anyway
+ */
+
+/*
+ * Allow for protected VMs:
+ * - Floating-point and Advanced SIMD
+ * - Data Independent Timing
+ */
+#define PVM_ID_AA64PFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR0_FP) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_DIT) \
+	)
+
+/*
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - AArch64 guests only (no support for AArch32 guests):
+ *   AArch32 adds complexity in trap handling, emulation, condition codes,
+ *   etc...
+ * - RAS (v1)
+ *   Supported by KVM
+ */
+#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Branch Target Identification
+ * - Speculative Store Bypassing
+ */
+#define PVM_ID_AA64PFR1_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR1_BT) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR1_SSBS) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Mixed-endian
+ * - Distinction between Secure and Non-secure Memory
+ * - Mixed-endian at EL0 only
+ * - Non-context synchronizing exception entry and exit
+ */
+#define PVM_ID_AA64MMFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_SNSMEM) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL0) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_EXS) \
+	)
+
+/*
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - 40-bit IPA
+ * - 16-bit ASID
+ */
+#define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Hardware translation table updates to Access flag and Dirty state
+ * - Number of VMID bits from CPU
+ * - Hierarchical Permission Disables
+ * - Privileged Access Never
+ * - SError interrupt exceptions from speculative reads
+ * - Enhanced Translation Synchronization
+ */
+#define PVM_ID_AA64MMFR1_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_VMIDBITS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_HPD) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_PAN) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_SPECSEI) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_ETS) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Common not Private translations
+ * - User Access Override
+ * - IESB bit in the SCTLR_ELx registers
+ * - Unaligned single-copy atomicity and atomic functions
+ * - ESR_ELx.EC value on an exception by read access to feature ID space
+ * - TTL field in address operations.
+ * - Break-before-make sequences when changing translation block size
+ * - E0PDx mechanism
+ */
+#define PVM_ID_AA64MMFR2_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_CNP) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_UAO) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_IESB) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_AT) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_IDS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_TTL) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_BBM) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_E0PD) \
+	)
+
+/*
+ * No support for Scalable Vectors for protected VMs:
+ *	Requires additional support from KVM, e.g., context-switching and
+ *	trapping at EL2
+ */
+#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
+
+/*
+ * No support for debug, including breakpoints, and watchpoints for protected
+ * VMs:
+ *	The Arm architecture mandates support for at least the Armv8 debug
+ *	architecture, which would include at least 2 hardware breakpoints and
+ *	watchpoints. Providing that support to protected guests adds
+ *	considerable state and complexity. Therefore, the reserved value of 0
+ *	is used for debug-related fields.
+ */
+#define PVM_ID_AA64DFR0_ALLOW (0ULL)
+#define PVM_ID_AA64DFR1_ALLOW (0ULL)
+
+/*
+ * No support for implementation defined features.
+ */
+#define PVM_ID_AA64AFR0_ALLOW (0ULL)
+#define PVM_ID_AA64AFR1_ALLOW (0ULL)
+
+/*
+ * No restrictions on instructions implemented in AArch64.
+ */
+#define PVM_ID_AA64ISAR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_AES) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA1) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA2) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_CRC32) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_RDM) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA3) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_SM3) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_SM4) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_DP) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_FHM) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_TS) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_TLB) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR0_RNDR) \
+	)
+
+#define PVM_ID_AA64ISAR1_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_DPB) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_JSCVT) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_FCMA) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_LRCPC) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_FRINTTS) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_SB) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_SPECRES) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_BF16) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_DGH) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_I8MM) \
+	)
+
+#define PVM_ID_AA64ISAR2_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) \
+	)
+
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
deleted file mode 100644
index 5ad626527d41..000000000000
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ /dev/null
@@ -1,205 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2021 Google LLC
- * Author: Fuad Tabba
- */
-
-#ifndef __ARM64_KVM_FIXED_CONFIG_H__
-#define __ARM64_KVM_FIXED_CONFIG_H__
-
-#include
-
-/*
- * This file contains definitions for features to be allowed or restricted for
- * guest virtual machines, depending on the mode KVM is running in and on the type of
guest that is running. - * - * The ALLOW masks represent a bitmask of feature fields that are allowed - * without any restrictions as long as they are supported by the system. - * - * The RESTRICT_UNSIGNED masks, if present, represent unsigned fields for - * features that are restricted to support at most the specified feature. - * - * If a feature field is not present in either, than it is not supported. - * - * The approach taken for protected VMs is to allow features that are: - * - Needed by common Linux distributions (e.g., floating point) - * - Trivial to support, e.g., supporting the feature does not introduce or - * require tracking of additional state in KVM - * - Cannot be trapped or prevent the guest from using anyway - */ - -/* - * Allow for protected VMs: - * - Floating-point and Advanced SIMD - * - Data Independent Timing - */ -#define PVM_ID_AA64PFR0_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64PFR0_FP) | \ - ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD) | \ - ARM64_FEATURE_MASK(ID_AA64PFR0_DIT) \ - ) - -/* - * Restrict to the following *unsigned* features for protected VMs: - * - AArch64 guests only (no support for AArch32 guests): - * AArch32 adds complexity in trap handling, emulation, condition codes, - * etc... - * - RAS (v1) - * Supported by KVM - */ -#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \ - ) - -/* - * Allow for protected VMs: - * - Branch Target Identification - * - Speculative Store Bypassing - */ -#define PVM_ID_AA64PFR1_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64PFR1_BT) | \ - ARM64_FEATURE_MASK(ID_AA64PFR1_SSBS) \ - ) - -/* - * Allow for protected VMs: - * - Mixed-endian - * - Distinction between Secure and Non-secure Memory - * - Mixed-endian at EL0 only - * - Non-context synchronizing exception entry and exit - */ -#define PVM_ID_AA64MMFR0_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR0_SNSMEM) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL0) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR0_EXS) \ - ) - -/* - * Restrict to the following *unsigned* features for protected VMs: - * - 40-bit IPA - * - 16-bit ASID - */ -#define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \ - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) \ - ) - -/* - * Allow for protected VMs: - * - Hardware translation table updates to Access flag and Dirty state - * - Number of VMID bits from CPU - * - Hierarchical Permission Disables - * - Privileged Access Never - * - SError interrupt exceptions from speculative reads - * - Enhanced Translation Synchronization - */ -#define PVM_ID_AA64MMFR1_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR1_VMIDBITS) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR1_HPD) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR1_PAN) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR1_SPECSEI) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR1_ETS) \ - ) - -/* - * Allow for protected VMs: - * - Common not Private translations - * - User Access Override - * - IESB bit in the SCTLR_ELx registers - * - Unaligned single-copy atomicity and atomic functions - * - ESR_ELx.EC value on 
an exception by read access to feature ID space - * - TTL field in address operations. - * - Break-before-make sequences when changing translation block size - * - E0PDx mechanism - */ -#define PVM_ID_AA64MMFR2_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64MMFR2_CNP) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_UAO) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_IESB) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_AT) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_IDS) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_TTL) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_BBM) | \ - ARM64_FEATURE_MASK(ID_AA64MMFR2_E0PD) \ - ) - -/* - * No support for Scalable Vectors for protected VMs: - * Requires additional support from KVM, e.g., context-switching and - * trapping at EL2 - */ -#define PVM_ID_AA64ZFR0_ALLOW (0ULL) - -/* - * No support for debug, including breakpoints, and watchpoints for protected - * VMs: - * The Arm architecture mandates support for at least the Armv8 debug - * architecture, which would include at least 2 hardware breakpoints and - * watchpoints. Providing that support to protected guests adds - * considerable state and complexity. Therefore, the reserved value of 0 is - * used for debug-related fields. - */ -#define PVM_ID_AA64DFR0_ALLOW (0ULL) -#define PVM_ID_AA64DFR1_ALLOW (0ULL) - -/* - * No support for implementation defined features. - */ -#define PVM_ID_AA64AFR0_ALLOW (0ULL) -#define PVM_ID_AA64AFR1_ALLOW (0ULL) - -/* - * No restrictions on instructions implemented in AArch64. - */ -#define PVM_ID_AA64ISAR0_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64ISAR0_AES) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA1) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA2) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_CRC32) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_RDM) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_SHA3) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_SM3) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_SM4) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_DP) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_FHM) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_TS) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_TLB) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR0_RNDR) \ - ) - -#define PVM_ID_AA64ISAR1_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64ISAR1_DPB) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_JSCVT) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_FCMA) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_LRCPC) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_FRINTTS) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_SB) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_SPECRES) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_BF16) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_DGH) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR1_I8MM) \ - ) - -#define PVM_ID_AA64ISAR2_ALLOW (\ - ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3) | \ - ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) \ - ) - -u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id); -bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code); -bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code); -int kvm_check_pvm_sysreg_table(void); - -#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */ diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index e600dc4965c4..f76af6e0177a 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -67,5 +67,9 @@ struct kvm_shadow_vcpu_state * pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx); void 
pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state);
 
-#endif /* __ARM64_KVM_NVHE_PKVM_H__ */
+u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
+bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
+bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+int kvm_check_pvm_sysreg_table(void);
+
+#endif /* __ARM64_KVM_NVHE_PKVM_H__ */

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index b29142a09e36..960427d6c168 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -6,7 +6,6 @@
 #include
 #include
-#include
 #include
 #include
 #include

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index c55661976f64..37002b9d7434 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -11,7 +11,6 @@
 #include
 #include
-#include
 #include
 #include
 #include

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6410d21d8695..9d2b971e8613 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -26,8 +26,8 @@
 #include
 #include
-#include
 #include
+#include
 
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);

diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 188fed1c174b..ddea42d7baf9 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -11,7 +11,7 @@
 #include
 
-#include
+#include
 
 #include "../../sys_regs.h"
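Once the masks live in asm/kvm_pkvm.h, host code can test whether a given
ID-register field is permitted for a protected VM before advertising a
capability. A minimal sketch with invented stand-in masks (the real ones are
the PVM_ID_*_ALLOW definitions above):

#include <stdint.h>
#include <stdio.h>

#define GENMASK64(h, l)	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* Stand-in field masks; the real ones come from the PVM_ID_*_ALLOW macros. */
#define FEAT_FP		GENMASK64(19, 16)	/* ID_AA64PFR0_EL1.FP  */
#define FEAT_SVE	GENMASK64(35, 32)	/* ID_AA64PFR0_EL1.SVE */
#define PFR0_ALLOW	(FEAT_FP)		/* FP allowed, SVE not */

static int pvm_feature_allowed(uint64_t allow_mask, uint64_t field_mask)
{
	/* A field is advertised to a protected VM only if fully allowed. */
	return (allow_mask & field_mask) == field_mask;
}

int main(void)
{
	printf("FP allowed:  %d\n", pvm_feature_allowed(PFR0_ALLOW, FEAT_FP));
	printf("SVE allowed: %d\n", pvm_feature_allowed(PFR0_ALLOW, FEAT_SVE));
	return 0;
}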
From patchwork Thu May 19 13:41:16 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 41/89] KVM: arm64: Make vcpu_{read,write}_sys_reg available to HYP code
Date: Thu, 19 May 2022 14:41:16 +0100
Message-Id: <20220519134204.5379-42-will@kernel.org>

From: Marc Zyngier

Allow vcpu_{read,write}_sys_reg() to be called from EL2 so that nVHE
hyp code can reuse existing helper functions for operations such as
resetting the vCPU state.
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 30 +++++++++++++++++++++++++++---
 arch/arm64/kvm/sys_regs.c         | 20 --------------------
 2 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9252841850e4..c55aadfdfd63 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -553,9 +553,6 @@ struct kvm_vcpu_arch {
 
 #define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
 
-u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
-void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
-
 static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
 {
 	/*
@@ -647,6 +644,33 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
 	return true;
 }
 
+static inline u64 vcpu_arch_read_sys_reg(const struct kvm_vcpu_arch *vcpu_arch,
+					 int reg)
+{
+	u64 val = 0x8badf00d8badf00d;
+
+	/* sysregs_loaded_on_cpu is only used in VHE */
+	if (!is_nvhe_hyp_code() && vcpu_arch->sysregs_loaded_on_cpu &&
+	    __vcpu_read_sys_reg_from_cpu(reg, &val))
+		return val;
+
+	return ctxt_sys_reg(&vcpu_arch->ctxt, reg);
+}
+
+static inline void vcpu_arch_write_sys_reg(struct kvm_vcpu_arch *vcpu_arch,
+					   u64 val, int reg)
+{
+	/* sysregs_loaded_on_cpu is only used in VHE */
+	if (!is_nvhe_hyp_code() && vcpu_arch->sysregs_loaded_on_cpu &&
+	    __vcpu_write_sys_reg_to_cpu(val, reg))
+		return;
+
+	ctxt_sys_reg(&vcpu_arch->ctxt, reg) = val;
+}
+
+#define vcpu_read_sys_reg(vcpu, reg)		vcpu_arch_read_sys_reg(&((vcpu)->arch), reg)
+#define vcpu_write_sys_reg(vcpu, val, reg)	vcpu_arch_write_sys_reg(&((vcpu)->arch), val, reg)
+
 struct kvm_vm_stat {
 	struct kvm_vm_stat_generic generic;
 };

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7b45c040cc27..7886989443b9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -68,26 +68,6 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
 	return false;
 }
 
-u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
-{
-	u64 val = 0x8badf00d8badf00d;
-
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
-	    __vcpu_read_sys_reg_from_cpu(reg, &val))
-		return val;
-
-	return __vcpu_sys_reg(vcpu, reg);
-}
-
-void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
-{
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
-	    __vcpu_write_sys_reg_to_cpu(val, reg))
-		return;
-
-	__vcpu_sys_reg(vcpu, reg) = val;
-}
-
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
 static u32 cache_levels;
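The refactored accessors follow a read-through pattern: try the live CPU
register when the vCPU's sysregs are loaded (which only happens on VHE,
hence the is_nvhe_hyp_code() guard), otherwise fall back to the in-memory
context. A toy model of that fallback, with simplified types that are not
the kernel's:

#include <stdint.h>
#include <stdio.h>

/* Simplified model of the read path; these are not the kernel's types. */
struct vcpu_ctxt { uint64_t sys_regs[4]; };
struct vcpu_arch {
	int sysregs_loaded_on_cpu;	/* VHE: registers live on the CPU */
	struct vcpu_ctxt ctxt;		/* otherwise: the in-memory copy  */
};

static int read_from_cpu(int reg, uint64_t *val)
{
	(void)reg;
	*val = 0x1234;			/* stands in for an mrs access */
	return 1;
}

static uint64_t read_sys_reg_sketch(const struct vcpu_arch *a, int reg)
{
	uint64_t val = 0x8badf00d8badf00dULL;	/* poison value, as in the patch */

	if (a->sysregs_loaded_on_cpu && read_from_cpu(reg, &val))
		return val;			/* fast path: live CPU register */
	return a->ctxt.sys_regs[reg];		/* fallback: in-memory context  */
}

int main(void)
{
	struct vcpu_arch a = {
		.sysregs_loaded_on_cpu = 0,
		.ctxt.sys_regs = { 7, 8, 9, 10 },
	};

	printf("reg1 = %llu\n", (unsigned long long)read_sys_reg_sketch(&a, 1));
	return 0;
}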
From patchwork Thu May 19 13:41:17 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 42/89] KVM: arm64: Simplify vgic-v3 hypercalls
Date: Thu, 19 May 2022 14:41:17 +0100
Message-Id: <20220519134204.5379-43-will@kernel.org>

From: Marc Zyngier

Consolidate the GICv3 VMCR accessor hypercalls into the APR
save/restore hypercalls so that all of the EL2 GICv3 state is covered
by a single pair of hypercalls.
Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_asm.h | 8 ++------ arch/arm64/include/asm/kvm_hyp.h | 4 ++-- arch/arm64/kvm/arm.c | 7 +++---- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 24 ++++++------------------ arch/arm64/kvm/hyp/vgic-v3-sr.c | 27 +++++++++++++++++++++++---- arch/arm64/kvm/vgic/vgic-v2.c | 9 +-------- arch/arm64/kvm/vgic/vgic-v3.c | 26 ++++---------------------- arch/arm64/kvm/vgic/vgic.c | 17 +++-------------- arch/arm64/kvm/vgic/vgic.h | 6 ++---- include/kvm/arm_vgic.h | 3 +-- 10 files changed, 47 insertions(+), 84 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 35b9d590bb74..22b5ee9f2b5c 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -73,11 +73,9 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid, __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context, __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff, - __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr, - __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr, - __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs, - __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps, + __KVM_HOST_SMCCC_FUNC___vgic_v3_save_vmcr_aprs, + __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs, __KVM_HOST_SMCCC_FUNC___pkvm_init_shadow, __KVM_HOST_SMCCC_FUNC___pkvm_teardown_shadow, }; @@ -218,8 +216,6 @@ extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); extern void __kvm_adjust_pc(struct kvm_vcpu *vcpu); extern u64 __vgic_v3_get_gic_config(void); -extern u64 __vgic_v3_read_vmcr(void); -extern void __vgic_v3_write_vmcr(u32 vmcr); extern void __vgic_v3_init_lrs(void); extern u64 __kvm_get_mdcr_el2(void); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 6797eafe7890..4adf7c2a77bd 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -61,8 +61,8 @@ void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if); void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if); void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if); void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if); -void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if); -void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if); +void __vgic_v3_save_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if); +void __vgic_v3_restore_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if); int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); #ifdef __KVM_NVHE_HYPERVISOR__ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 07d2ac6a5aff..c9b8e2ca5cb5 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -440,7 +440,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) if (has_vhe()) kvm_vcpu_put_sysregs_vhe(vcpu); kvm_timer_vcpu_put(vcpu); - kvm_vgic_put(vcpu); + kvm_vgic_put(vcpu, false); kvm_vcpu_pmu_restore_host(vcpu); kvm_arm_vmid_clear_active(); @@ -656,15 +656,14 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu) * doorbells to be signalled, should an interrupt become pending. 
*/ preempt_disable(); - kvm_vgic_vmcr_sync(vcpu); - vgic_v4_put(vcpu, true); + kvm_vgic_put(vcpu, true); preempt_enable(); kvm_vcpu_halt(vcpu); kvm_clear_request(KVM_REQ_UNHALT, vcpu); preempt_disable(); - vgic_v4_load(vcpu); + kvm_vgic_load(vcpu); preempt_enable(); } diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 245d267064b3..5b46742d9f9b 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -205,16 +205,6 @@ static void handle___vgic_v3_get_gic_config(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = __vgic_v3_get_gic_config(); } -static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt) -{ - cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr(); -} - -static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt) -{ - __vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1)); -} - static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt) { __vgic_v3_init_lrs(); @@ -225,18 +215,18 @@ static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2(); } -static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt) +static void handle___vgic_v3_save_vmcr_aprs(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1); - __vgic_v3_save_aprs(kern_hyp_va(cpu_if)); + __vgic_v3_save_vmcr_aprs(kern_hyp_va(cpu_if)); } -static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt) +static void handle___vgic_v3_restore_vmcr_aprs(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1); - __vgic_v3_restore_aprs(kern_hyp_va(cpu_if)); + __vgic_v3_restore_vmcr_aprs(kern_hyp_va(cpu_if)); } static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt) @@ -349,11 +339,9 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__kvm_tlb_flush_vmid), HANDLE_FUNC(__kvm_flush_cpu_context), HANDLE_FUNC(__kvm_timer_set_cntvoff), - HANDLE_FUNC(__vgic_v3_read_vmcr), - HANDLE_FUNC(__vgic_v3_write_vmcr), - HANDLE_FUNC(__vgic_v3_save_aprs), - HANDLE_FUNC(__vgic_v3_restore_aprs), HANDLE_FUNC(__pkvm_vcpu_init_traps), + HANDLE_FUNC(__vgic_v3_save_vmcr_aprs), + HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs), HANDLE_FUNC(__pkvm_init_shadow), HANDLE_FUNC(__pkvm_teardown_shadow), }; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 4fb419f7b8b6..d5a51ea5459c 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -330,7 +330,7 @@ void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if) write_gicreg(0, ICH_HCR_EL2); } -void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) +static void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) { u64 val; u32 nr_pre_bits; @@ -363,7 +363,7 @@ void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) } } -void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) +static void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) { u64 val; u32 nr_pre_bits; @@ -455,16 +455,35 @@ u64 __vgic_v3_get_gic_config(void) return val; } -u64 __vgic_v3_read_vmcr(void) +static u64 __vgic_v3_read_vmcr(void) { return read_gicreg(ICH_VMCR_EL2); } -void __vgic_v3_write_vmcr(u32 vmcr) +static void __vgic_v3_write_vmcr(u32 vmcr) { write_gicreg(vmcr, ICH_VMCR_EL2); } +void __vgic_v3_save_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if) +{ + __vgic_v3_save_aprs(cpu_if); + if (cpu_if->vgic_sre) + cpu_if->vgic_vmcr = __vgic_v3_read_vmcr(); +} + +void __vgic_v3_restore_vmcr_aprs(struct 
vgic_v3_cpu_if *cpu_if) +{ + /* + * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen + * is dependent on ICC_SRE_EL1.SRE, and we have to perform the + * VMCR_EL2 save/restore in the world switch. + */ + if (cpu_if->vgic_sre) + __vgic_v3_write_vmcr(cpu_if->vgic_vmcr); + __vgic_v3_restore_aprs(cpu_if); +} + static int __vgic_v3_bpr_min(void) { /* See Pseudocode for VPriorityGroup */ diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c index 645648349c99..4e8bb90bd96f 100644 --- a/arch/arm64/kvm/vgic/vgic-v2.c +++ b/arch/arm64/kvm/vgic/vgic-v2.c @@ -470,17 +470,10 @@ void vgic_v2_load(struct kvm_vcpu *vcpu) kvm_vgic_global_state.vctrl_base + GICH_APR); } -void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu) +void vgic_v2_put(struct kvm_vcpu *vcpu, bool blocking) { struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR); -} - -void vgic_v2_put(struct kvm_vcpu *vcpu) -{ - struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; - - vgic_v2_vmcr_sync(vcpu); cpu_if->vgic_apr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_APR); } diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index b549af8b1dc2..18b7fda8d59c 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -720,15 +720,7 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - /* - * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen - * is dependent on ICC_SRE_EL1.SRE, and we have to perform the - * VMCR_EL2 save/restore in the world switch. - */ - if (likely(cpu_if->vgic_sre)) - kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr); - - kvm_call_hyp(__vgic_v3_restore_aprs, cpu_if); + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); if (has_vhe()) __vgic_v3_activate_traps(cpu_if); @@ -736,23 +728,13 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) WARN_ON(vgic_v4_load(vcpu)); } -void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu) +void vgic_v3_put(struct kvm_vcpu *vcpu, bool blocking) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - if (likely(cpu_if->vgic_sre)) - cpu_if->vgic_vmcr = kvm_call_hyp_ret(__vgic_v3_read_vmcr); -} - -void vgic_v3_put(struct kvm_vcpu *vcpu) -{ - struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - - WARN_ON(vgic_v4_put(vcpu, false)); - - vgic_v3_vmcr_sync(vcpu); + WARN_ON(vgic_v4_put(vcpu, blocking)); - kvm_call_hyp(__vgic_v3_save_aprs, cpu_if); + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); if (has_vhe()) __vgic_v3_deactivate_traps(cpu_if); diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c index d97e6080b421..6189ad969675 100644 --- a/arch/arm64/kvm/vgic/vgic.c +++ b/arch/arm64/kvm/vgic/vgic.c @@ -931,26 +931,15 @@ void kvm_vgic_load(struct kvm_vcpu *vcpu) vgic_v3_load(vcpu); } -void kvm_vgic_put(struct kvm_vcpu *vcpu) +void kvm_vgic_put(struct kvm_vcpu *vcpu, bool blocking) { if (unlikely(!vgic_initialized(vcpu->kvm))) return; if (kvm_vgic_global_state.type == VGIC_V2) - vgic_v2_put(vcpu); + vgic_v2_put(vcpu, blocking); else - vgic_v3_put(vcpu); -} - -void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu) -{ - if (unlikely(!irqchip_in_kernel(vcpu->kvm))) - return; - - if (kvm_vgic_global_state.type == VGIC_V2) - vgic_v2_vmcr_sync(vcpu); - else - vgic_v3_vmcr_sync(vcpu); + vgic_v3_put(vcpu, blocking); } int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu) diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h index 
3fd6c86a7ef3..947985b3b03e 100644 --- a/arch/arm64/kvm/vgic/vgic.h +++ b/arch/arm64/kvm/vgic/vgic.h @@ -196,8 +196,7 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address, void vgic_v2_init_lrs(void); void vgic_v2_load(struct kvm_vcpu *vcpu); -void vgic_v2_put(struct kvm_vcpu *vcpu); -void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu); +void vgic_v2_put(struct kvm_vcpu *vcpu, bool blocking); void vgic_v2_save_state(struct kvm_vcpu *vcpu); void vgic_v2_restore_state(struct kvm_vcpu *vcpu); @@ -227,8 +226,7 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu); bool vgic_v3_check_base(struct kvm *kvm); void vgic_v3_load(struct kvm_vcpu *vcpu); -void vgic_v3_put(struct kvm_vcpu *vcpu); -void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu); +void vgic_v3_put(struct kvm_vcpu *vcpu, bool blocking); bool vgic_has_its(struct kvm *kvm); int kvm_vgic_register_its_device(void); diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index bb30a6803d9f..65e09c3924be 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -380,8 +380,7 @@ bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid); int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu); void kvm_vgic_load(struct kvm_vcpu *vcpu); -void kvm_vgic_put(struct kvm_vcpu *vcpu); -void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu); +void kvm_vgic_put(struct kvm_vcpu *vcpu, bool blocking); #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel)) #define vgic_initialized(k) ((k)->arch.vgic.initialized)

From patchwork Thu May 19 13:41:18 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 43/89] KVM: arm64: Add the {flush, sync}_vgic_state() primitives
Date: Thu, 19 May 2022 14:41:18 +0100
Message-Id: <20220519134204.5379-44-will@kernel.org>

From: Marc Zyngier

Rather than blindly copying the vGIC state to/from the host at EL2, introduce a couple of helpers to copy only what is needed and to sanitise untrusted data passed by the host kernel.
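To make the sanitisation pattern concrete before the diff, here is a minimal illustrative sketch (not part of the patch) of how the flush path bounds the host-supplied list-register count; read_gicreg(), READ_ONCE() and the vgic_v3_cpu_if fields are exactly those used in the hunk below:

static unsigned int sanitised_used_lrs(struct vgic_v3_cpu_if *host_cpu_if)
{
	/* ICH_VTR_EL2[3:0] encodes the number of implemented LRs minus one */
	unsigned int max_lrs = (read_gicreg(ICH_VTR_EL2) & 0xf) + 1;
	/* Snapshot the untrusted host value exactly once */
	unsigned int used_lrs = READ_ONCE(host_cpu_if->used_lrs);

	/* Clamp so the copy loop can never index beyond the hardware LRs */
	return min(used_lrs, max_lrs);
}

With the count clamped, a malicious used_lrs can at worst cause stale list registers to be copied, never an out-of-bounds access.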
Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 50 +++++++++++++++++++++++++----- 1 file changed, 43 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 5b46742d9f9b..58515e5d24ec 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -18,10 +18,51 @@ #include #include +#include + DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt); +static void flush_vgic_state(struct kvm_vcpu *host_vcpu, + struct kvm_vcpu *shadow_vcpu) +{ + struct vgic_v3_cpu_if *host_cpu_if, *shadow_cpu_if; + unsigned int used_lrs, max_lrs, i; + + host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3; + shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3; + + max_lrs = (read_gicreg(ICH_VTR_EL2) & 0xf) + 1; + used_lrs = READ_ONCE(host_cpu_if->used_lrs); + used_lrs = min(used_lrs, max_lrs); + + shadow_cpu_if->vgic_hcr = READ_ONCE(host_cpu_if->vgic_hcr); + /* Should be a one-off */ + shadow_cpu_if->vgic_sre = (ICC_SRE_EL1_DIB | + ICC_SRE_EL1_DFB | + ICC_SRE_EL1_SRE); + shadow_cpu_if->used_lrs = used_lrs; + + for (i = 0; i < used_lrs; i++) + shadow_cpu_if->vgic_lr[i] = READ_ONCE(host_cpu_if->vgic_lr[i]); +} + +static void sync_vgic_state(struct kvm_vcpu *host_vcpu, + struct kvm_vcpu *shadow_vcpu) +{ + struct vgic_v3_cpu_if *host_cpu_if, *shadow_cpu_if; + unsigned int i; + + host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3; + shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3; + + WRITE_ONCE(host_cpu_if->vgic_hcr, shadow_cpu_if->vgic_hcr); + + for (i = 0; i < shadow_cpu_if->used_lrs; i++) + WRITE_ONCE(host_cpu_if->vgic_lr[i], shadow_cpu_if->vgic_lr[i]); +} + static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; @@ -43,16 +84,13 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2; - shadow_vcpu->arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3; + flush_vgic_state(host_vcpu, shadow_vcpu); } static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu; - struct vgic_v3_cpu_if *shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3; - struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3; - unsigned int i; host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt; @@ -63,9 +101,7 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) host_vcpu->arch.flags = shadow_vcpu->arch.flags; - host_cpu_if->vgic_hcr = shadow_cpu_if->vgic_hcr; - for (i = 0; i < shadow_cpu_if->used_lrs; ++i) - host_cpu_if->vgic_lr[i] = shadow_cpu_if->vgic_lr[i]; + sync_vgic_state(host_vcpu, shadow_vcpu); } static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)

From patchwork Thu May 19 13:41:19 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 44/89] KVM: arm64: Introduce predicates to check for protected state
Date: Thu, 19 May 2022 14:41:19 +0100
Message-Id: <20220519134204.5379-45-will@kernel.org>

From: Marc Zyngier

In order to determine whether a VM or (shadow) vCPU is protected, introduce helper functions to query this state.
For now, these will always return 'false' as the underlying field is never configured.

Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 6 ++---- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 13 +++++++++++++ 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c55aadfdfd63..066eb7234bdd 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -164,6 +164,7 @@ struct kvm_pinned_page { }; struct kvm_protected_vm { + bool enabled; unsigned int shadow_handle; struct mutex shadow_lock; struct kvm_hyp_memcache teardown_mc; @@ -895,10 +896,7 @@ int kvm_set_ipa_limit(void); #define __KVM_HAVE_ARCH_VM_ALLOC struct kvm *kvm_arch_alloc_vm(void); -static inline bool kvm_vm_is_protected(struct kvm *kvm) -{ - return false; -} +#define kvm_vm_is_protected(kvm) ((kvm)->arch.pkvm.enabled) void kvm_init_protected_traps(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index f76af6e0177a..3997eb3dff55 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -58,6 +58,19 @@ static inline struct kvm_shadow_vm *get_shadow_vm(struct kvm_vcpu *shadow_vcpu) return get_shadow_state(shadow_vcpu)->shadow_vm; } +static inline bool shadow_state_is_protected(struct kvm_shadow_vcpu_state *shadow_state) +{ + return shadow_state->shadow_vm->kvm.arch.pkvm.enabled; +} + +static inline bool vcpu_is_protected(struct kvm_vcpu *vcpu) +{ + if (!is_protected_kvm_enabled()) + return false; + + return shadow_state_is_protected(get_shadow_state(vcpu)); +} + void hyp_shadow_table_init(void *tbl); int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, size_t shadow_size, unsigned long pgd_hva);

From patchwork Thu May 19 13:41:20 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 45/89] KVM: arm64: Add the {flush, sync}_timer_state() primitives
Date: Thu, 19 May 2022 14:41:20 +0100
Message-Id: <20220519134204.5379-46-will@kernel.org>

From: Marc Zyngier

In preparation for save/restore of the timer state at EL2 for protected VMs, introduce a couple of sync/flush primitives for the architected timer, in much the same way as we have for the GIC.

Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 34 ++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 58515e5d24ec..32e7e1cad00f 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -63,6 +63,38 @@ static void sync_vgic_state(struct kvm_vcpu *host_vcpu, WRITE_ONCE(host_cpu_if->vgic_lr[i], shadow_cpu_if->vgic_lr[i]); } +static void flush_timer_state(struct kvm_shadow_vcpu_state *shadow_state) +{ + struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + + if (!shadow_state_is_protected(shadow_state)) + return; + + /* + * A shadow vcpu has no offset, and sees vtime == ptime. The + * ptimer is fully emulated by EL1 and cannot be trusted.
+ */ + write_sysreg(0, cntvoff_el2); + isb(); + write_sysreg_el0(__vcpu_sys_reg(shadow_vcpu, CNTV_CVAL_EL0), SYS_CNTV_CVAL); + write_sysreg_el0(__vcpu_sys_reg(shadow_vcpu, CNTV_CTL_EL0), SYS_CNTV_CTL); +} + +static void sync_timer_state(struct kvm_shadow_vcpu_state *shadow_state) +{ + struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + + if (!shadow_state_is_protected(shadow_state)) + return; + + /* + * Preserve the vtimer state so that it is always correct, + * even if the host tries to make a mess. + */ + __vcpu_sys_reg(shadow_vcpu, CNTV_CVAL_EL0) = read_sysreg_el0(SYS_CNTV_CVAL); + __vcpu_sys_reg(shadow_vcpu, CNTV_CTL_EL0) = read_sysreg_el0(SYS_CNTV_CTL); +} + static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; @@ -85,6 +117,7 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2; flush_vgic_state(host_vcpu, shadow_vcpu); + flush_timer_state(shadow_state); } static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) @@ -102,6 +135,7 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) host_vcpu->arch.flags = shadow_vcpu->arch.flags; sync_vgic_state(host_vcpu, shadow_vcpu); + sync_timer_state(shadow_state); } static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)

From patchwork Thu May 19 13:41:21 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 46/89] KVM: arm64: Introduce the pkvm_vcpu_{load, put} hypercalls
Date: Thu, 19 May 2022 14:41:21 +0100
Message-Id: <20220519134204.5379-47-will@kernel.org>

From: Marc Zyngier

Rather than looking up the shadow vCPU on every run hypercall at EL2, introduce a per-CPU 'loaded_shadow_state' which is updated by a pair of load/put hypercalls called directly from kvm_arch_vcpu_{load,put}() when pKVM is enabled.
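As a rough sketch of the intended calling convention (illustration only; the real hunks follow, and error handling is elided), the host brackets each period during which a vcpu is resident on a physical CPU with the two new hypercalls:

/* Host-side sketch of the pKVM load/put pairing */
static void pkvm_example_switch_in(struct kvm_vcpu *vcpu)
{
	/* Attach the shadow vcpu to this physical CPU at EL2 */
	kvm_call_hyp_nvhe(__pkvm_vcpu_load,
			  vcpu->kvm->arch.pkvm.shadow_handle,
			  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
}

static void pkvm_example_switch_out(struct kvm_vcpu *vcpu)
{
	/* Detach it so that another physical CPU may load it */
	kvm_call_hyp_nvhe(__pkvm_vcpu_put);
}

Run hypercalls issued in between can then resolve the loaded shadow state from the per-CPU variable instead of doing a handle lookup on every vcpu entry.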
Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_asm.h | 2 + arch/arm64/kvm/arm.c | 14 ++++++ arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 7 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 65 ++++++++++++++++++-------- arch/arm64/kvm/hyp/nvhe/pkvm.c | 28 +++++++++++ arch/arm64/kvm/vgic/vgic-v3.c | 6 ++- 6 files changed, 100 insertions(+), 22 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 22b5ee9f2b5c..07ee95d0f97d 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -78,6 +78,8 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs, __KVM_HOST_SMCCC_FUNC___pkvm_init_shadow, __KVM_HOST_SMCCC_FUNC___pkvm_teardown_shadow, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index c9b8e2ca5cb5..514519563976 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -429,12 +429,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vcpu_ptrauth_disable(vcpu); kvm_arch_vcpu_load_debug_state_flags(vcpu); + if (is_protected_kvm_enabled()) { + kvm_call_hyp_nvhe(__pkvm_vcpu_load, + vcpu->kvm->arch.pkvm.shadow_handle, + vcpu->vcpu_idx, vcpu->arch.hcr_el2); + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + } + if (!cpumask_test_cpu(smp_processor_id(), vcpu->kvm->arch.supported_cpus)) vcpu_set_on_unsupported_cpu(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + if (is_protected_kvm_enabled()) { + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + kvm_call_hyp_nvhe(__pkvm_vcpu_put); + } + kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 3997eb3dff55..343d87877aa2 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -24,6 +24,12 @@ struct kvm_shadow_vcpu_state { /* A pointer to the shadow vm. */ struct kvm_shadow_vm *shadow_vm; + + /* + * Points to the per-cpu pointer of the cpu where it's loaded, or NULL + * if not loaded. 
+ */ + struct kvm_shadow_vcpu_state **loaded_shadow_state; }; /* @@ -79,6 +85,7 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle); struct kvm_shadow_vcpu_state * pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx); void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state); +struct kvm_shadow_vcpu_state *pkvm_loaded_shadow_vcpu_state(void); u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id); bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 32e7e1cad00f..9e3a2aa6f737 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -138,40 +138,63 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) sync_timer_state(shadow_state); } +static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(unsigned int, shadow_handle, host_ctxt, 1); + DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2); + DECLARE_REG(u64, hcr_el2, host_ctxt, 3); + struct kvm_shadow_vcpu_state *shadow_state; + struct kvm_vcpu *shadow_vcpu; + + if (!is_protected_kvm_enabled()) + return; + + shadow_state = pkvm_load_shadow_vcpu_state(shadow_handle, vcpu_idx); + if (!shadow_state) + return; + + shadow_vcpu = &shadow_state->shadow_vcpu; + + if (shadow_state_is_protected(shadow_state)) { + /* Propagate WFx trapping flags, trap ptrauth */ + shadow_vcpu->arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI | + HCR_API | HCR_APK); + shadow_vcpu->arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI); + } +} + +static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_shadow_vcpu_state *shadow_state; + + if (!is_protected_kvm_enabled()) + return; + + shadow_state = pkvm_loaded_shadow_vcpu_state(); + + if (shadow_state) { + pkvm_put_shadow_vcpu_state(shadow_state); + } +} + static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1); int ret; - host_vcpu = kern_hyp_va(host_vcpu); - if (unlikely(is_protected_kvm_enabled())) { - struct kvm_shadow_vcpu_state *shadow_state; - struct kvm_vcpu *shadow_vcpu; - struct kvm *host_kvm; - unsigned int handle; - - host_kvm = kern_hyp_va(host_vcpu->kvm); - handle = host_kvm->arch.pkvm.shadow_handle; - shadow_state = pkvm_load_shadow_vcpu_state(handle, - host_vcpu->vcpu_idx); - if (!shadow_state) { - ret = -EINVAL; - goto out; - } - - shadow_vcpu = &shadow_state->shadow_vcpu; + struct kvm_shadow_vcpu_state *shadow_state = pkvm_loaded_shadow_vcpu_state(); + struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + flush_shadow_state(shadow_state); ret = __kvm_vcpu_run(shadow_vcpu); sync_shadow_state(shadow_state); - pkvm_put_shadow_vcpu_state(shadow_state); } else { - ret = __kvm_vcpu_run(host_vcpu); + ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu)); } -out: cpu_reg(host_ctxt, 1) = ret; } @@ -414,6 +437,8 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs), HANDLE_FUNC(__pkvm_init_shadow), HANDLE_FUNC(__pkvm_teardown_shadow), + HANDLE_FUNC(__pkvm_vcpu_load), + HANDLE_FUNC(__pkvm_vcpu_put), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 960427d6c168..f18f622336b8 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -17,6 +17,12 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). 
*/ unsigned int kvm_arm_vmid_bits; +/* + * The shadow state for the currently loaded vcpu. Used only when protected KVM + * is enabled for both protected and non-protected VMs. + */ +static DEFINE_PER_CPU(struct kvm_shadow_vcpu_state *, loaded_shadow_state); + /* * Set trap register values based on features in ID_AA64PFR0. */ @@ -252,15 +258,30 @@ pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx) struct kvm_shadow_vcpu_state *shadow_state = NULL; struct kvm_shadow_vm *vm; + /* Cannot load a new vcpu without putting the old one first. */ + if (__this_cpu_read(loaded_shadow_state)) + return NULL; + hyp_spin_lock(&shadow_lock); vm = find_shadow_by_handle(shadow_handle); if (!vm || vm->kvm.created_vcpus <= vcpu_idx) goto unlock; shadow_state = &vm->shadow_vcpu_states[vcpu_idx]; + + /* Ensure vcpu isn't loaded on more than one cpu simultaneously. */ + if (unlikely(shadow_state->loaded_shadow_state)) { + shadow_state = NULL; + goto unlock; + } + shadow_state->loaded_shadow_state = this_cpu_ptr(&loaded_shadow_state); + hyp_page_ref_inc(hyp_virt_to_page(vm)); unlock: hyp_spin_unlock(&shadow_lock); + + __this_cpu_write(loaded_shadow_state, shadow_state); + return shadow_state; } @@ -269,10 +290,17 @@ void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_shadow_vm *vm = shadow_state->shadow_vm; hyp_spin_lock(&shadow_lock); + shadow_state->loaded_shadow_state = NULL; + __this_cpu_write(loaded_shadow_state, NULL); hyp_page_ref_dec(hyp_virt_to_page(vm)); hyp_spin_unlock(&shadow_lock); } +struct kvm_shadow_vcpu_state *pkvm_loaded_shadow_vcpu_state(void) +{ + return __this_cpu_read(loaded_shadow_state); +} + static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, unsigned int nr_vcpus) { diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index 18b7fda8d59c..07e62575d0f3 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -720,7 +720,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); if (has_vhe()) __vgic_v3_activate_traps(cpu_if); @@ -734,7 +735,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu, bool blocking) WARN_ON(vgic_v4_put(vcpu, blocking)); - kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); if (has_vhe()) __vgic_v3_deactivate_traps(cpu_if);

From patchwork Thu May 19 13:41:22 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 47/89] KVM: arm64: Add current vcpu and shadow_state lookup primitive
Date: Thu, 19 May 2022 14:41:22 +0100
Message-Id: <20220519134204.5379-48-will@kernel.org>

From: Marc Zyngier

In order to be able to safely manipulate the loaded state, add a helper that always returns the vcpu as mapped in the EL2 S1 address space, as well as the pointer to the shadow state. In case of failure, both pointers are returned as NULL. For non-protected setups, state is always NULL and vcpu is the EL2 mapping of the input value. handle___kvm_vcpu_run() is converted to this helper.
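The resulting contract for a hypercall handler can be sketched as follows (illustrative only; handle___example is a made-up name, not part of the patch):

static void handle___example(struct kvm_cpu_context *host_ctxt)
{
	struct kvm_shadow_vcpu_state *shadow_state;
	struct kvm_vcpu *vcpu;

	vcpu = get_current_vcpu(host_ctxt, 1, &shadow_state);
	if (!vcpu)
		return; /* the host passed a vcpu that is not currently loaded */

	if (shadow_state)
		vcpu = &shadow_state->shadow_vcpu; /* pKVM: act on the EL2 copy */

	/* ... operate on the EL2 S1 mapping of the vcpu ... */
}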
Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 41 +++++++++++++++++++++++++----- 1 file changed, 35 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 9e3a2aa6f737..40cbf45800b7 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -177,22 +177,51 @@ static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) } } +static struct kvm_vcpu *__get_current_vcpu(struct kvm_vcpu *vcpu, + struct kvm_shadow_vcpu_state **state) +{ + struct kvm_shadow_vcpu_state *sstate = NULL; + + vcpu = kern_hyp_va(vcpu); + + if (unlikely(is_protected_kvm_enabled())) { + sstate = pkvm_loaded_shadow_vcpu_state(); + if (!sstate || vcpu != sstate->host_vcpu) { + sstate = NULL; + vcpu = NULL; + } + } + + *state = sstate; + return vcpu; +} + +#define get_current_vcpu(ctxt, regnr, statepp) \ + ({ \ + DECLARE_REG(struct kvm_vcpu *, __vcpu, ctxt, regnr); \ + __get_current_vcpu(__vcpu, statepp); \ + }) + static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) { - DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1); + struct kvm_shadow_vcpu_state *shadow_state; + struct kvm_vcpu *vcpu; int ret; - if (unlikely(is_protected_kvm_enabled())) { - struct kvm_shadow_vcpu_state *shadow_state = pkvm_loaded_shadow_vcpu_state(); - struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + vcpu = get_current_vcpu(host_ctxt, 1, &shadow_state); + if (!vcpu) { + cpu_reg(host_ctxt, 1) = -EINVAL; + return; + } + if (unlikely(shadow_state)) { flush_shadow_state(shadow_state); - ret = __kvm_vcpu_run(shadow_vcpu); + ret = __kvm_vcpu_run(&shadow_state->shadow_vcpu); sync_shadow_state(shadow_state); } else { - ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu)); + ret = __kvm_vcpu_run(vcpu); } cpu_reg(host_ctxt, 1) = ret;

From patchwork Thu May 19 13:41:23 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 48/89] KVM: arm64: Skip __kvm_adjust_pc() for protected vcpus
Date: Thu, 19 May 2022 14:41:23 +0100
Message-Id: <20220519134204.5379-49-will@kernel.org>

From: Marc Zyngier

Prevent the host from issuing arbitrary PC adjustments for protected vCPUs.
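The shape of the check, pulled out of the hunk below as a sketch (the helper name is ours, not the patch's): host-initiated state changes are dropped for protected vCPUs and redirected to the EL2 shadow for everything else.

/* Sketch: should a host request against this vcpu be refused? */
static bool example_reject_host_request(struct kvm_shadow_vcpu_state *shadow_state)
{
	return shadow_state && shadow_state_is_protected(shadow_state);
}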
Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 40cbf45800b7..86dff0dc05f3 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -275,9 +275,22 @@ static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt) static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { - DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); + struct kvm_shadow_vcpu_state *shadow_state; + struct kvm_vcpu *vcpu; + + vcpu = get_current_vcpu(host_ctxt, 1, &shadow_state); + if (!vcpu) + return; + + if (shadow_state) { + /* This only applies to non-protected VMs */ + if (shadow_state_is_protected(shadow_state)) + return; + + vcpu = &shadow_state->shadow_vcpu; + } - __kvm_adjust_pc(kern_hyp_va(vcpu)); + __kvm_adjust_pc(vcpu); } static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)

From patchwork Thu May 19 13:41:24 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 49/89] KVM: arm64: Add hyp per_cpu variable to track current physical cpu number
Date: Thu, 19 May 2022 14:41:24 +0100
Message-Id: <20220519134204.5379-50-will@kernel.org>

From: Fuad Tabba

Track the current physical cpu number in a hyp per-cpu variable, since hyp cannot trust the equivalent variable at the host.

Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 3 +++ arch/arm64/kvm/arm.c | 4 ++++ arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++ 3 files changed, 9 insertions(+) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 4adf7c2a77bd..e38869e88019 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -15,6 +15,9 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); DECLARE_PER_CPU(unsigned long, kvm_hyp_vector); DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); +DECLARE_PER_CPU(int, hyp_cpu_number); + +#define hyp_smp_processor_id() (__this_cpu_read(hyp_cpu_number)) #define read_sysreg_elx(r,nvh,vh) \ ({ \ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 514519563976..3bb379f15c07 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -52,6 +52,7 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector); static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); +DECLARE_KVM_NVHE_PER_CPU(int, hyp_cpu_number); static bool vgic_present; @@ -1487,6 +1488,9 @@ static void cpu_prepare_hyp_mode(int cpu) { struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu); unsigned long tcr; + int *hyp_cpu_number_ptr = per_cpu_ptr_nvhe_sym(hyp_cpu_number, cpu); + + *hyp_cpu_number_ptr = cpu; /* * Calculate the raw per-cpu offset without a translation from the diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c index 04d194583f1e..9fcb92abd0b5 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c @@ -8,6 +8,8 @@ #include #include +DEFINE_PER_CPU(int, hyp_cpu_number); + /* * nVHE copy of data structures tracking available CPU cores. * Only entries for CPUs that were online at KVM init are populated.
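Once cpu_prepare_hyp_mode() has seeded the per-cpu variable, EL2 can index per-physical-CPU state without consulting anything host-controlled. A minimal sketch of the kind of use the next patch makes of it (last_vcpu_ran is introduced there; the helper name is assumed):

/* Sketch: locate this CPU's slot in a per-VM tracking array at EL2 */
static int *example_last_ran_slot(struct kvm_s2_mmu *mmu)
{
	return &mmu->last_vcpu_ran[hyp_smp_processor_id()];
}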
From patchwork Thu May 19 13:41:25 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 50/89] KVM: arm64: Ensure that TLBs and I-cache are private to each vcpu
Date: Thu, 19 May 2022 14:41:25 +0100
Message-Id: <20220519134204.5379-51-will@kernel.org>

From: Fuad Tabba

Guarantee that both TLBs and I-cache are private to each vcpu. Flush the CPU context if a different vcpu from the same VM is loaded on the same physical CPU.

Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 9 +++++-- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 ++++++++++++- arch/arm64/kvm/hyp/nvhe/pkvm.c | 34 ++++++++++++++++++++++++-- arch/arm64/kvm/pkvm.c | 20 +++++++++++---- 4 files changed, 70 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 343d87877aa2..da8de2b7afb4 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -45,6 +45,9 @@ struct kvm_shadow_vm { /* The total size of the donated shadow area. */ size_t shadow_area_size; + /* The total size of the donated area for last_ran. */ + size_t last_ran_size; + struct kvm_pgtable pgt; struct kvm_pgtable_mm_ops mm_ops; struct hyp_pool pool; @@ -78,8 +81,10 @@ static inline bool vcpu_is_protected(struct kvm_vcpu *vcpu) } void hyp_shadow_table_init(void *tbl); -int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, - size_t shadow_size, unsigned long pgd_hva); +int __pkvm_init_shadow(struct kvm *kvm, + unsigned long shadow_hva, size_t shadow_size, + unsigned long pgd_hva, + unsigned long last_ran_hva, size_t last_ran_size); int __pkvm_teardown_shadow(unsigned int shadow_handle); struct kvm_shadow_vcpu_state * diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 86dff0dc05f3..229ef890d459 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -145,6 +145,7 @@ static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) DECLARE_REG(u64, hcr_el2, host_ctxt, 3); struct kvm_shadow_vcpu_state *shadow_state; struct kvm_vcpu *shadow_vcpu; + int *last_ran; if (!is_protected_kvm_enabled()) return; @@ -155,6 +156,17 @@ static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) shadow_vcpu = &shadow_state->shadow_vcpu; + /* + * Guarantee that both TLBs and I-cache are private to each vcpu. If a + * vcpu from the same VM has previously run on the same physical CPU, + * nuke the relevant contexts.
+ */ + last_ran = &shadow_vcpu->arch.hw_mmu->last_vcpu_ran[hyp_smp_processor_id()]; + if (*last_ran != shadow_vcpu->vcpu_id) { + __kvm_flush_cpu_context(shadow_vcpu->arch.hw_mmu); + *last_ran = shadow_vcpu->vcpu_id; + } + if (shadow_state_is_protected(shadow_state)) { /* Propagate WFx trapping flags, trap ptrauth */ shadow_vcpu->arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI | @@ -436,9 +448,12 @@ static void handle___pkvm_init_shadow(struct kvm_cpu_context *host_ctxt) DECLARE_REG(unsigned long, host_shadow_va, host_ctxt, 2); DECLARE_REG(size_t, shadow_size, host_ctxt, 3); DECLARE_REG(unsigned long, pgd, host_ctxt, 4); + DECLARE_REG(unsigned long, last_ran, host_ctxt, 5); + DECLARE_REG(size_t, last_ran_size, host_ctxt, 6); cpu_reg(host_ctxt, 1) = __pkvm_init_shadow(host_kvm, host_shadow_va, - shadow_size, pgd); + shadow_size, pgd, + last_ran, last_ran_size); } static void handle___pkvm_teardown_shadow(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index f18f622336b8..c0403416ce1d 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -338,6 +338,8 @@ static int set_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm, struct kvm_vcpu **vcpu_array, + int *last_ran, + size_t last_ran_size, unsigned int nr_vcpus) { int i; @@ -345,6 +347,9 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm, vm->host_kvm = kvm; vm->kvm.created_vcpus = nr_vcpus; vm->kvm.arch.vtcr = host_kvm.arch.vtcr; + vm->kvm.arch.mmu.last_vcpu_ran = last_ran; + vm->last_ran_size = last_ran_size; + memset(vm->kvm.arch.mmu.last_vcpu_ran, -1, sizeof(int) * hyp_nr_cpus); for (i = 0; i < nr_vcpus; i++) { struct kvm_shadow_vcpu_state *shadow_vcpu_state = &vm->shadow_vcpu_states[i]; @@ -471,6 +476,15 @@ static int check_shadow_size(unsigned int nr_vcpus, size_t shadow_size) return 0; } +/* + * Check whether the size of the area donated by the host is sufficient for + * tracking the last vcpu that has run a physical cpu on this vm. + */ +static int check_last_ran_size(size_t size) +{ + return size >= (hyp_nr_cpus * sizeof(int)) ? 0 : -ENOMEM; +} + static void *map_donated_memory_noclear(unsigned long host_va, size_t size) { void *va = (void *)kern_hyp_va(host_va); @@ -530,6 +544,10 @@ static void unmap_donated_memory_noclear(void *va, size_t size) * pgd_hva: The host va of the area being donated for the stage-2 PGD for * the VM. Must be page aligned. Its size is implied by the VM's * VTCR. + * last_ran_hva: The host va of the area being donated for hyp to use to track + * the most recent physical cpu on which each vcpu has run. + * last_ran_size: The size of the area being donated at last_ran_hva. + * Must be a multiple of the page size. * Note: An array to the host KVM VCPUs (host VA) is passed via the pgd, as to * not to be dependent on how the VCPU's are layed out in struct kvm. * @@ -537,9 +555,11 @@ static void unmap_donated_memory_noclear(void *va, size_t size) * negative error code on failure. 
*/ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, - size_t shadow_size, unsigned long pgd_hva) + size_t shadow_size, unsigned long pgd_hva, + unsigned long last_ran_hva, size_t last_ran_size) { struct kvm_shadow_vm *vm = NULL; + void *last_ran = NULL; unsigned int nr_vcpus; size_t pgd_size = 0; void *pgd = NULL; @@ -555,12 +575,20 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, if (ret) goto err_unpin_kvm; + ret = check_last_ran_size(last_ran_size); + if (ret) + goto err_unpin_kvm; + ret = -ENOMEM; vm = map_donated_memory(shadow_hva, shadow_size); if (!vm) goto err_remove_mappings; + last_ran = map_donated_memory(last_ran_hva, last_ran_size); + if (!last_ran) + goto err_remove_mappings; + pgd_size = kvm_pgtable_stage2_pgd_size(host_kvm.arch.vtcr); pgd = map_donated_memory_noclear(pgd_hva, pgd_size); if (!pgd) @@ -570,7 +598,7 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, if (ret) goto err_remove_mappings; - ret = init_shadow_structs(kvm, vm, pgd, nr_vcpus); + ret = init_shadow_structs(kvm, vm, pgd, last_ran, last_ran_size, nr_vcpus); if (ret < 0) goto err_unpin_host_vcpus; @@ -595,6 +623,7 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, unpin_host_vcpus(vm->shadow_vcpu_states, nr_vcpus); err_remove_mappings: unmap_donated_memory(vm, shadow_size); + unmap_donated_memory(last_ran, last_ran_size); unmap_donated_memory_noclear(pgd, pgd_size); err_unpin_kvm: hyp_unpin_shared_mem(kvm, kvm + 1); @@ -636,6 +665,7 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle) shadow_size = vm->shadow_area_size; hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1); + unmap_donated_memory(vm->kvm.arch.mmu.last_vcpu_ran, vm->last_ran_size); memset(vm, 0, shadow_size); for (addr = vm; addr < (void *)vm + shadow_size; addr += PAGE_SIZE) push_hyp_memcache(mc, addr, hyp_virt_to_phys); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 40e5490ef453..67aad91dc3e5 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -111,8 +111,8 @@ static int __kvm_shadow_create(struct kvm *kvm) { struct kvm_vcpu *vcpu, **vcpu_array; unsigned int shadow_handle; - size_t pgd_sz, shadow_sz; - void *pgd, *shadow_addr; + size_t pgd_sz, shadow_sz, last_ran_sz; + void *pgd, *shadow_addr, *last_ran; unsigned long idx; int ret; @@ -138,6 +138,14 @@ static int __kvm_shadow_create(struct kvm *kvm) goto free_pgd; } + /* Allocate memory to donate to hyp for tracking mmu->last_vcpu_ran. */ + last_ran_sz = PAGE_ALIGN(num_possible_cpus() * sizeof(int)); + last_ran = alloc_pages_exact(last_ran_sz, GFP_KERNEL_ACCOUNT); + if (!last_ran) { + ret = -ENOMEM; + goto free_shadow; + } + /* Stash the vcpu pointers into the PGD */ BUILD_BUG_ON(KVM_MAX_VCPUS > (PAGE_SIZE / sizeof(u64))); vcpu_array = pgd; @@ -145,7 +153,7 @@ static int __kvm_shadow_create(struct kvm *kvm) /* Indexing of the vcpus to be sequential starting at 0. */ if (WARN_ON(vcpu->vcpu_idx != idx)) { ret = -EINVAL; - goto free_shadow; + goto free_last_ran; } vcpu_array[idx] = vcpu; @@ -153,9 +161,9 @@ static int __kvm_shadow_create(struct kvm *kvm) /* Donate the shadow memory to hyp and let hyp initialize it. 
*/ ret = kvm_call_hyp_nvhe(__pkvm_init_shadow, kvm, shadow_addr, shadow_sz, - pgd); + pgd, last_ran, last_ran_sz); if (ret < 0) - goto free_shadow; + goto free_last_ran; shadow_handle = ret; @@ -163,6 +171,8 @@ static int __kvm_shadow_create(struct kvm *kvm) kvm->arch.pkvm.shadow_handle = shadow_handle; return 0; +free_last_ran: + free_pages_exact(last_ran, last_ran_sz); free_shadow: free_pages_exact(shadow_addr, shadow_sz); free_pgd:
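The hypercall plumbing above reduces to a small per-CPU bookkeeping scheme. Below is a minimal, self-contained sketch of the idea; the demo_* names are illustrative stand-ins for kvm_s2_mmu, __kvm_flush_cpu_context() and handle___pkvm_vcpu_load(), not kernel API:

/*
 * Each physical CPU remembers the id of the last vcpu of this VM that
 * ran on it. Loading a different vcpu of the same VM on that CPU nukes
 * the VM's TLB and I-cache context so nothing leaks between vcpus.
 */
struct demo_mmu {
        int *last_vcpu_ran;     /* one slot per physical CPU */
};

static void demo_flush_cpu_context(struct demo_mmu *mmu)
{
        /* stand-in for __kvm_flush_cpu_context(): TLBI + I-cache inval */
}

static void demo_vcpu_load(struct demo_mmu *mmu, int vcpu_id, int pcpu)
{
        int *last_ran = &mmu->last_vcpu_ran[pcpu];

        if (*last_ran != vcpu_id) {
                demo_flush_cpu_context(mmu);
                *last_ran = vcpu_id;
        }
}

Because init_shadow_structs() memsets the array to -1, the first load on any physical CPU never matches and therefore always flushes.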
From patchwork Thu May 19 13:41:26 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 51/89] KVM: arm64: Introduce per-EC entry/exit handlers
Date: Thu, 19 May 2022 14:41:26 +0100
Message-Id: <20220519134204.5379-52-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

Introduce per-EC entry/exit handlers at EL2 and provide initial implementations to manage the 'flags' and fault information registers.

Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 3 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 99 +++++++++++++++++++++++--- 2 files changed, 94 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index da8de2b7afb4..d070400b5616 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -25,6 +25,9 @@ struct kvm_shadow_vcpu_state { /* A pointer to the shadow vm. */ struct kvm_shadow_vm *shadow_vm; + /* Tracks exit code for the protected guest. */ + u32 exit_code; + /* * Points to the per-cpu pointer of the cpu where it's loaded, or NULL * if not loaded. diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 229ef890d459..bbf2621f1862 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -24,6 +24,51 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt); +typedef void (*shadow_entry_exit_handler_fn)(struct kvm_vcpu *, struct kvm_vcpu *); + +static void handle_vm_entry_generic(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) +{ + unsigned long host_flags = READ_ONCE(host_vcpu->arch.flags); + + shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | + KVM_ARM64_EXCEPT_MASK); + + if (host_flags & KVM_ARM64_PENDING_EXCEPTION) { + shadow_vcpu->arch.flags |= KVM_ARM64_PENDING_EXCEPTION; + shadow_vcpu->arch.flags |= host_flags & KVM_ARM64_EXCEPT_MASK; + } else if (host_flags & KVM_ARM64_INCREMENT_PC) { + shadow_vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC; + } +} + +static void handle_vm_exit_generic(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) +{ + WRITE_ONCE(host_vcpu->arch.fault.esr_el2, + shadow_vcpu->arch.fault.esr_el2); +} + +static void handle_vm_exit_abt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) +{ + WRITE_ONCE(host_vcpu->arch.fault.esr_el2, + shadow_vcpu->arch.fault.esr_el2); + WRITE_ONCE(host_vcpu->arch.fault.far_el2, + shadow_vcpu->arch.fault.far_el2); + WRITE_ONCE(host_vcpu->arch.fault.hpfar_el2, + shadow_vcpu->arch.fault.hpfar_el2); + WRITE_ONCE(host_vcpu->arch.fault.disr_el1, + shadow_vcpu->arch.fault.disr_el1); +} + +static const shadow_entry_exit_handler_fn entry_vm_shadow_handlers[] = { + [0 ... ESR_ELx_EC_MAX] = handle_vm_entry_generic, +}; + +static const shadow_entry_exit_handler_fn exit_vm_shadow_handlers[] = { + [0 ...
ESR_ELx_EC_MAX] = handle_vm_exit_generic, + [ESR_ELx_EC_IABT_LOW] = handle_vm_exit_abt, + [ESR_ELx_EC_DABT_LOW] = handle_vm_exit_abt, +}; + static void flush_vgic_state(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) { @@ -99,6 +144,8 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu; + shadow_entry_exit_handler_fn ec_handler; + u8 esr_ec; shadow_vcpu->arch.ctxt = host_vcpu->arch.ctxt; @@ -109,8 +156,6 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; shadow_vcpu->arch.cptr_el2 = host_vcpu->arch.cptr_el2; - shadow_vcpu->arch.flags = host_vcpu->arch.flags; - shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr); shadow_vcpu->arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state; @@ -118,24 +163,62 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) flush_vgic_state(host_vcpu, shadow_vcpu); flush_timer_state(shadow_state); + + switch (ARM_EXCEPTION_CODE(shadow_state->exit_code)) { + case ARM_EXCEPTION_IRQ: + case ARM_EXCEPTION_EL1_SERROR: + case ARM_EXCEPTION_IL: + break; + case ARM_EXCEPTION_TRAP: + esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(shadow_vcpu)); + ec_handler = entry_vm_shadow_handlers[esr_ec]; + if (ec_handler) + ec_handler(host_vcpu, shadow_vcpu); + break; + default: + BUG(); + } + + shadow_state->exit_code = 0; } -static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) +static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state, + u32 exit_reason) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu; + shadow_entry_exit_handler_fn ec_handler; + unsigned long host_flags; + u8 esr_ec; host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt; host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2; host_vcpu->arch.cptr_el2 = shadow_vcpu->arch.cptr_el2; - host_vcpu->arch.fault = shadow_vcpu->arch.fault; - - host_vcpu->arch.flags = shadow_vcpu->arch.flags; - sync_vgic_state(host_vcpu, shadow_vcpu); sync_timer_state(shadow_state); + + switch (ARM_EXCEPTION_CODE(exit_reason)) { + case ARM_EXCEPTION_IRQ: + break; + case ARM_EXCEPTION_TRAP: + esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(shadow_vcpu)); + ec_handler = exit_vm_shadow_handlers[esr_ec]; + if (ec_handler) + ec_handler(host_vcpu, shadow_vcpu); + break; + case ARM_EXCEPTION_EL1_SERROR: + case ARM_EXCEPTION_IL: + break; + default: + BUG(); + } + + host_flags = READ_ONCE(host_vcpu->arch.flags) & + ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC); + WRITE_ONCE(host_vcpu->arch.flags, host_flags); + shadow_state->exit_code = exit_reason; } static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) @@ -231,7 +314,7 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) ret = __kvm_vcpu_run(&shadow_state->shadow_vcpu); - sync_shadow_state(shadow_state); + sync_shadow_state(shadow_state, ret); } else { ret = __kvm_vcpu_run(vcpu); }
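The handler tables above follow a common kernel pattern: a function-pointer array indexed by the ESR_EL2 exception class (EC), with a range designated initialiser supplying the default entry. A compilable sketch of the same pattern, with placeholder handler bodies and the architectural EC values for the two aborts:

#define DEMO_EC_MAX             0x3f    /* EC is ESR[31:26], so 6 bits */
#define DEMO_EC_IABT_LOW        0x20
#define DEMO_EC_DABT_LOW        0x24

typedef void (*demo_handler_fn)(void *host, void *guest);

static void demo_exit_generic(void *host, void *guest) { /* copy ESR only */ }
static void demo_exit_abt(void *host, void *guest) { /* also FAR/HPFAR */ }

/* [a ... b] range initialisers are a GNU C extension the kernel relies on. */
static const demo_handler_fn demo_exit_handlers[DEMO_EC_MAX + 1] = {
        [0 ... DEMO_EC_MAX]     = demo_exit_generic,
        [DEMO_EC_IABT_LOW]      = demo_exit_abt,
        [DEMO_EC_DABT_LOW]      = demo_exit_abt,
};

static void demo_dispatch_exit(unsigned int esr, void *host, void *guest)
{
        demo_handler_fn fn = demo_exit_handlers[(esr >> 26) & DEMO_EC_MAX];

        if (fn)
                fn(host, guest);
}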
From patchwork Thu May 19 13:41:27 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 52/89] KVM: arm64: Introduce lazy-ish state sync for non-protected VMs
Date: Thu, 19 May 2022 14:41:27 +0100
Message-Id: <20220519134204.5379-53-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier
Rather than blindly copying the register state between the shadow and host vCPU structures, abstract this code into some helpers which are called only for non-protected VMs running under pKVM. To facilitate host accesses to guest registers within a get/put sequence, introduce a new 'sync_state' hypercall to provide access to the registers of a non-protected VM when handling traps.

Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/arm.c | 7 ++++ arch/arm64/kvm/handle_exit.c | 22 ++++++++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 65 +++++++++++++++++++++++++++++- 5 files changed, 95 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 07ee95d0f97d..ea3b3a60bedb 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -80,6 +80,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_teardown_shadow, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_sync_state, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 066eb7234bdd..160cbf973bcb 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -512,6 +512,7 @@ struct kvm_vcpu_arch { #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE (1 << 13) /* Save TRBE context if active */ #define KVM_ARM64_FP_FOREIGN_FPSTATE (1 << 14) #define KVM_ARM64_ON_UNSUPPORTED_CPU (1 << 15) /* Physical CPU not in supported_cpus */ +#define KVM_ARM64_PKVM_STATE_DIRTY (1 << 16) #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 3bb379f15c07..7c57c14e173a 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -448,6 +448,10 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) kvm_call_hyp(__vgic_v3_save_vmcr_aprs, &vcpu->arch.vgic_cpu.vgic_v3); kvm_call_hyp_nvhe(__pkvm_vcpu_put); + + /* __pkvm_vcpu_put implies a sync of the state */ + if (!kvm_vm_is_protected(vcpu->kvm)) + vcpu->arch.flags |= KVM_ARM64_PKVM_STATE_DIRTY; } kvm_arch_vcpu_put_debug_state_flags(vcpu); @@ -575,6 +579,9 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) return ret; if (is_protected_kvm_enabled()) { + /* Start with the vcpu in a dirty state */ + if (!kvm_vm_is_protected(vcpu->kvm)) + vcpu->arch.flags |= KVM_ARM64_PKVM_STATE_DIRTY; ret = kvm_shadow_create(kvm); if (ret) return ret; diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 97fe14aab1a3..9334c4a64007 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -203,6 +203,21 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu) { int handled; + /* + * If we run a non-protected VM when protection is enabled + * system-wide, resync the state from the hypervisor and mark + * it as dirty on the host side if it wasn't dirty already + * (which could happen if preemption has taken place).
+ */ + if (is_protected_kvm_enabled() && !kvm_vm_is_protected(vcpu->kvm)) { + preempt_disable(); + if (!(vcpu->arch.flags & KVM_ARM64_PKVM_STATE_DIRTY)) { + kvm_call_hyp_nvhe(__pkvm_vcpu_sync_state); + vcpu->arch.flags |= KVM_ARM64_PKVM_STATE_DIRTY; + } + preempt_enable(); + } + /* * See ARM ARM B1.14.1: "Hyp traps on instructions * that fail their condition code check" @@ -270,6 +285,13 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index) /* For exit types that need handling before we can be preempted */ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index) { + /* + * We just exited, so the state is clean from a hypervisor + * perspective. + */ + if (is_protected_kvm_enabled()) + vcpu->arch.flags &= ~KVM_ARM64_PKVM_STATE_DIRTY; + if (ARM_SERROR_PENDING(exception_index)) { if (this_cpu_has_cap(ARM64_HAS_RAS_EXTN)) { u64 disr = kvm_vcpu_get_disr(vcpu); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index bbf2621f1862..2a12d6f710ef 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -140,6 +140,38 @@ static void sync_timer_state(struct kvm_shadow_vcpu_state *shadow_state) __vcpu_sys_reg(shadow_vcpu, CNTV_CTL_EL0) = read_sysreg_el0(SYS_CNTV_CTL); } +static void __copy_vcpu_state(const struct kvm_vcpu *from_vcpu, + struct kvm_vcpu *to_vcpu) +{ + int i; + + to_vcpu->arch.ctxt.regs = from_vcpu->arch.ctxt.regs; + to_vcpu->arch.ctxt.spsr_abt = from_vcpu->arch.ctxt.spsr_abt; + to_vcpu->arch.ctxt.spsr_und = from_vcpu->arch.ctxt.spsr_und; + to_vcpu->arch.ctxt.spsr_irq = from_vcpu->arch.ctxt.spsr_irq; + to_vcpu->arch.ctxt.spsr_fiq = from_vcpu->arch.ctxt.spsr_fiq; + + /* + * Copy the sysregs, but don't mess with the timer state which + * is directly handled by EL1 and is expected to be preserved. + */ + for (i = 1; i < NR_SYS_REGS; i++) { + if (i >= CNTVOFF_EL2 && i <= CNTP_CTL_EL0) + continue; + to_vcpu->arch.ctxt.sys_regs[i] = from_vcpu->arch.ctxt.sys_regs[i]; + } +} + +static void __sync_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state) +{ + __copy_vcpu_state(&shadow_state->shadow_vcpu, shadow_state->host_vcpu); +} + +static void __flush_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state) +{ + __copy_vcpu_state(shadow_state->host_vcpu, &shadow_state->shadow_vcpu); +} + static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) { struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; @@ -147,7 +179,16 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) shadow_entry_exit_handler_fn ec_handler; u8 esr_ec; - shadow_vcpu->arch.ctxt = host_vcpu->arch.ctxt; + /* + * If we deal with a non-protected guest and the state is potentially + * dirty (from a host perspective), copy the state back into the shadow. 
+ */ + if (!shadow_state_is_protected(shadow_state)) { + unsigned long host_flags = READ_ONCE(host_vcpu->arch.flags); + + if (host_flags & KVM_ARM64_PKVM_STATE_DIRTY) + __flush_vcpu_state(shadow_state); + } shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl; @@ -268,10 +309,31 @@ static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) shadow_state = pkvm_loaded_shadow_vcpu_state(); if (shadow_state) { + struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu; + + if (!shadow_state_is_protected(shadow_state) && + !(READ_ONCE(host_vcpu->arch.flags) & KVM_ARM64_PKVM_STATE_DIRTY)) + __sync_vcpu_state(shadow_state); + pkvm_put_shadow_vcpu_state(shadow_state); } } +static void handle___pkvm_vcpu_sync_state(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_shadow_vcpu_state *shadow_state; + + if (!is_protected_kvm_enabled()) + return; + + shadow_state = pkvm_loaded_shadow_vcpu_state(); + + if (!shadow_state || shadow_state_is_protected(shadow_state)) + return; + + __sync_vcpu_state(shadow_state); +} + static struct kvm_vcpu *__get_current_vcpu(struct kvm_vcpu *vcpu, struct kvm_shadow_vcpu_state **state) { @@ -579,6 +641,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_teardown_shadow), HANDLE_FUNC(__pkvm_vcpu_load), HANDLE_FUNC(__pkvm_vcpu_put), + HANDLE_FUNC(__pkvm_vcpu_sync_state), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
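Taken together, flush_shadow_state(), handle___pkvm_vcpu_put() and the new hypercall implement a dirty-flag protocol around the shadow register state. A simplified model of that protocol follows; the flag value mirrors the patch, while the demo_* types and helpers are illustrative assumptions:

#define PKVM_STATE_DIRTY        (1u << 16)

struct demo_vcpu {
        unsigned int flags;
        unsigned long regs[31];
};

/* Stand-in for the __pkvm_vcpu_sync_state hypercall (shadow -> host). */
static void demo_hyp_sync_state(struct demo_vcpu *host) { }

/* hyp, on flush: copy host -> shadow only if the host dirtied it. */
static void demo_flush(struct demo_vcpu *host, struct demo_vcpu *shadow)
{
        int i;

        if (!(host->flags & PKVM_STATE_DIRTY))
                return;                 /* shadow copy is already current */
        for (i = 0; i < 31; i++)
                shadow->regs[i] = host->regs[i];
}

/* host, in a trap handler, before touching guest registers. */
static void demo_host_access(struct demo_vcpu *host)
{
        if (!(host->flags & PKVM_STATE_DIRTY)) {
                demo_hyp_sync_state(host);       /* pull from the shadow */
                host->flags |= PKVM_STATE_DIRTY; /* sync at most once */
        }
}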
From patchwork Thu May 19 13:41:28 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 53/89] KVM: arm64: Lazy host FP save/restore
Date: Thu, 19 May 2022 14:41:28 +0100
Message-Id: <20220519134204.5379-54-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

Implement lazy save/restore of the host FPSIMD register state at EL2. This allows us to save/restore guest FPSIMD registers without involving the host and means that we can avoid having to repopulate the shadow register state on every flush.

Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 57 ++++++++++++++++++++++++++---- 1 file changed, 51 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 2a12d6f710ef..228736a9ab40 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -20,6 +20,14 @@ #include +/* + * Host FPSIMD state. Written to when the guest accesses its own FPSIMD state, + * and read when the guest state is live and we need to switch back to the host. + * + * Only valid when the KVM_ARM64_FP_ENABLED flag is set in the shadow structure.
+ */ +static DEFINE_PER_CPU(struct user_fpsimd_state, loaded_host_fpsimd_state); + DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt); @@ -195,10 +203,8 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2; shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; - shadow_vcpu->arch.cptr_el2 = host_vcpu->arch.cptr_el2; shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr); - shadow_vcpu->arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state; shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2; @@ -235,7 +241,6 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state, host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt; host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2; - host_vcpu->arch.cptr_el2 = shadow_vcpu->arch.cptr_el2; sync_vgic_state(host_vcpu, shadow_vcpu); sync_timer_state(shadow_state); @@ -262,6 +267,27 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state, shadow_state->exit_code = exit_reason; } +static void fpsimd_host_restore(void) +{ + sysreg_clear_set(cptr_el2, CPTR_EL2_TZ | CPTR_EL2_TFP, 0); + isb(); + + if (unlikely(is_protected_kvm_enabled())) { + struct kvm_shadow_vcpu_state *shadow_state = pkvm_loaded_shadow_vcpu_state(); + struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + struct user_fpsimd_state *host_fpsimd_state = this_cpu_ptr(&loaded_host_fpsimd_state); + + __fpsimd_save_state(&shadow_vcpu->arch.ctxt.fp_regs); + __fpsimd_restore_state(host_fpsimd_state); + + shadow_vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED; + shadow_vcpu->arch.flags |= KVM_ARM64_FP_HOST; + } + + if (system_supports_sve()) + sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); +} + static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(unsigned int, shadow_handle, host_ctxt, 1); @@ -291,6 +317,9 @@ static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) *last_ran = shadow_vcpu->vcpu_id; } + shadow_vcpu->arch.host_fpsimd_state = this_cpu_ptr(&loaded_host_fpsimd_state); + shadow_vcpu->arch.flags |= KVM_ARM64_FP_HOST; + if (shadow_state_is_protected(shadow_state)) { /* Propagate WFx trapping flags, trap ptrauth */ shadow_vcpu->arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI | @@ -310,6 +339,10 @@ static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) if (shadow_state) { struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu; + struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu; + + if (shadow_vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + fpsimd_host_restore(); if (!shadow_state_is_protected(shadow_state) && !(READ_ONCE(host_vcpu->arch.flags) & KVM_ARM64_PKVM_STATE_DIRTY)) @@ -377,6 +410,19 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) ret = __kvm_vcpu_run(&shadow_state->shadow_vcpu); sync_shadow_state(shadow_state, ret); + + if (shadow_state->shadow_vcpu.arch.flags & KVM_ARM64_FP_ENABLED) { + /* + * The guest has used the FP, trap all accesses + * from the host (both FP and SVE). 
+ */ + u64 reg = CPTR_EL2_TFP; + + if (system_supports_sve()) + reg |= CPTR_EL2_TZ; + + sysreg_clear_set(cptr_el2, 0, reg); + } } else { ret = __kvm_vcpu_run(vcpu); } @@ -707,10 +753,9 @@ void handle_trap(struct kvm_cpu_context *host_ctxt) case ESR_ELx_EC_SMC64: handle_host_smc(host_ctxt); break; + case ESR_ELx_EC_FP_ASIMD: case ESR_ELx_EC_SVE: - sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0); - isb(); - sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); + fpsimd_host_restore(); break; case ESR_ELx_EC_IABT_LOW: case ESR_ELx_EC_DABT_LOW:
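The lazy FPSIMD switch is driven entirely by the CPTR_EL2 trap bits: leave the guest's FP state live after it uses FP, trap the host's next FP/SVE access, and only then swap the register file back. A condensed model follows; the TFP/TZ bit positions match the architecture, while demo_cptr and the demo_* helpers are stand-ins for sysreg_clear_set(), isb() and __fpsimd_{save,restore}_state():

#define CPTR_EL2_TZ     (1UL << 8)      /* trap SVE accesses */
#define CPTR_EL2_TFP    (1UL << 10)     /* trap FP/SIMD accesses */

static unsigned long demo_cptr;         /* models CPTR_EL2 */
static void demo_save_guest_fp(void) { }
static void demo_restore_host_fp(void) { }

/* Guest used FP: keep its state live, arm a trap for the host. */
static void demo_arm_host_fp_trap(void)
{
        demo_cptr |= CPTR_EL2_TFP | CPTR_EL2_TZ;
}

/* Host FP trap fires: drop the traps, then swap guest state for host's. */
static void demo_fpsimd_host_restore(void)
{
        demo_cptr &= ~(CPTR_EL2_TFP | CPTR_EL2_TZ);
        /* on real hardware an isb() is required before touching FP regs */
        demo_save_guest_fp();
        demo_restore_host_fp();
}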
From patchwork Thu May 19 13:41:29 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 54/89] KVM: arm64: Reduce host/shadow vcpu state copying
Date: Thu, 19 May 2022 14:41:29 +0100
Message-Id: <20220519134204.5379-55-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

When running with pKVM enabled, protected guests run with a fixed CPU configuration and therefore features such as hardware debug and SVE are unavailable and their state does not need to be copied from the host structures on each flush operation. Although non-protected guests do require the host and shadow structures to be kept in-sync with each other, we can defer writing back to the host to an explicit sync hypercall, rather than doing it after every vCPU run.

Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 228736a9ab40..e82c0faf6c81 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -196,17 +196,18 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state) if (host_flags & KVM_ARM64_PKVM_STATE_DIRTY) __flush_vcpu_state(shadow_state); - } - shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); - shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl; + shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); + shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl; - shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2; - shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; + shadow_vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS & ~(HCR_RW | HCR_TWI | HCR_TWE); + shadow_vcpu->arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2); - shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr); + shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; + shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr); + } - shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2; + shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2; flush_vgic_state(host_vcpu, shadow_vcpu); flush_timer_state(shadow_state); @@ -238,10 +239,10 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state, unsigned long host_flags; u8 esr_ec; - host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt; - - host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2; - + /* + * Don't sync the vcpu GPR/sysreg state after a run. Instead, + * leave it in the shadow until someone actually requires it.
+ */ sync_vgic_state(host_vcpu, shadow_vcpu); sync_timer_state(shadow_state);

From patchwork Thu May 19 13:41:30 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 55/89] KVM: arm64: Do not pass the vcpu to __pkvm_host_map_guest()
Date: Thu, 19 May 2022 14:41:30 +0100
Message-Id: <20220519134204.5379-56-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
From: Fuad Tabba

__pkvm_host_map_guest() always applies to the loaded vcpu in hyp, and should not trust the host to provide the vcpu.

Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 15 ++++----------- arch/arm64/kvm/mmu.c | 6 +++--- 2 files changed, 7 insertions(+), 14 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index e82c0faf6c81..0f1c9d27f6eb 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -445,20 +445,15 @@ static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 3); - struct kvm_shadow_vcpu_state *shadow_state; + struct kvm_vcpu *host_vcpu; struct kvm_vcpu *shadow_vcpu; - struct kvm *host_kvm; - unsigned int handle; + struct kvm_shadow_vcpu_state *shadow_state; int ret = -EINVAL; if (!is_protected_kvm_enabled()) goto out; - host_vcpu = kern_hyp_va(host_vcpu); - host_kvm = kern_hyp_va(host_vcpu->kvm); - handle = host_kvm->arch.pkvm.shadow_handle; - shadow_state = pkvm_load_shadow_vcpu_state(handle, host_vcpu->vcpu_idx); + shadow_state = pkvm_loaded_shadow_vcpu_state(); if (!shadow_state) goto out; @@ -468,11 +463,9 @@ static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt) /* Topup shadow memcache with the host's */ ret = pkvm_refill_memcache(shadow_vcpu, host_vcpu); if (ret) - goto out_put_state; + goto out; ret = __pkvm_host_share_guest(pfn, gfn, shadow_vcpu); -out_put_state: - pkvm_put_shadow_vcpu_state(shadow_state); out: cpu_reg(host_ctxt, 1) = ret; } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index c74c431588a3..137d4382ed1c 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1143,9 +1143,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, return 0; } -static int pkvm_host_map_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu) +static int pkvm_host_map_guest(u64 pfn, u64 gfn) { - int ret = kvm_call_hyp_nvhe(__pkvm_host_map_guest, pfn, gfn, vcpu); + int ret = kvm_call_hyp_nvhe(__pkvm_host_map_guest, pfn, gfn); /* * Getting -EPERM at this point implies that the pfn has already been @@ -1211,7 +1211,7 @@ static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, write_lock(&kvm->mmu_lock); pfn = page_to_pfn(page); - ret = pkvm_host_map_guest(pfn, fault_ipa >> PAGE_SHIFT, vcpu); + ret = pkvm_host_map_guest(pfn, fault_ipa >> PAGE_SHIFT); if (ret) { if (ret == -EAGAIN) ret = 0;
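The shape of this change generalises to most pKVM hypercalls: derive the object a hypercall operates on from hyp-private state instead of sanitising a host-supplied pointer. Schematically, with every demo_* name being an illustrative assumption rather than the pkvm API:

struct demo_state {
        int loaded;
        int vcpu_id;
};

static struct demo_state demo_cpu_state;        /* per-CPU in reality */

static struct demo_state *demo_loaded_state(void)
{
        return demo_cpu_state.loaded ? &demo_cpu_state : 0;
}

static int demo_share_page(unsigned long pfn, unsigned long gfn, int vcpu_id)
{
        return 0;       /* would update stage-2 tables and page ownership */
}

static int demo_hcall_map_guest(unsigned long pfn, unsigned long gfn)
{
        struct demo_state *state = demo_loaded_state();

        /* No host-controlled pointer left to validate. */
        if (!state)
                return -22;     /* -EINVAL */
        return demo_share_page(pfn, gfn, state->vcpu_id);
}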
From patchwork Thu May 19 13:41:31 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 56/89] KVM: arm64: Check directly whether the vcpu is protected
Date: Thu, 19 May 2022 14:41:31 +0100
Message-Id: <20220519134204.5379-57-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Fuad Tabba

This makes for simpler code and ensures we're always looking at hyp state.
Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/switch.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 9d2b971e8613..6bb979ee51cc 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -215,7 +215,7 @@ static const exit_handler_fn pvm_exit_handlers[] = { static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) { - if (unlikely(kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)))) + if (unlikely(vcpu_is_protected(vcpu))) return pvm_exit_handlers; return hyp_exit_handlers; @@ -234,9 +234,7 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) */ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code) { - struct kvm *kvm = kern_hyp_va(vcpu->kvm); - - if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) { + if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) { /* * As we have caught the guest red-handed, decide that it isn't * fit for purpose anymore by making the vcpu invalid. The VMM
From patchwork Thu May 19 13:41:32 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 57/89] KVM: arm64: Trap debug break and watch from guest
Date: Thu, 19 May 2022 14:41:32 +0100
Message-Id: <20220519134204.5379-58-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Fuad Tabba

Debug and trace are not currently supported for protected guests, so trap accesses to the related registers and emulate them as RAZ/WI.

Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 11 +++++++++++ 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index c0403416ce1d..cd0712e13ab0 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -108,7 +108,7 @@ static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu) /* Trap Debug */ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), feature_ids)) - mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE; + mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA; /* Trap OS Double Lock */ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DOUBLELOCK), feature_ids)) diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index ddea42d7baf9..e732826f9624 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -356,6 +356,17 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = { /* Cache maintenance by set/way operations are restricted. */ /* Debug and Trace Registers are restricted.
*/ + RAZ_WI(SYS_DBGBVRn_EL1(0)), + RAZ_WI(SYS_DBGBCRn_EL1(0)), + RAZ_WI(SYS_DBGWVRn_EL1(0)), + RAZ_WI(SYS_DBGWCRn_EL1(0)), + RAZ_WI(SYS_MDSCR_EL1), + RAZ_WI(SYS_OSLAR_EL1), + RAZ_WI(SYS_OSLSR_EL1), + RAZ_WI(SYS_OSDLR_EL1), + + /* Group 1 ID registers */ + RAZ_WI(SYS_REVIDR_EL1), /* AArch64 mappings of the AArch32 ID registers */ /* CRm=1 */
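RAZ/WI emulation itself is mechanical: a trapped read returns zero, a trapped write is discarded, and in both cases the PC is stepped past the trapped MRS/MSR. A self-contained sketch of that behaviour (the function and struct here are illustrative, not the kernel's sys_reg_desc machinery):

#include <stdbool.h>

struct demo_regs {
        unsigned long x[31];
        unsigned long pc;
};

static bool demo_sysreg_raz_wi(struct demo_regs *regs, bool is_read, int rt)
{
        if (is_read)
                regs->x[rt] = 0;        /* Read-As-Zero */
        /* writes are simply ignored */
        regs->pc += 4;                  /* skip the trapped instruction */
        return true;                    /* handled; resume the guest */
}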
From patchwork Thu May 19 13:41:33 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 58/89] KVM: arm64: Restrict protected VM capabilities
Date: Thu, 19 May 2022 14:41:33 +0100
Message-Id: <20220519134204.5379-59-will@kernel.org>

From: Fuad Tabba

Restrict protected VM capabilities based on the fixed configuration for
protected VMs. No functional change intended in current KVM-supported
modes (nVHE, VHE).

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pkvm.h | 27 ++++++++++++
 arch/arm64/kvm/arm.c              | 69 ++++++++++++++++++++++++++++++-
 2 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index b92440cfb5b4..6f13f62558dd 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -208,6 +208,33 @@ void kvm_shadow_destroy(struct kvm *kvm);
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) \
 	)

+/*
+ * Returns the maximum number of breakpoints supported for protected VMs.
+ */
+static inline int pkvm_get_max_brps(void)
+{
+	int num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS),
+			    PVM_ID_AA64DFR0_ALLOW);
+
+	/*
+	 * If breakpoints are supported, the maximum number is 1 + the field.
+	 * Otherwise, return 0, which is not compliant with the architecture,
+	 * but is reserved and is used here to indicate no debug support.
+	 */
+	return num ? num + 1 : 0;
+}
+
+/*
+ * Returns the maximum number of watchpoints supported for protected VMs.
+ */
+static inline int pkvm_get_max_wrps(void)
+{
+	int num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS),
+			    PVM_ID_AA64DFR0_ALLOW);
+
+	return num ? num + 1 : 0;
+}
+
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);
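As a worked example of the field arithmetic (values hypothetical): if the
BRPs field of ID_AA64DFR0_EL1 reads 5, the architecture defines 5 + 1 = 6
breakpoints:

	/* Sketch: BRPs occupies bits [15:12] of ID_AA64DFR0_EL1 */
	u64 dfr0 = 0x5000;				/* hypothetical value */
	int field = FIELD_GET(GENMASK(15, 12), dfr0);	/* -> 5 */
	int nr_bps = field ? field + 1 : 0;		/* -> 6 */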
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7c57c14e173a..10e036bf06e3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -194,9 +194,10 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_unshare_hyp(kvm, kvm + 1);
 }

-int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+static int kvm_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
+
 	switch (ext) {
 	case KVM_CAP_IRQCHIP:
 		r = vgic_present;
@@ -294,6 +295,72 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }

+/*
+ * Checks whether the extension specified in ext is supported in protected
+ * mode for the specified VM.
+ * The capabilities supported by KVM in general are passed in kvm_cap.
+ */
+static int pkvm_check_extension(struct kvm *kvm, long ext, int kvm_cap)
+{
+	int r;
+
+	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+	case KVM_CAP_MSI_DEVID:
+	case KVM_CAP_ARM_VM_IPA_SIZE:
+		r = kvm_cap;
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_BPS:
+		r = min(kvm_cap, pkvm_get_max_brps());
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_WPS:
+		r = min(kvm_cap, pkvm_get_max_wrps());
+		break;
+	case KVM_CAP_ARM_PMU_V3:
+		r = kvm_cap && FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
+					 PVM_ID_AA64DFR0_ALLOW);
+		break;
+	case KVM_CAP_ARM_SVE:
+		r = kvm_cap && FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_SVE),
+					 PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+		r = kvm_cap &&
+		    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_API),
+			      PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA),
+			      PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = kvm_cap &&
+		    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI),
+			      PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA),
+			      PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	default:
+		r = 0;
+		break;
+	}
+
+	return r;
+}
+
+int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+{
+	int r = kvm_check_extension(kvm, ext);
+
+	if (kvm && kvm_vm_is_protected(kvm))
+		r = pkvm_check_extension(kvm, ext, r);
+
+	return r;
+}
+
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
 {
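From userspace, the clamped values are then visible through the usual
extension check on the VM file descriptor, along these lines (sketch,
error handling elided; vm_fd is assumed to be a protected VM's fd):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int max_bps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_DEBUG_HW_BPS);
	/* For a protected VM: min(host support, pKVM's fixed allowance). */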
From patchwork Thu May 19 13:41:34 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 59/89] KVM: arm64: Do not support MTE for protected VMs
Date: Thu, 19 May 2022 14:41:34 +0100
Message-Id: <20220519134204.5379-60-will@kernel.org>

From: Fuad Tabba

Return an error (-EINVAL) when userspace tries to enable MTE on a
protected VM.
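From userspace the refusal would surface roughly as follows (sketch;
vm_fd is assumed to be a protected VM's fd):

	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0 && errno == EINVAL)
		/* MTE unsupported, vCPUs already created, or protected VM */;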
Signed-off-by: Fuad Tabba
Signed-off-by: Peter Collingbourne
---
 arch/arm64/kvm/arm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 10e036bf06e3..8a1b4ba1dfa7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -90,7 +90,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		break;
 	case KVM_CAP_ARM_MTE:
 		mutex_lock(&kvm->lock);
-		if (!system_supports_mte() || kvm->created_vcpus) {
+		if (!system_supports_mte() ||
+		    kvm_vm_is_protected(kvm) ||
+		    kvm->created_vcpus) {
 			r = -EINVAL;
 		} else {
 			r = 0;

From patchwork Thu May 19 13:41:35 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 60/89] KVM: arm64: Refactor reset_mpidr to extract its computation
Date: Thu, 19 May 2022 14:41:35 +0100
Message-Id: <20220519134204.5379-61-will@kernel.org>
From: Fuad Tabba

Move the computation of the MPIDR to its own function in a shared
header, as the computation will be used by hyp in protected mode.
No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/sys_regs.c | 14 +-------------
 arch/arm64/kvm/sys_regs.h | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7886989443b9..d2b1ad662546 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -598,19 +598,7 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)

 static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	u64 mpidr;
-
-	/*
-	 * Map the vcpu_id into the first three affinity level fields of
-	 * the MPIDR. We limit the number of VCPUs in level 0 due to a
-	 * limitation to 16 CPUs in that level in the ICC_SGIxR registers
-	 * of the GICv3 to be able to address each CPU directly when
-	 * sending IPIs.
-	 */
-	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
-	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
-	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
-	vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1);
+	vcpu_write_sys_reg(vcpu, calculate_mpidr(vcpu), MPIDR_EL1);
 }

 static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
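A worked example of the packing, assuming the usual Aff0/Aff1/Aff2 byte
layout (MPIDR_LEVEL_SHIFT() of 0, 8 and 16):

	/* vcpu_id = 300 = 0x12c:
	 *   Aff0 = 0x12c & 0xf          = 0xc  (CPU 12 of the 16 in its group)
	 *   Aff1 = (0x12c >> 4) & 0xff  = 0x12 (group 18)
	 *   Aff2 = (0x12c >> 12) & 0xff = 0
	 * plus bit 31 (RES1), giving MPIDR_EL1 = 0x8000120c.
	 */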
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index cc0cc95a0280..9b32772f398e 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -183,6 +183,25 @@ find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
 	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
 }

+static inline u64 calculate_mpidr(const struct kvm_vcpu *vcpu)
+{
+	u64 mpidr;
+
+	/*
+	 * Map the vcpu_id into the first three affinity level fields of
+	 * the MPIDR. We limit the number of VCPUs in level 0 due to a
+	 * limitation to 16 CPUs in that level in the ICC_SGIxR registers
+	 * of the GICv3 to be able to address each CPU directly when
+	 * sending IPIs.
+	 */
+	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
+	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
+	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
+	mpidr |= (1ULL << 31);
+
+	return mpidr;
+}
+
 const struct sys_reg_desc *find_reg_by_id(u64 id,
 					  struct sys_reg_params *params,
 					  const struct sys_reg_desc table[],

From patchwork Thu May 19 13:41:36 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 61/89] KVM: arm64: Reset sysregs for protected VMs
Date: Thu, 19 May 2022 14:41:36 +0100
Message-Id: <20220519134204.5379-62-will@kernel.org>
From: Fuad Tabba

Create a framework for resetting protected VM system registers to
their architecturally defined reset values. No functional change
intended, as these are not hooked in yet.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  1 +
 arch/arm64/kvm/hyp/nvhe/sys_regs.c     | 84 +++++++++++++++++++++++++-
 2 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index d070400b5616..e772f9835a86 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -98,6 +98,7 @@ struct kvm_shadow_vcpu_state *pkvm_loaded_shadow_vcpu_state(void);
 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
 bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+void kvm_reset_pvm_sys_regs(struct kvm_vcpu *vcpu);
 int kvm_check_pvm_sysreg_table(void);

 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index e732826f9624..aeea565d84b8 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -470,8 +470,85 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = {
 	/* Performance Monitoring Registers are restricted. */
 };

+/* A structure to track reset values for system registers in protected vcpus. */
+struct sys_reg_desc_reset {
+	/* Index into sys_reg[]. */
+	int reg;
+
+	/* Reset function. */
+	void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc_reset *);
+
+	/* Reset value. */
+	u64 value;
+};
+
+static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc_reset *r)
+{
+	__vcpu_sys_reg(vcpu, r->reg) = read_sysreg(actlr_el1);
+}
+
+static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc_reset *r)
+{
+	__vcpu_sys_reg(vcpu, r->reg) = read_sysreg(amair_el1);
+}
+
+static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc_reset *r)
+{
+	__vcpu_sys_reg(vcpu, r->reg) = calculate_mpidr(vcpu);
+}
+
+static void reset_value(struct kvm_vcpu *vcpu, const struct sys_reg_desc_reset *r)
+{
+	__vcpu_sys_reg(vcpu, r->reg) = r->value;
+}
+
+/* Specify the register's reset value. */
+#define RESET_VAL(REG, RESET_VAL) { REG, reset_value, RESET_VAL }
+
+/* Specify a function that calculates the register's reset value. */
+#define RESET_FUNC(REG, RESET_FUNC) { REG, RESET_FUNC, 0 }
+
+/*
+ * Architected system register reset values for protected VMs.
+ * Important: Must be sorted ascending by REG (index into sys_reg[])
+ */
+static const struct sys_reg_desc_reset pvm_sys_reg_reset_vals[] = {
+	RESET_FUNC(MPIDR_EL1, reset_mpidr),
+	RESET_VAL(SCTLR_EL1, 0x00C50078),
+	RESET_FUNC(ACTLR_EL1, reset_actlr),
+	RESET_VAL(CPACR_EL1, 0),
+	RESET_VAL(ZCR_EL1, 0),
+	RESET_VAL(TCR_EL1, 0),
+	RESET_VAL(VBAR_EL1, 0),
+	RESET_VAL(CONTEXTIDR_EL1, 0),
+	RESET_FUNC(AMAIR_EL1, reset_amair_el1),
+	RESET_VAL(CNTKCTL_EL1, 0),
+	RESET_VAL(MDSCR_EL1, 0),
+	RESET_VAL(MDCCINT_EL1, 0),
+	RESET_VAL(DISR_EL1, 0),
+	RESET_VAL(PMCCFILTR_EL0, 0),
+	RESET_VAL(PMUSERENR_EL0, 0),
+};
+
 /*
- * Checks that the sysreg table is unique and in-order.
+ * Set system registers to their reset values.
+ *
+ * Walk the reset table and set each listed register on the protected
+ * vcpu to its architecturally defined reset value.
+ */
+void kvm_reset_pvm_sys_regs(struct kvm_vcpu *vcpu)
+{
+	unsigned long i;
+
+	for (i = 0; i < ARRAY_SIZE(pvm_sys_reg_reset_vals); i++) {
+		const struct sys_reg_desc_reset *r = &pvm_sys_reg_reset_vals[i];
+
+		r->reset(vcpu, r);
+	}
+}
+
+/*
+ * Checks that the sysreg tables are unique and in-order.
  *
  * Returns 0 if the table is consistent, or 1 otherwise.
  */
@@ -484,6 +561,11 @@ int kvm_check_pvm_sysreg_table(void)
 			return 1;
 	}

+	for (i = 1; i < ARRAY_SIZE(pvm_sys_reg_reset_vals); i++) {
+		if (pvm_sys_reg_reset_vals[i-1].reg >= pvm_sys_reg_reset_vals[i].reg)
+			return 1;
+	}
+
 	return 0;
 }
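To make the table shape concrete, the macro expansion is direct; for
example:

	RESET_VAL(SCTLR_EL1, 0x00C50078)
	/* expands to */
	{ SCTLR_EL1, reset_value, 0x00C50078 }

so reset_value() simply stores 0x00C50078 into the vcpu's shadow copy of
SCTLR_EL1. The ascending-REG ordering that the new check enforces would
presumably also allow the table to be binary-searched rather than only
walked, should a per-register lookup be needed later.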
From patchwork Thu May 19 13:41:37 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 62/89] KVM: arm64: Move pkvm_vcpu_init_traps to shadow vcpu init
Date: Thu, 19 May 2022 14:41:37 +0100
Message-Id: <20220519134204.5379-63-will@kernel.org>

From: Fuad Tabba

Move the initialization of traps to the initialization of the shadow
vcpu, and remove the associated hypercall. No functional change
intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h               | 1 -
 arch/arm64/kvm/arm.c                           | 8 --------
 arch/arm64/kvm/hyp/include/nvhe/trap_handler.h | 2 --
 arch/arm64/kvm/hyp/nvhe/hyp-main.c             | 8 --------
 arch/arm64/kvm/hyp/nvhe/pkvm.c                 | 4 +++-
 5 files changed, 3 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ea3b3a60bedb..7af0b7695a2c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -73,7 +73,6 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
-	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_save_vmcr_aprs,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_shadow,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8a1b4ba1dfa7..65af1757e73a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -664,14 +664,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 		static_branch_inc(&userspace_irqchip_in_use);
 	}

-	/*
-	 * Initialize traps for protected VMs.
-	 * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
-	 * the code is in place for first run initialization at EL2.
-	 */
-	if (kvm_vm_is_protected(kvm))
-		kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
-
 	mutex_lock(&kvm->lock);
 	set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
 	mutex_unlock(&kvm->lock);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
index 45a84f0ade04..1e6d995968a1 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
@@ -15,6 +15,4 @@
 #define DECLARE_REG(type, name, ctxt, reg)	\
 	type name = (type)cpu_reg(ctxt, (reg))

-void __pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu);
-
 #endif /* __ARM64_KVM_NVHE_TRAP_HANDLER_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 0f1c9d27f6eb..c1939dd2294f 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -620,13 +620,6 @@ static void handle___pkvm_prot_finalize(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = __pkvm_prot_finalize();
 }

-static void handle___pkvm_vcpu_init_traps(struct kvm_cpu_context *host_ctxt)
-{
-	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
-
-	__pkvm_vcpu_init_traps(kern_hyp_va(vcpu));
-}
-
 static void handle___pkvm_init_shadow(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm *, host_kvm, host_ctxt, 1);
@@ -674,7 +667,6 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
-	HANDLE_FUNC(__pkvm_vcpu_init_traps),
 	HANDLE_FUNC(__vgic_v3_save_vmcr_aprs),
 	HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs),
 	HANDLE_FUNC(__pkvm_init_shadow),
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index cd0712e13ab0..2c13ba0f2bf2 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -188,7 +188,7 @@ static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
 /*
  * Initialize trap register values for protected VMs.
  */
-void __pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
 {
 	pvm_init_trap_regs(vcpu);
 	pvm_init_traps_aa64pfr0(vcpu);
@@ -363,6 +363,8 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		shadow_vcpu->vcpu_idx = i;

 		shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu;
+
+		pkvm_vcpu_init_traps(shadow_vcpu);
 	}

 	return 0;

From patchwork Thu May 19 13:41:38 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 63/89] KVM: arm64: Fix initializing traps in protected mode
Date: Thu, 19 May 2022 14:41:38 +0100
Message-Id: <20220519134204.5379-64-will@kernel.org>
From: Fuad Tabba

The values of the trapping registers for protected VMs should be
computed from the ground up, and not depend on potentially preexisting
values. Moreover, non-protected VMs should not be restricted in
protected mode in the same manner as protected VMs.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 48 ++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 2c13ba0f2bf2..839506a546c7 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -168,34 +168,48 @@ static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
  */
 static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
 {
-	const u64 hcr_trap_feat_regs = HCR_TID3;
-	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
-
 	/*
 	 * Always trap:
 	 * - Feature id registers: to control features exposed to guests
 	 * - Implementation-defined features
 	 */
-	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS |
+			     HCR_TID3 | HCR_TACR | HCR_TIDCP | HCR_TID1;
+
+	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
+		/* route synchronous external abort exceptions to EL2 */
+		vcpu->arch.hcr_el2 |= HCR_TEA;
+		/* trap error record accesses */
+		vcpu->arch.hcr_el2 |= HCR_TERR;
+	}

-	/* Clear res0 and set res1 bits to trap potential new features. */
-	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
-	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
-	vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
-	vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+		vcpu->arch.hcr_el2 |= HCR_FWB;
+
+	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE))
+		vcpu->arch.hcr_el2 |= HCR_TID2;
 }
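/*
 * For reference (annotation, not part of the patch) - the intent of the
 * always-set bits above, per the Arm ARM:
 *   HCR_TID3:  trap group-3 ID registers, keeping the ID space emulated
 *   HCR_TACR:  trap ACTLR_EL1 accesses (implementation defined)
 *   HCR_TIDCP: trap implementation-defined system registers
 *   HCR_TID1:  trap group-1 ID registers (e.g. AIDR_EL1)
 */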
 /*
  * Initialize trap register values for protected VMs.
  */
-static void pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_init_traps(struct kvm_vcpu *shadow_vcpu, struct kvm_vcpu *host_vcpu)
 {
-	pvm_init_trap_regs(vcpu);
-	pvm_init_traps_aa64pfr0(vcpu);
-	pvm_init_traps_aa64pfr1(vcpu);
-	pvm_init_traps_aa64dfr0(vcpu);
-	pvm_init_traps_aa64mmfr0(vcpu);
-	pvm_init_traps_aa64mmfr1(vcpu);
+	shadow_vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
+	shadow_vcpu->arch.mdcr_el2 = 0;
+
+	if (!vcpu_is_protected(shadow_vcpu)) {
+		shadow_vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS |
+					    READ_ONCE(host_vcpu->arch.hcr_el2);
+		return;
+	}
+
+	pvm_init_trap_regs(shadow_vcpu);
+	pvm_init_traps_aa64pfr0(shadow_vcpu);
+	pvm_init_traps_aa64pfr1(shadow_vcpu);
+	pvm_init_traps_aa64dfr0(shadow_vcpu);
+	pvm_init_traps_aa64mmfr0(shadow_vcpu);
+	pvm_init_traps_aa64mmfr1(shadow_vcpu);
 }

 /*
@@ -364,7 +378,7 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,

 		shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu;

-		pkvm_vcpu_init_traps(shadow_vcpu);
+		pkvm_vcpu_init_traps(shadow_vcpu, host_vcpu);
 	}

 	return 0;

From patchwork Thu May 19 13:41:39 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 64/89] KVM: arm64: Advertise GICv3 sysreg interface to protected guests
Date: Thu, 19 May 2022 14:41:39 +0100
Message-Id: <20220519134204.5379-65-will@kernel.org>

Advertise the system register GICv3 CPU interface to protected guests,
as that is the only supported configuration under pKVM.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_pkvm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 6f13f62558dd..062ae2ffbdfb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -43,11 +43,13 @@ void kvm_shadow_destroy(struct kvm *kvm);
 /*
  * Allow for protected VMs:
  * - Floating-point and Advanced SIMD
+ * - GICv3(+) system register interface
  * - Data Independent Timing
  */
 #define PVM_ID_AA64PFR0_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64PFR0_FP) | \
 	ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_GIC) | \
 	ARM64_FEATURE_MASK(ID_AA64PFR0_DIT) \
 	)
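The effect on the allow-mask can be sanity-checked directly; with the GIC
field added, the mask now passes that field through (sketch, treating a
non-zero extraction as "advertised"):

	bool gic_allowed = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC),
				     PVM_ID_AA64PFR0_ALLOW);	/* now true */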
From patchwork Thu May 19 13:41:40 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 65/89] KVM: arm64: Force injection of a data abort on NISV MMIO exit
Date: Thu, 19 May 2022 14:41:40 +0100
Message-Id: <20220519134204.5379-66-will@kernel.org>

From: Marc Zyngier

If a vcpu exits for a data abort with an invalid syndrome, the
expectation is that userspace has a chance to save the day if it has
requested to see such exits. However, this is completely futile in the
case of a protected VM, as none of the state is available.

In this particular case, inject a data abort directly into the vcpu,
consistent with what userspace could do. This also helps with pKVM,
which discards all syndrome information when forwarding data aborts
that are not known to be MMIO.

Finally, hide the RETURN_NISV_IO_ABORT_TO_USER cap from userspace on
protected VMs, and document this tweak to the API.
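The documented asymmetry can be probed from userspace roughly like so
(sketch; kvm_fd is the /dev/kvm fd and vm_fd a protected VM's fd, both
assumed):

	int sys_wide = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_NISV_TO_USER); /* 1 */
	int this_vm = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_NISV_TO_USER);   /* 0 */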
Signed-off-by: Marc Zyngier
---
 Documentation/virt/kvm/api.rst |  7 +++++++
 arch/arm64/kvm/arm.c           | 14 ++++++++++----
 arch/arm64/kvm/mmio.c          |  9 +++++++++
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index d13fa6600467..207706260f67 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6115,6 +6115,13 @@ Note that KVM does not skip the faulting instruction as it does for
 KVM_EXIT_MMIO, but userspace has to emulate any change to the processing state
 if it decides to decode and emulate the instruction.

+This feature isn't available to protected VMs, as userspace does not
+have access to the state that is required to perform the emulation.
+Instead, a data abort exception is directly injected in the guest.
+Note that although KVM_CAP_ARM_NISV_TO_USER will be reported if
+queried outside of a protected VM context, the feature will not be
+exposed if queried on a protected VM file descriptor.
+
 ::

   /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 65af1757e73a..9c5a935a9a73 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -84,9 +84,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,

 	switch (cap->cap) {
 	case KVM_CAP_ARM_NISV_TO_USER:
-		r = 0;
-		set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
-			&kvm->arch.flags);
+		if (kvm_vm_is_protected(kvm)) {
+			r = -EINVAL;
+		} else {
+			r = 0;
+			set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
+				&kvm->arch.flags);
+		}
 		break;
 	case KVM_CAP_ARM_MTE:
 		mutex_lock(&kvm->lock);
@@ -217,13 +221,15 @@ static int kvm_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_IMMEDIATE_EXIT:
 	case KVM_CAP_VCPU_EVENTS:
 	case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
-	case KVM_CAP_ARM_NISV_TO_USER:
 	case KVM_CAP_ARM_INJECT_EXT_DABT:
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_VCPU_ATTRIBUTES:
 	case KVM_CAP_PTP_KVM:
 		r = 1;
 		break;
+	case KVM_CAP_ARM_NISV_TO_USER:
+		r = !kvm || !kvm_vm_is_protected(kvm);
+		break;
 	case KVM_CAP_SET_GUEST_DEBUG2:
 		return KVM_GUESTDBG_VALID_MASK;
 	case KVM_CAP_ARM_SET_DEVICE_ADDR:
diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index 3dd38a151d2a..db6630c70f8b 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -133,8 +133,17 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	/*
 	 * No valid syndrome? Ask userspace for help if it has
 	 * volunteered to do so, and bail out otherwise.
+	 *
+	 * In the protected VM case, there isn't much userspace can do
+	 * though, so directly deliver an exception to the guest.
 	 */
 	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
+		if (is_protected_kvm_enabled() &&
+		    kvm_vm_is_protected(vcpu->kvm)) {
+			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+			return 1;
+		}
+
 		if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
 			     &vcpu->kvm->arch.flags)) {
 			run->exit_reason = KVM_EXIT_ARM_NISV;
From patchwork Thu May 19 13:41:41 2022
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 66/89] KVM: arm64: Donate memory to protected guests
Date: Thu, 19 May 2022 14:41:41 +0100
Message-Id: <20220519134204.5379-67-will@kernel.org>

From: Marc Zyngier

Instead of sharing memory with protected guests, which still leaves
the host with r/w access, donate the underlying pages so that they
are unmapped from the host stage-2.

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index c1939dd2294f..e987f34641dd 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -465,7 +465,10 @@ static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt)
 	if (ret)
 		goto out;

-	ret = __pkvm_host_share_guest(pfn, gfn, shadow_vcpu);
+	if (shadow_state_is_protected(shadow_state))
+		ret = __pkvm_host_donate_guest(pfn, gfn, shadow_vcpu);
+	else
+		ret = __pkvm_host_share_guest(pfn, gfn, shadow_vcpu);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
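Conceptually, the difference between the two paths' effects on the host
stage-2 could be summarised as follows (informal sketch; state names
illustrative, not the in-tree ones):

	/* __pkvm_host_share_guest:  HOST_OWNED -> SHARED (host keeps r/w)
	 * __pkvm_host_donate_guest: HOST_OWNED -> GUEST_OWNED
	 *                           (page unmapped from host stage-2)
	 */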
From patchwork Thu May 19 13:41:42 2022
From: Will Deacon
Subject: [PATCH 67/89] KVM: arm64: Add EL2 entry/exit handlers for pKVM guests
Date: Thu, 19 May 2022 14:41:42 +0100
Message-Id: <20220519134204.5379-68-will@kernel.org>

From: Fuad Tabba

Introduce separate EL2 entry/exit handlers for protected and
non-protected guests under pKVM and hook up the protected handlers to
expose the minimum amount of data to the host required for EL1 handling.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 230 ++++++++++++++++++++++++++++-
 1 file changed, 228 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e987f34641dd..692576497ed9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -20,6 +20,8 @@
 #include
 
+#include "../../sys_regs.h"
+
 /*
  * Host FPSIMD state. Written to when the guest accesses its own FPSIMD state,
  * and read when the guest state is live and we need to switch back to the host.
@@ -34,6 +36,207 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
 typedef void (*shadow_entry_exit_handler_fn)(struct kvm_vcpu *, struct kvm_vcpu *);
 
+static void handle_pvm_entry_wfx(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	shadow_vcpu->arch.flags |= READ_ONCE(host_vcpu->arch.flags) &
+				   KVM_ARM64_INCREMENT_PC;
+}
+
+static void handle_pvm_entry_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	unsigned long host_flags;
+
+	host_flags = READ_ONCE(host_vcpu->arch.flags);
+
+	/* Exceptions have priority over anything else */
+	if (host_flags & KVM_ARM64_PENDING_EXCEPTION) {
+		/* Exceptions caused by this should be undef exceptions. */
+		u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+		__vcpu_sys_reg(shadow_vcpu, ESR_EL1) = esr;
+		shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+					     KVM_ARM64_EXCEPT_MASK);
+		shadow_vcpu->arch.flags |= (KVM_ARM64_PENDING_EXCEPTION |
+					    KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+					    KVM_ARM64_EXCEPT_AA64_EL1);
+
+		return;
+	}
+
+	if (host_flags & KVM_ARM64_INCREMENT_PC) {
+		shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+					     KVM_ARM64_EXCEPT_MASK);
+		shadow_vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	}
+
+	if (!esr_sys64_to_params(shadow_vcpu->arch.fault.esr_el2).is_write) {
+		/* r0 as transfer register between the guest and the host. */
+		u64 rt_val = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]);
+		int rt = kvm_vcpu_sys_get_rt(shadow_vcpu);
+
+		vcpu_set_reg(shadow_vcpu, rt, rt_val);
+	}
+}
+
+static void handle_pvm_entry_iabt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	unsigned long cpsr = *vcpu_cpsr(shadow_vcpu);
+	unsigned long host_flags;
+	u32 esr = ESR_ELx_IL;
+
+	host_flags = READ_ONCE(host_vcpu->arch.flags);
+
+	if (!(host_flags & KVM_ARM64_PENDING_EXCEPTION))
+		return;
+
+	/*
+	 * If the host wants to inject an exception, get the syndrome and
+	 * fault address.
+	 */
+	if ((cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
+		esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
+	else
+		esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
+
+	esr |= ESR_ELx_FSC_EXTABT;
+
+	__vcpu_sys_reg(shadow_vcpu, ESR_EL1) = esr;
+	__vcpu_sys_reg(shadow_vcpu, FAR_EL1) = kvm_vcpu_get_hfar(shadow_vcpu);
+
+	/* Tell the run loop that we want to inject something */
+	shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+				     KVM_ARM64_EXCEPT_MASK);
+	shadow_vcpu->arch.flags |= (KVM_ARM64_PENDING_EXCEPTION |
+				    KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+				    KVM_ARM64_EXCEPT_AA64_EL1);
+}
+
+static void handle_pvm_entry_dabt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	unsigned long host_flags;
+	bool rd_update;
+
+	host_flags = READ_ONCE(host_vcpu->arch.flags);
+
+	/* Exceptions have priority over anything else */
+	if (host_flags & KVM_ARM64_PENDING_EXCEPTION) {
+		unsigned long cpsr = *vcpu_cpsr(shadow_vcpu);
+		u32 esr = ESR_ELx_IL;
+
+		if ((cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
+			esr |= (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT);
+		else
+			esr |= (ESR_ELx_EC_DABT_CUR << ESR_ELx_EC_SHIFT);
+
+		esr |= ESR_ELx_FSC_EXTABT;
+
+		__vcpu_sys_reg(shadow_vcpu, ESR_EL1) = esr;
+		__vcpu_sys_reg(shadow_vcpu, FAR_EL1) = kvm_vcpu_get_hfar(shadow_vcpu);
+		/* Tell the run loop that we want to inject something */
+		shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+					     KVM_ARM64_EXCEPT_MASK);
+		shadow_vcpu->arch.flags |= (KVM_ARM64_PENDING_EXCEPTION |
+					    KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+					    KVM_ARM64_EXCEPT_AA64_EL1);
+
+		/* Cancel potential in-flight MMIO */
+		shadow_vcpu->mmio_needed = false;
+		return;
+	}
+
+	/* Handle PC increment on MMIO */
+	if ((host_flags & KVM_ARM64_INCREMENT_PC) && shadow_vcpu->mmio_needed) {
+		shadow_vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+					     KVM_ARM64_EXCEPT_MASK);
+		shadow_vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	}
+
+	/* If we were doing an MMIO read access, update the register */
+	rd_update = (shadow_vcpu->mmio_needed &&
+		     (host_flags & KVM_ARM64_INCREMENT_PC));
+	rd_update &= !kvm_vcpu_dabt_iswrite(shadow_vcpu);
+
+	if (rd_update) {
+		/* r0 as transfer register between the guest and the host. */
+		u64 rd_val = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]);
+		int rd = kvm_vcpu_dabt_get_rd(shadow_vcpu);
+
+		vcpu_set_reg(shadow_vcpu, rd, rd_val);
+	}
+
+	shadow_vcpu->mmio_needed = false;
+}
+
+static void handle_pvm_exit_wfx(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	WRITE_ONCE(host_vcpu->arch.ctxt.regs.pstate,
+		   shadow_vcpu->arch.ctxt.regs.pstate & PSR_MODE_MASK);
+	WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+		   shadow_vcpu->arch.fault.esr_el2);
+}
+
+static void handle_pvm_exit_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	u32 esr_el2 = shadow_vcpu->arch.fault.esr_el2;
+
+	/* r0 as transfer register between the guest and the host. */
+	WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+		   esr_el2 & ~ESR_ELx_SYS64_ISS_RT_MASK);
+
+	/* The mode is required for the host to emulate some sysregs */
+	WRITE_ONCE(host_vcpu->arch.ctxt.regs.pstate,
+		   shadow_vcpu->arch.ctxt.regs.pstate & PSR_MODE_MASK);
+
+	if (esr_sys64_to_params(esr_el2).is_write) {
+		int rt = kvm_vcpu_sys_get_rt(shadow_vcpu);
+		u64 rt_val = vcpu_get_reg(shadow_vcpu, rt);
+
+		WRITE_ONCE(host_vcpu->arch.ctxt.regs.regs[0], rt_val);
+	}
+}
+
+static void handle_pvm_exit_iabt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+		   shadow_vcpu->arch.fault.esr_el2);
+	WRITE_ONCE(host_vcpu->arch.fault.hpfar_el2,
+		   shadow_vcpu->arch.fault.hpfar_el2);
+}
+
+static void handle_pvm_exit_dabt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	/*
+	 * For now, we treat all data aborts as MMIO since we have no knowledge
+	 * of the memslot configuration at EL2.
+	 */
+	shadow_vcpu->mmio_needed = true;
+
+	if (shadow_vcpu->mmio_needed) {
+		/* r0 as transfer register between the guest and the host. */
+		WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+			   shadow_vcpu->arch.fault.esr_el2 & ~ESR_ELx_SRT_MASK);
+
+		if (kvm_vcpu_dabt_iswrite(shadow_vcpu)) {
+			int rt = kvm_vcpu_dabt_get_rd(shadow_vcpu);
+			u64 rt_val = vcpu_get_reg(shadow_vcpu, rt);
+
+			WRITE_ONCE(host_vcpu->arch.ctxt.regs.regs[0], rt_val);
+		}
+	} else {
+		WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+			   shadow_vcpu->arch.fault.esr_el2 & ~ESR_ELx_ISV);
+	}
+
+	WRITE_ONCE(host_vcpu->arch.ctxt.regs.pstate,
+		   shadow_vcpu->arch.ctxt.regs.pstate & PSR_MODE_MASK);
+	WRITE_ONCE(host_vcpu->arch.fault.far_el2,
+		   shadow_vcpu->arch.fault.far_el2 & GENMASK(11, 0));
+	WRITE_ONCE(host_vcpu->arch.fault.hpfar_el2,
+		   shadow_vcpu->arch.fault.hpfar_el2);
+	WRITE_ONCE(__vcpu_sys_reg(host_vcpu, SCTLR_EL1),
+		   __vcpu_sys_reg(shadow_vcpu, SCTLR_EL1) & (SCTLR_ELx_EE | SCTLR_EL1_E0E));
+}
+
 static void handle_vm_entry_generic(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 {
 	unsigned long host_flags = READ_ONCE(host_vcpu->arch.flags);
@@ -67,6 +270,22 @@ static void handle_vm_exit_abt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 		   shadow_vcpu->arch.fault.disr_el1);
 }
 
+static const shadow_entry_exit_handler_fn entry_pvm_shadow_handlers[] = {
+	[0 ... ESR_ELx_EC_MAX]	= NULL,
+	[ESR_ELx_EC_WFx]	= handle_pvm_entry_wfx,
+	[ESR_ELx_EC_SYS64]	= handle_pvm_entry_sys64,
+	[ESR_ELx_EC_IABT_LOW]	= handle_pvm_entry_iabt,
+	[ESR_ELx_EC_DABT_LOW]	= handle_pvm_entry_dabt,
+};
+
+static const shadow_entry_exit_handler_fn exit_pvm_shadow_handlers[] = {
+	[0 ... ESR_ELx_EC_MAX]	= NULL,
+	[ESR_ELx_EC_WFx]	= handle_pvm_exit_wfx,
+	[ESR_ELx_EC_SYS64]	= handle_pvm_exit_sys64,
+	[ESR_ELx_EC_IABT_LOW]	= handle_pvm_exit_iabt,
+	[ESR_ELx_EC_DABT_LOW]	= handle_pvm_exit_dabt,
+};
+
 static const shadow_entry_exit_handler_fn entry_vm_shadow_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= handle_vm_entry_generic,
 };
@@ -219,9 +438,13 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 		break;
 	case ARM_EXCEPTION_TRAP:
 		esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(shadow_vcpu));
-		ec_handler = entry_vm_shadow_handlers[esr_ec];
+		if (shadow_state_is_protected(shadow_state))
+			ec_handler = entry_pvm_shadow_handlers[esr_ec];
+		else
+			ec_handler = entry_vm_shadow_handlers[esr_ec];
 		if (ec_handler)
 			ec_handler(host_vcpu, shadow_vcpu);
+
 		break;
 	default:
 		BUG();
@@ -251,7 +474,10 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state,
 		break;
 	case ARM_EXCEPTION_TRAP:
 		esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(shadow_vcpu));
-		ec_handler = exit_vm_shadow_handlers[esr_ec];
+		if (shadow_state_is_protected(shadow_state))
+			ec_handler = exit_pvm_shadow_handlers[esr_ec];
+		else
+			ec_handler = exit_vm_shadow_handlers[esr_ec];
 		if (ec_handler)
 			ec_handler(host_vcpu, shadow_vcpu);
 		break;
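The entry/exit tables above rely on C99 designated initialisers indexed by
exception class (EC); a self-contained illustration of the dispatch pattern
(toy handler, EC value as defined by the architecture):

    /* EC-indexed dispatch, as used by the pvm shadow handler tables. */
    typedef void (*handler_fn)(void);

    #define EC_MAX 0x3f                     /* ESR_ELx.EC is a 6-bit field */

    static void wfx_handler(void) { /* ... */ }

    static const handler_fn handlers[EC_MAX + 1] = {
        [0 ... EC_MAX] = NULL,              /* GNU range initialiser */
        [0x01]         = wfx_handler,       /* ESR_ELx_EC_WFx */
    };

    static void dispatch(unsigned int ec)
    {
        handler_fn fn = handlers[ec & EC_MAX];

        if (fn)
            fn();                           /* NULL means: nothing to fix up */
    }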
From patchwork Thu May 19 13:41:43 2022
From: Will Deacon
Subject: [PATCH 68/89] KVM: arm64: Move vgic state between host and shadow vcpu structures
Date: Thu, 19 May 2022 14:41:43 +0100
Message-Id: <20220519134204.5379-69-will@kernel.org>

From: Marc Zyngier

Since the world switch vgic code operates on the shadow data structure,
move the state back and forth between the host and shadow vcpu.

This is currently limited to the VMCR and APR registers, but further
patches will deal with the rest of the state.

Note that some of the control settings (such as SRE) are always set to
the same value. This will eventually be moved to the shadow
initialisation.
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 65 ++++++++++++++++++++++++++++--
 1 file changed, 61 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 692576497ed9..5d6cee7436f4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -619,6 +619,17 @@ static struct kvm_vcpu *__get_current_vcpu(struct kvm_vcpu *vcpu,
 		__get_current_vcpu(__vcpu, statepp);			\
 	})
 
+#define get_current_vcpu_from_cpu_if(ctxt, regnr, statepp)		\
+	({								\
+		DECLARE_REG(struct vgic_v3_cpu_if *, cif, ctxt, regnr);	\
+		struct kvm_vcpu *__vcpu;				\
+		__vcpu = container_of(cif,				\
+				      struct kvm_vcpu,			\
+				      arch.vgic_cpu.vgic_v3);		\
+									\
+		__get_current_vcpu(__vcpu, statepp);			\
+	})
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_shadow_vcpu_state *shadow_state;
@@ -778,16 +789,62 @@ static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
 
 static void handle___vgic_v3_save_vmcr_aprs(struct kvm_cpu_context *host_ctxt)
 {
-	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+	struct kvm_shadow_vcpu_state *shadow_state;
+	struct kvm_vcpu *vcpu;
+
+	vcpu = get_current_vcpu_from_cpu_if(host_ctxt, 1, &shadow_state);
+	if (!vcpu)
+		return;
+
+	if (shadow_state) {
+		struct vgic_v3_cpu_if *shadow_cpu_if, *cpu_if;
+		int i;
+
+		shadow_cpu_if = &shadow_state->shadow_vcpu.arch.vgic_cpu.vgic_v3;
+		__vgic_v3_save_vmcr_aprs(shadow_cpu_if);
+
+		cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	__vgic_v3_save_vmcr_aprs(kern_hyp_va(cpu_if));
+		cpu_if->vgic_vmcr = shadow_cpu_if->vgic_vmcr;
+		for (i = 0; i < ARRAY_SIZE(cpu_if->vgic_ap0r); i++) {
+			cpu_if->vgic_ap0r[i] = shadow_cpu_if->vgic_ap0r[i];
+			cpu_if->vgic_ap1r[i] = shadow_cpu_if->vgic_ap1r[i];
+		}
+	} else {
+		__vgic_v3_save_vmcr_aprs(&vcpu->arch.vgic_cpu.vgic_v3);
+	}
 }
 
 static void handle___vgic_v3_restore_vmcr_aprs(struct kvm_cpu_context *host_ctxt)
 {
-	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+	struct kvm_shadow_vcpu_state *shadow_state;
+	struct kvm_vcpu *vcpu;
 
-	__vgic_v3_restore_vmcr_aprs(kern_hyp_va(cpu_if));
+	vcpu = get_current_vcpu_from_cpu_if(host_ctxt, 1, &shadow_state);
+	if (!vcpu)
+		return;
+
+	if (shadow_state) {
+		struct vgic_v3_cpu_if *shadow_cpu_if, *cpu_if;
+		int i;
+
+		shadow_cpu_if = &shadow_state->shadow_vcpu.arch.vgic_cpu.vgic_v3;
+		cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+
+		shadow_cpu_if->vgic_vmcr = cpu_if->vgic_vmcr;
+		/* Should be a one-off */
+		shadow_cpu_if->vgic_sre = (ICC_SRE_EL1_DIB |
+					   ICC_SRE_EL1_DFB |
+					   ICC_SRE_EL1_SRE);
+		for (i = 0; i < ARRAY_SIZE(cpu_if->vgic_ap0r); i++) {
+			shadow_cpu_if->vgic_ap0r[i] = cpu_if->vgic_ap0r[i];
+			shadow_cpu_if->vgic_ap1r[i] = cpu_if->vgic_ap1r[i];
+		}
+
+		__vgic_v3_restore_vmcr_aprs(shadow_cpu_if);
+	} else {
+		__vgic_v3_restore_vmcr_aprs(&vcpu->arch.vgic_cpu.vgic_v3);
+	}
 }
 
 static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt)
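The get_current_vcpu_from_cpu_if() macro above leans on the vgic cpu
interface being embedded in the vcpu structure, so container_of() can walk
back to the enclosing object. A standalone illustration with toy types:

    #include <stddef.h>

    struct cpu_if { unsigned long vmcr; };
    struct vcpu   { int id; struct cpu_if vgic_v3; };

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Recover the vcpu from a pointer to its embedded cpu interface. */
    static struct vcpu *vcpu_of(struct cpu_if *cif)
    {
        return container_of(cif, struct vcpu, vgic_v3);
    }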
From patchwork Thu May 19 13:41:44 2022
From: Will Deacon
Subject: [PATCH 69/89] KVM: arm64: Do not update virtual timer state for protected VMs
Date: Thu, 19 May 2022 14:41:44 +0100
Message-Id: <20220519134204.5379-70-will@kernel.org>

From: Marc Zyngier

Protected vCPUs always run with a virtual counter offset of 0, so don't
bother trying to update it from the host.
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/arch_timer.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 6e542e2eae32..63d06f372eb1 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -88,7 +88,9 @@ static u64 timer_get_offset(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTVOFF_EL2);
+		if (likely(!kvm_vm_is_protected(vcpu->kvm)))
+			return __vcpu_sys_reg(vcpu, CNTVOFF_EL2);
+		fallthrough;
 	default:
 		return 0;
 	}
@@ -753,6 +755,9 @@ static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_vcpu *tmp;
 
+	if (unlikely(kvm_vm_is_protected(vcpu->kvm)))
+		cntvoff = 0;
+
 	mutex_lock(&kvm->lock);
 	kvm_for_each_vcpu(i, tmp, kvm)
 		timer_set_offset(vcpu_vtimer(tmp), cntvoff);
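Why a hardwired zero offset is coherent: architecturally the virtual count
is the physical count minus CNTVOFF_EL2, so forcing a zero offset simply
makes the protected guest see the physical counter. An illustration of the
relation only (not code from the patch):

    /* CNTVCT = CNTPCT - CNTVOFF; with CNTVOFF == 0 the guest sees CNTPCT. */
    static unsigned long long virt_count(unsigned long long cntpct,
                                         unsigned long long cntvoff)
    {
        return cntpct - cntvoff;
    }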
From patchwork Thu May 19 13:41:45 2022
From: Will Deacon
Subject: [PATCH 70/89] KVM: arm64: Refactor kvm_vcpu_enable_ptrauth() for hyp use
Date: Thu, 19 May 2022 14:41:45 +0100
Message-Id: <20220519134204.5379-71-will@kernel.org>

From: Fuad Tabba

Move kvm_vcpu_enable_ptrauth() to a shared header to be used by hyp in
protected mode.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 16 ++++++++++++++++
 arch/arm64/kvm/reset.c               | 16 ----------------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d62405ce3e6d..bb56aff4de95 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -43,6 +43,22 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
 void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 
+static inline int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * For now make sure that both address/generic pointer authentication
+	 * features are requested by the userspace together and the system
+	 * supports these capabilities.
+	 */
+	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+	    !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) ||
+	    !system_has_full_ptr_auth())
+		return -EINVAL;
+
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	return 0;
+}
+
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
 	return !(vcpu->arch.hcr_el2 & HCR_RW);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index c07265ea72fd..cc25f540962b 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -165,22 +165,6 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
 		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
 }
 
-static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * For now make sure that both address/generic pointer authentication
-	 * features are requested by the userspace together and the system
-	 * supports these capabilities.
-	 */
-	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
-	    !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) ||
-	    !system_has_full_ptr_auth())
-		return -EINVAL;
-
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
-	return 0;
-}
-
 static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu *tmp;
From patchwork Thu May 19 13:41:46 2022
From: Will Deacon
Subject: [PATCH 71/89] KVM: arm64: Initialize shadow vm state at hyp
Date: Thu, 19 May 2022 14:41:46 +0100
Message-Id: <20220519134204.5379-72-will@kernel.org>

From: Fuad Tabba

Do not rely on the state of the VM as provided by the host, but
initialize it instead at EL2 to a known good and safe state.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 71 ++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 839506a546c7..51da5c1d7e0d 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -6,6 +6,9 @@
 
 #include
 #include
+
+#include
+
 #include
 #include
 #include
@@ -315,6 +318,53 @@ struct kvm_shadow_vcpu_state *pkvm_loaded_shadow_vcpu_state(void)
 	return __this_cpu_read(loaded_shadow_state);
 }
 
+/* Check and copy the supported features for the vcpu from the host. */
+static void copy_features(struct kvm_vcpu *shadow_vcpu, struct kvm_vcpu *host_vcpu)
+{
+	DECLARE_BITMAP(allowed_features, KVM_VCPU_MAX_FEATURES);
+
+	/* No restrictions for non-protected VMs. */
+	if (!kvm_vm_is_protected(shadow_vcpu->kvm)) {
+		bitmap_copy(shadow_vcpu->arch.features,
+			    host_vcpu->arch.features,
+			    KVM_VCPU_MAX_FEATURES);
+		return;
+	}
+
+	bitmap_zero(allowed_features, KVM_VCPU_MAX_FEATURES);
+
+	/*
+	 * For protected VMs, always allow:
+	 * - CPU starting in poweroff state
+	 * - PSCI v0.2
+	 */
+	set_bit(KVM_ARM_VCPU_POWER_OFF, allowed_features);
+	set_bit(KVM_ARM_VCPU_PSCI_0_2, allowed_features);
+
+	/*
+	 * Check if remaining features are allowed:
+	 * - Performance Monitoring
+	 * - Scalable Vectors
+	 * - Pointer Authentication
+	 */
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), PVM_ID_AA64DFR0_ALLOW))
+		set_bit(KVM_ARM_VCPU_PMU_V3, allowed_features);
+
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_SVE), PVM_ID_AA64PFR0_ALLOW))
+		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
+
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_API), PVM_ID_AA64ISAR1_ALLOW) &&
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA), PVM_ID_AA64ISAR1_ALLOW))
+		set_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, allowed_features);
+
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA), PVM_ID_AA64ISAR1_ALLOW))
+		set_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, allowed_features);
+
+	bitmap_and(shadow_vcpu->arch.features, host_vcpu->arch.features,
+		   allowed_features, KVM_VCPU_MAX_FEATURES);
+}
+
 static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 			     unsigned int nr_vcpus)
 {
@@ -350,6 +400,17 @@ static int set_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 	return 0;
 }
 
+static int init_ptrauth(struct kvm_vcpu *shadow_vcpu)
+{
+	int ret = 0;
+
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, shadow_vcpu->arch.features) ||
+	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, shadow_vcpu->arch.features))
+		ret = kvm_vcpu_enable_ptrauth(shadow_vcpu);
+
+	return ret;
+}
+
 static int
 init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
		    struct kvm_vcpu **vcpu_array,
		    int *last_ran,
@@ -357,10 +418,12 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		    unsigned int nr_vcpus)
 {
 	int i;
+	int ret;
 
 	vm->host_kvm = kvm;
 	vm->kvm.created_vcpus = nr_vcpus;
 	vm->kvm.arch.vtcr = host_kvm.arch.vtcr;
+	vm->kvm.arch.pkvm.enabled = READ_ONCE(kvm->arch.pkvm.enabled);
 	vm->kvm.arch.mmu.last_vcpu_ran = last_ran;
 	vm->last_ran_size = last_ran_size;
 	memset(vm->kvm.arch.mmu.last_vcpu_ran, -1, sizeof(int) * hyp_nr_cpus);
@@ -377,8 +440,16 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		shadow_vcpu->vcpu_idx = i;
 
 		shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu;
+		shadow_vcpu->arch.power_off = true;
+
+		copy_features(shadow_vcpu, host_vcpu);
+
+		ret = init_ptrauth(shadow_vcpu);
+		if (ret)
+			return ret;
 
 		pkvm_vcpu_init_traps(shadow_vcpu, host_vcpu);
+		kvm_reset_pvm_sys_regs(shadow_vcpu);
 	}
 
 	return 0;
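At its core, copy_features() computes a bitmap intersection between what the
host requested and what the hypervisor allows for a protected VM. A
simplified, self-contained sketch of the same pattern using a single 64-bit
mask (the feature bits here are hypothetical stand-ins):

    #include <stdint.h>

    #define FEAT_POWER_OFF  (1ULL << 0)     /* always allowed */
    #define FEAT_PSCI_0_2   (1ULL << 1)     /* always allowed */
    #define FEAT_SVE        (1ULL << 4)     /* allowed if policy permits */

    static uint64_t filter_features(uint64_t requested, int vm_protected,
                                    int sve_allowed_by_policy)
    {
        uint64_t allowed = FEAT_POWER_OFF | FEAT_PSCI_0_2;

        if (!vm_protected)
            return requested;               /* no restrictions */

        if (sve_allowed_by_policy)
            allowed |= FEAT_SVE;

        return requested & allowed;         /* the bitmap_and() step */
    }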
From patchwork Thu May 19 13:41:47 2022
From: Will Deacon
Subject: [PATCH 72/89] KVM: arm64: Track the SVE state in the shadow vcpu
Date: Thu, 19 May 2022 14:41:47 +0100
Message-Id: <20220519134204.5379-73-will@kernel.org>

From: Marc Zyngier

When dealing with a guest with SVE enabled, make sure the host SVE
state is pinned at EL2 S1, and that the shadow state is correctly
initialised (and then unpinned on teardown).

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  9 ++++----
 arch/arm64/kvm/hyp/nvhe/pkvm.c     | 33 ++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5d6cee7436f4..1e39dc7eab4d 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -416,8 +416,7 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 	if (host_flags & KVM_ARM64_PKVM_STATE_DIRTY)
 		__flush_vcpu_state(shadow_state);
 
-	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
-	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+	shadow_vcpu->arch.flags = host_flags;
 
 	shadow_vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS & ~(HCR_RW | HCR_TWI | HCR_TWE);
 	shadow_vcpu->arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2);
@@ -488,8 +487,10 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state,
 		BUG();
 	}
 
-	host_flags = READ_ONCE(host_vcpu->arch.flags) &
-		~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC);
+	host_flags = shadow_vcpu->arch.flags;
+	if (shadow_state_is_protected(shadow_state))
+		host_flags &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC);
+
 	WRITE_ONCE(host_vcpu->arch.flags, host_flags);
 	shadow_state->exit_code = exit_reason;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 51da5c1d7e0d..9feeb0b5433a 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -372,7 +372,19 @@ static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 
 	for (i = 0; i < nr_vcpus; i++) {
 		struct kvm_vcpu *host_vcpu = shadow_vcpu_states[i].host_vcpu;
+		struct kvm_vcpu *shadow_vcpu = &shadow_vcpu_states[i].shadow_vcpu;
+		size_t sve_state_size;
+		void *sve_state;
+
 		hyp_unpin_shared_mem(host_vcpu, host_vcpu + 1);
+
+		if (!test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features))
+			continue;
+
+		sve_state = shadow_vcpu->arch.sve_state;
+		sve_state = kern_hyp_va(sve_state);
+		sve_state_size = vcpu_sve_state_size(shadow_vcpu);
+		hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size);
 	}
 }
 
@@ -448,6 +460,27 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		if (ret)
 			return ret;
 
+		if (test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features)) {
+			size_t sve_state_size;
+			void *sve_state;
+
+			shadow_vcpu->arch.sve_state = READ_ONCE(host_vcpu->arch.sve_state);
+			shadow_vcpu->arch.sve_max_vl = READ_ONCE(host_vcpu->arch.sve_max_vl);
+
+			sve_state = kern_hyp_va(shadow_vcpu->arch.sve_state);
+			sve_state_size = vcpu_sve_state_size(shadow_vcpu);
+
+			if (!shadow_vcpu->arch.sve_state || !sve_state_size ||
+			    hyp_pin_shared_mem(sve_state,
+					       sve_state + sve_state_size)) {
+				clear_bit(KVM_ARM_VCPU_SVE,
+					  shadow_vcpu->arch.features);
+				shadow_vcpu->arch.sve_state = NULL;
+				shadow_vcpu->arch.sve_max_vl = 0;
+				return -EINVAL;
+			}
+		}
+
 		pkvm_vcpu_init_traps(shadow_vcpu, host_vcpu);
 		kvm_reset_pvm_sys_regs(shadow_vcpu);
 	}
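The pattern to note is the pin/unpin pairing: any host memory hyp may
dereference across a host exit has to be pinned first and released
symmetrically on teardown. A schematic of that lifecycle (the
hyp_{pin,unpin}_shared_mem helpers are the ones used by the diff;
declarations are repeated here only so the sketch stands alone):

    int hyp_pin_shared_mem(void *from, void *to);
    void hyp_unpin_shared_mem(void *from, void *to);

    /* Pin at shadow-vcpu init; reject a bogus or unshared buffer. */
    static int shadow_sve_init(void *sve_state, unsigned long size)
    {
        if (!sve_state || !size)
            return -1;          /* -EINVAL in the real code */
        return hyp_pin_shared_mem(sve_state, (char *)sve_state + size);
    }

    /* Unpin at teardown, releasing the host pages. */
    static void shadow_sve_teardown(void *sve_state, unsigned long size)
    {
        hyp_unpin_shared_mem(sve_state, (char *)sve_state + size);
    }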
From patchwork Thu May 19 13:41:48 2022
From: Will Deacon
Subject: [PATCH 73/89] KVM: arm64: Add HVC handling for protected guests at EL2
Date: Thu, 19 May 2022 14:41:48 +0100
Message-Id: <20220519134204.5379-74-will@kernel.org>

From: Fuad Tabba

Rather than forwarding guest hypercalls back to the host for handling,
implement some basic handling at EL2 which will later be extended to
provide additional functionality such as PSCI.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  2 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 22 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c       |  1 +
 4 files changed, 49 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index e772f9835a86..33d34cc639ea 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -101,4 +101,6 @@ bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
 void kvm_reset_pvm_sys_regs(struct kvm_vcpu *vcpu);
 int kvm_check_pvm_sysreg_table(void);
 
+bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 1e39dc7eab4d..26c8709f5494 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -4,6 +4,8 @@
  * Author: Andrew Scull
 */
 
+#include
+
 #include
 #include
 
@@ -42,6 +44,13 @@ static void handle_pvm_entry_wfx(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 				   KVM_ARM64_INCREMENT_PC;
 }
 
+static void handle_pvm_entry_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	u64 ret = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]);
+
+	vcpu_set_reg(shadow_vcpu, 0, ret);
+}
+
 static void handle_pvm_entry_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 {
 	unsigned long host_flags;
@@ -195,6 +204,19 @@ static void handle_pvm_exit_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 	}
 }
 
+static void handle_pvm_exit_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
+{
+	int i;
+
+	WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
+		   shadow_vcpu->arch.fault.esr_el2);
+
+	/* Pass the hvc function id (r0) as well as any potential arguments. */
+	for (i = 0; i < 8; i++)
+		WRITE_ONCE(host_vcpu->arch.ctxt.regs.regs[i],
+			   vcpu_get_reg(shadow_vcpu, i));
+}
+
 static void handle_pvm_exit_iabt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 {
 	WRITE_ONCE(host_vcpu->arch.fault.esr_el2,
@@ -273,6 +295,7 @@ static void handle_vm_exit_abt(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 static const shadow_entry_exit_handler_fn entry_pvm_shadow_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= NULL,
 	[ESR_ELx_EC_WFx]	= handle_pvm_entry_wfx,
+	[ESR_ELx_EC_HVC64]	= handle_pvm_entry_hvc64,
 	[ESR_ELx_EC_SYS64]	= handle_pvm_entry_sys64,
 	[ESR_ELx_EC_IABT_LOW]	= handle_pvm_entry_iabt,
 	[ESR_ELx_EC_DABT_LOW]	= handle_pvm_entry_dabt,
@@ -281,6 +304,7 @@ static const shadow_entry_exit_handler_fn entry_pvm_shadow_handlers[] = {
 static const shadow_entry_exit_handler_fn exit_pvm_shadow_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= NULL,
 	[ESR_ELx_EC_WFx]	= handle_pvm_exit_wfx,
+	[ESR_ELx_EC_HVC64]	= handle_pvm_exit_hvc64,
 	[ESR_ELx_EC_SYS64]	= handle_pvm_exit_sys64,
 	[ESR_ELx_EC_IABT_LOW]	= handle_pvm_exit_iabt,
 	[ESR_ELx_EC_DABT_LOW]	= handle_pvm_exit_dabt,
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 9feeb0b5433a..92e60ebeced5 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -7,6 +7,8 @@
 #include
 #include
 
+#include
+
 #include
 #include
 
@@ -797,3 +799,23 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle)
 	hyp_spin_unlock(&shadow_lock);
 	return err;
 }
+
+/*
+ * Handler for protected VM HVC calls.
+ *
+ * Returns true if the hypervisor has handled the exit, and control should go
+ * back to the guest, or false if it hasn't.
+ */
+bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	u32 fn = smccc_get_function(vcpu);
+
+	switch (fn) {
+	case ARM_SMCCC_VERSION_FUNC_ID:
+		/* Nothing to be handled by the host. Go back to the guest. */
+		smccc_set_retval(vcpu, ARM_SMCCC_VERSION_1_1, 0, 0, 0);
+		return true;
+	default:
+		return false;
+	}
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6bb979ee51cc..87338775288c 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -205,6 +205,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 
 static const exit_handler_fn pvm_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= NULL,
+	[ESR_ELx_EC_HVC64]	= kvm_handle_pvm_hvc64,
 	[ESR_ELx_EC_SYS64]	= kvm_handle_pvm_sys64,
 	[ESR_ELx_EC_SVE]	= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_FP_ASIMD]	= kvm_handle_pvm_fpsimd,
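For reference, the handler above answers the standard SMCCC probe. A minimal
sketch of the guest side of that exchange (inline asm, assuming the SMCCC
1.1 register convention; not code from the series):

    /* Issue SMCCC_VERSION (function id 0x80000000) via HVC. */
    static inline long smccc_version_hvc(void)
    {
        register unsigned long x0 asm("x0") = 0x80000000UL;

        asm volatile("hvc #0" : "+r"(x0) : : "x1", "x2", "x3");
        return (long)x0;        /* 0x10001 denotes SMCCC v1.1 */
    }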
From patchwork Thu May 19 13:41:49 2022
From: Will Deacon
Subject: [PATCH 74/89] KVM: arm64: Move pstate reset values to kvm_arm.h
Date: Thu, 19 May 2022 14:41:49 +0100
Message-Id: <20220519134204.5379-75-will@kernel.org>

From: Fuad Tabba

Move the macro defines of the pstate reset values to a shared header to
be used by hyp in protected mode.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_arm.h | 9 +++++++++
 arch/arm64/kvm/reset.c           | 9 ---------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 98b60fa86853..056cda220bff 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -359,4 +359,13 @@
 #define CPACR_EL1_DEFAULT	(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |\
 				 CPACR_EL1_ZEN_EL1EN)
 
+/*
+ * ARMv8 Reset Values
+ */
+#define VCPU_RESET_PSTATE_EL1	(PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | \
+				 PSR_F_BIT | PSR_D_BIT)
+
+#define VCPU_RESET_PSTATE_SVC	(PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
+				 PSR_AA32_I_BIT | PSR_AA32_F_BIT)
+
 #endif /* __ARM64_KVM_ARM_H__ */
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cc25f540962b..6bc979aece3c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -32,15 +32,6 @@
 /* Maximum phys_shift supported for any VM on this host */
 static u32 kvm_ipa_limit;
 
-/*
- * ARMv8 Reset Values
- */
-#define VCPU_RESET_PSTATE_EL1	(PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | \
-				 PSR_F_BIT | PSR_D_BIT)
-
-#define VCPU_RESET_PSTATE_SVC	(PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
-				 PSR_AA32_I_BIT | PSR_AA32_F_BIT)
-
 unsigned int kvm_sve_max_vl;
 
 int kvm_arm_init_sve(void)
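For readers decoding the constants: the EL1 reset value parks the vCPU in
EL1h with all of D, A, I and F masked, so nothing can interrupt the guest
until it unmasks them. An annotated breakdown (bit values as defined by the
arm64 ptrace headers):

    #define PSR_MODE_EL1h   0x00000005      /* EL1, using SP_EL1 */
    #define PSR_F_BIT       0x00000040      /* FIQ masked */
    #define PSR_I_BIT       0x00000080      /* IRQ masked */
    #define PSR_A_BIT       0x00000100      /* SError masked */
    #define PSR_D_BIT       0x00000200      /* Debug exceptions masked */

    /* VCPU_RESET_PSTATE_EL1 == 0x3c5: EL1h with DAIF all set. */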
From patchwork Thu May 19 13:41:50 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855194
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 75/89] KVM: arm64: Move some kvm_psci functions to a shared
 header
Date: Thu, 19 May 2022 14:41:50 +0100
Message-Id: <20220519134204.5379-76-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Fuad Tabba

Move some PSCI functions and macros to a shared header to be used by
hyp in protected mode.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 30 ++++++++++++++++++++++++++++
 arch/arm64/kvm/psci.c                | 28 --------------------------
 2 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index bb56aff4de95..82515b015eb4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -492,4 +492,34 @@ static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
 	return test_bit(feature, vcpu->arch.features);
 }
 
+/* Narrow the PSCI register arguments (r1 to r3) to 32 bits. */
+static inline void kvm_psci_narrow_to_32bit(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	/*
+	 * Zero the input registers' upper 32 bits. They will be fully
+	 * zeroed on exit, so we're fine changing them in place.
+	 */
+	for (i = 1; i < 4; i++)
+		vcpu_set_reg(vcpu, i, lower_32_bits(vcpu_get_reg(vcpu, i)));
+}
+
+static inline bool kvm_psci_valid_affinity(struct kvm_vcpu *vcpu,
+					   unsigned long affinity)
+{
+	return !(affinity & ~MPIDR_HWID_BITMASK);
+}
+
+#define AFFINITY_MASK(level)	~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)
+
+static inline unsigned long psci_affinity_mask(unsigned long affinity_level)
+{
+	if (affinity_level <= 3)
+		return MPIDR_HWID_BITMASK & AFFINITY_MASK(affinity_level);
+
+	return 0;
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */

diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index 372da09a2fab..e7baacd696ad 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -21,16 +21,6 @@
  * as described in ARM document number ARM DEN 0022A.
  */
 
-#define AFFINITY_MASK(level)	~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)
-
-static unsigned long psci_affinity_mask(unsigned long affinity_level)
-{
-	if (affinity_level <= 3)
-		return MPIDR_HWID_BITMASK & AFFINITY_MASK(affinity_level);
-
-	return 0;
-}
-
 static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -58,12 +48,6 @@ static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
 	kvm_vcpu_kick(vcpu);
 }
 
-static inline bool kvm_psci_valid_affinity(struct kvm_vcpu *vcpu,
-					   unsigned long affinity)
-{
-	return !(affinity & ~MPIDR_HWID_BITMASK);
-}
-
 static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 {
 	struct vcpu_reset_state *reset_state;
@@ -201,18 +185,6 @@ static void kvm_psci_system_reset2(struct kvm_vcpu *vcpu)
 				  KVM_SYSTEM_EVENT_RESET_FLAG_PSCI_RESET2);
 }
 
-static void kvm_psci_narrow_to_32bit(struct kvm_vcpu *vcpu)
-{
-	int i;
-
-	/*
-	 * Zero the input registers' upper 32 bits. They will be fully
-	 * zeroed on exit, so we're fine changing them in place.
-	 */
-	for (i = 1; i < 4; i++)
-		vcpu_set_reg(vcpu, i, lower_32_bits(vcpu_get_reg(vcpu, i)));
-}
-
 static unsigned long kvm_psci_check_allowed_function(struct kvm_vcpu *vcpu, u32 fn)
 {
 	switch(fn) {
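Since psci_affinity_mask() now lives in a shared header, it is worth spelling
out its semantics. The standalone sketch below uses local stand-ins for the
MPIDR constants (an assumption of the illustration) to show how each affinity
level discards one more 8-bit field from the bottom of the hardware ID, and
how invalid levels yield an empty mask:

#include <stdio.h>

/* Local stand-ins for the kernel's MPIDR constants. */
#define MPIDR_LEVEL_BITS	8
#define MPIDR_HWID_BITMASK	0xff00ffffffUL

#define AFFINITY_MASK(level)	~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)

static unsigned long psci_affinity_mask(unsigned long affinity_level)
{
	if (affinity_level <= 3)
		return MPIDR_HWID_BITMASK & AFFINITY_MASK(affinity_level);

	return 0;
}

int main(void)
{
	unsigned long lvl;

	/*
	 * Level 0 keeps the whole hwid; levels 1-3 drop Aff0, Aff0-1 and
	 * Aff0-2 respectively; anything above 3 returns 0, which the
	 * AFFINITY_INFO caller treats as invalid parameters.
	 */
	for (lvl = 0; lvl <= 4; lvl++)
		printf("level %lu -> mask %#lx\n", lvl, psci_affinity_mask(lvl));
	return 0;
}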
From patchwork Thu May 19 13:41:51 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855195
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 76/89] KVM: arm64: Factor out vcpu_reset code for core
 registers and PSCI
Date: Thu, 19 May 2022 14:41:51 +0100
Message-Id: <20220519134204.5379-77-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Fuad Tabba

Factor out logic that resets a vcpu's core registers, including
additional PSCI handling. This code will be reused when resetting VMs
in protected mode.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 41 +++++++++++++++++++++++++
 arch/arm64/kvm/reset.c               | 45 +++++-----------------------
 2 files changed, 48 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 82515b015eb4..2a79c861b8e0 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -522,4 +522,45 @@ static inline unsigned long psci_affinity_mask(unsigned long affinity_level)
 	return 0;
 }
 
+/* Reset a vcpu's core registers. */
+static inline void kvm_reset_vcpu_core(struct kvm_vcpu *vcpu)
+{
+	u32 pstate;
+
+	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+		pstate = VCPU_RESET_PSTATE_SVC;
+	} else {
+		pstate = VCPU_RESET_PSTATE_EL1;
+	}
+
+	/* Reset core registers */
+	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
+	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
+	vcpu->arch.ctxt.spsr_abt = 0;
+	vcpu->arch.ctxt.spsr_und = 0;
+	vcpu->arch.ctxt.spsr_irq = 0;
+	vcpu->arch.ctxt.spsr_fiq = 0;
+	vcpu_gp_regs(vcpu)->pstate = pstate;
+}
+
+/* PSCI reset handling for a vcpu.
+ */
+static inline void kvm_reset_vcpu_psci(struct kvm_vcpu *vcpu,
+				       struct vcpu_reset_state *reset_state)
+{
+	unsigned long target_pc = reset_state->pc;
+
+	/* Gracefully handle Thumb2 entry point */
+	if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
+		target_pc &= ~1UL;
+		vcpu_set_thumb(vcpu);
+	}
+
+	/* Propagate caller endianness */
+	if (reset_state->be)
+		kvm_vcpu_set_be(vcpu);
+
+	*vcpu_pc(vcpu) = target_pc;
+	vcpu_set_reg(vcpu, 0, reset_state->r0);
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 6bc979aece3c..4d223fae996d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -109,7 +109,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 		kfree(buf);
 		return ret;
 	}
-	
+
 	vcpu->arch.sve_state = buf;
 	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
 	return 0;
@@ -202,7 +202,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	struct vcpu_reset_state reset_state;
 	int ret;
 	bool loaded;
-	u32 pstate;
 
 	mutex_lock(&vcpu->kvm->lock);
 	reset_state = vcpu->arch.reset_state;
@@ -240,29 +239,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	switch (vcpu->arch.target) {
-	default:
-		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			pstate = VCPU_RESET_PSTATE_SVC;
-		} else {
-			pstate = VCPU_RESET_PSTATE_EL1;
-		}
-
-		if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-			ret = -EINVAL;
-			goto out;
-		}
-		break;
+	if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
+		ret = -EINVAL;
+		goto out;
 	}
 
 	/* Reset core registers */
-	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
-	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
-	vcpu->arch.ctxt.spsr_abt = 0;
-	vcpu->arch.ctxt.spsr_und = 0;
-	vcpu->arch.ctxt.spsr_irq = 0;
-	vcpu->arch.ctxt.spsr_fiq = 0;
-	vcpu_gp_regs(vcpu)->pstate = pstate;
+	kvm_reset_vcpu_core(vcpu);
 
 	/* Reset system registers */
 	kvm_reset_sys_regs(vcpu);
@@ -271,22 +254,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	 * Additional reset state handling that PSCI may have imposed on us.
 	 * Must be done after all the sys_reg reset.
 	 */
-	if (reset_state.reset) {
-		unsigned long target_pc = reset_state.pc;
-
-		/* Gracefully handle Thumb2 entry point */
-		if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
-			target_pc &= ~1UL;
-			vcpu_set_thumb(vcpu);
-		}
-
-		/* Propagate caller endianness */
-		if (reset_state.be)
-			kvm_vcpu_set_be(vcpu);
-
-		*vcpu_pc(vcpu) = target_pc;
-		vcpu_set_reg(vcpu, 0, reset_state.r0);
-	}
+	if (reset_state.reset)
+		kvm_reset_vcpu_psci(vcpu, &reset_state);
 
 	/* Reset timer */
 	ret = kvm_timer_vcpu_reset(vcpu);
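The Thumb2 handling in kvm_reset_vcpu_psci() relies on the PSCI convention
that, for AArch32 callers, bit 0 of the requested entry point selects the
instruction set. Here is a tiny runnable model of just that bit-twiddling;
the struct and helper names are invented for the illustration:

#include <stdbool.h>
#include <stdio.h>

/* Toy vcpu: just enough state to model the entry-point fixup. */
struct toy_vcpu {
	unsigned long pc;
	bool thumb;
};

static void set_entry_point(struct toy_vcpu *vcpu, bool is_32bit,
			    unsigned long target_pc)
{
	/* Bit 0 set on a 32-bit guest means "enter in Thumb state". */
	if (is_32bit && (target_pc & 1)) {
		target_pc &= ~1UL;
		vcpu->thumb = true;
	}
	vcpu->pc = target_pc;
}

int main(void)
{
	struct toy_vcpu vcpu = { 0 };

	set_entry_point(&vcpu, true, 0x80001001);
	/* Prints: pc=0x80001000 thumb=1 */
	printf("pc=%#lx thumb=%d\n", vcpu.pc, vcpu.thumb);
	return 0;
}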
From patchwork Thu May 19 13:41:52 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855198
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 77/89] KVM: arm64: Handle PSCI for protected VMs in EL2
Date: Thu, 19 May 2022 14:41:52 +0100
Message-Id: <20220519134204.5379-78-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Fuad Tabba

Add PSCI 1.1 support for protected VMs at EL2.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  13 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     |  69 +++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 320 ++++++++++++++++++++++++-
 3 files changed, 399 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 33d34cc639ea..c1987115b217 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -28,6 +28,15 @@ struct kvm_shadow_vcpu_state {
 	/* Tracks exit code for the protected guest. */
 	u32 exit_code;
 
+	/*
+	 * Tracks the power state transition of a protected vcpu.
+	 * Can be in one of three states:
+	 * PSCI_0_2_AFFINITY_LEVEL_ON
+	 * PSCI_0_2_AFFINITY_LEVEL_OFF
+	 * PSCI_0_2_AFFINITY_LEVEL_ON_PENDING
+	 */
+	int power_state;
+
 	/*
 	 * Points to the per-cpu pointer of the cpu where it's loaded, or NULL
 	 * if not loaded.
@@ -101,6 +110,10 @@ bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
 void kvm_reset_pvm_sys_regs(struct kvm_vcpu *vcpu);
 int kvm_check_pvm_sysreg_table(void);
 
+void pkvm_reset_vcpu(struct kvm_shadow_vcpu_state *shadow_state);
+
 bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code);
 
+struct kvm_shadow_vcpu_state *pkvm_mpidr_to_vcpu_state(struct kvm_shadow_vm *vm, unsigned long mpidr);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 26c8709f5494..c4778c7d8c4b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -21,6 +21,7 @@
 #include
 #include
+#include
 
 #include "../../sys_regs.h"
 
@@ -46,8 +47,37 @@ static void handle_pvm_entry_wfx(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 
 static void handle_pvm_entry_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 {
+	u32 psci_fn = smccc_get_function(shadow_vcpu);
 	u64 ret = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]);
 
+	switch (psci_fn) {
+	case PSCI_0_2_FN_CPU_ON:
+	case PSCI_0_2_FN64_CPU_ON:
+		/*
+		 * Check whether the cpu_on request to the host was successful.
+		 * If not, reset the vcpu state from ON_PENDING to OFF.
+		 * This could happen if this vcpu attempted to turn on the
+		 * other vcpu while the other one is in the process of turning
+		 * itself off.
+		 */
+		if (ret != PSCI_RET_SUCCESS) {
+			unsigned long cpu_id = smccc_get_arg1(shadow_vcpu);
+			struct kvm_shadow_vm *shadow_vm;
+			struct kvm_shadow_vcpu_state *target_vcpu_state;
+
+			shadow_vm = get_shadow_vm(shadow_vcpu);
+			target_vcpu_state = pkvm_mpidr_to_vcpu_state(shadow_vm, cpu_id);
+
+			if (target_vcpu_state && READ_ONCE(target_vcpu_state->power_state) == PSCI_0_2_AFFINITY_LEVEL_ON_PENDING)
+				WRITE_ONCE(target_vcpu_state->power_state, PSCI_0_2_AFFINITY_LEVEL_OFF);
+
+			ret = PSCI_RET_INTERNAL_FAILURE;
+		}
+		break;
+	default:
+		break;
+	}
+
 	vcpu_set_reg(shadow_vcpu, 0, ret);
 }
 
@@ -206,13 +236,45 @@ static void handle_pvm_exit_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 
 static void handle_pvm_exit_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu)
 {
-	int i;
+	int n, i;
+
+	switch (smccc_get_function(shadow_vcpu)) {
+	/*
+	 * CPU_ON takes 3 arguments, however, to wake up the target vcpu the
+	 * host only needs to know the target's cpu_id, which is passed as the
+	 * first argument. The processing of the reset state is done at hyp.
+	 */
+	case PSCI_0_2_FN_CPU_ON:
+	case PSCI_0_2_FN64_CPU_ON:
+		n = 2;
+		break;
+
+	case PSCI_0_2_FN_CPU_OFF:
+	case PSCI_0_2_FN_SYSTEM_OFF:
+	case PSCI_0_2_FN_SYSTEM_RESET:
+	case PSCI_0_2_FN_CPU_SUSPEND:
+	case PSCI_0_2_FN64_CPU_SUSPEND:
+		n = 1;
+		break;
+
+	case PSCI_1_1_FN_SYSTEM_RESET2:
+	case PSCI_1_1_FN64_SYSTEM_RESET2:
+		n = 3;
+		break;
+
+	/*
+	 * The rest are either blocked or handled by HYP, so we should
+	 * really never be here.
+	 */
+	default:
+		BUG();
+	}
 
 	WRITE_ONCE(host_vcpu->arch.fault.esr_el2, shadow_vcpu->arch.fault.esr_el2);
 
 	/* Pass the hvc function id (r0) as well as any potential arguments. */
-	for (i = 0; i < 8; i++)
+	for (i = 0; i < n; i++)
 		WRITE_ONCE(host_vcpu->arch.ctxt.regs.regs[i], vcpu_get_reg(shadow_vcpu, i));
 }
 
@@ -430,6 +492,9 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 	shadow_entry_exit_handler_fn ec_handler;
 	u8 esr_ec;
 
+	if (READ_ONCE(shadow_state->power_state) == PSCI_0_2_AFFINITY_LEVEL_ON_PENDING)
+		pkvm_reset_vcpu(shadow_state);
+
 	/*
 	 * If we deal with a non-protected guest and the state is potentially
 	 * dirty (from a host perspective), copy the state back into the shadow.
 	 */

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 92e60ebeced5..d44f524c936b 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -8,6 +8,7 @@
 #include
 #include
+#include
 
 #include
 
@@ -425,6 +426,27 @@ static int init_ptrauth(struct kvm_vcpu *shadow_vcpu)
 	return ret;
 }
 
+static int init_shadow_psci(struct kvm_shadow_vm *vm,
+			    struct kvm_shadow_vcpu_state *shadow_vcpu_state,
+			    struct kvm_vcpu *host_vcpu)
+{
+	struct kvm_vcpu *shadow_vcpu = &shadow_vcpu_state->shadow_vcpu;
+	struct vcpu_reset_state *reset_state = &shadow_vcpu->arch.reset_state;
+
+	if (test_bit(KVM_ARM_VCPU_POWER_OFF, shadow_vcpu->arch.features)) {
+		reset_state->reset = false;
+		shadow_vcpu_state->power_state = PSCI_0_2_AFFINITY_LEVEL_OFF;
+		return 0;
+	}
+
+	reset_state->pc = READ_ONCE(host_vcpu->arch.ctxt.regs.pc);
+	reset_state->r0 = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]);
+	reset_state->reset = true;
+	shadow_vcpu_state->power_state = PSCI_0_2_AFFINITY_LEVEL_ON_PENDING;
+
+	return 0;
+}
+
 static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 			       struct kvm_vcpu **vcpu_array,
 			       int *last_ran,
@@ -485,6 +507,11 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		pkvm_vcpu_init_traps(shadow_vcpu, host_vcpu);
 		kvm_reset_pvm_sys_regs(shadow_vcpu);
 
+		/* Must be done after resetting sys registers. */
+		ret = init_shadow_psci(vm, shadow_vcpu_state, host_vcpu);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -800,6 +827,297 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle)
 	return err;
 }
 
+/*
+ * This function sets the registers on the vcpu to their architecturally
+ * defined reset values.
+ *
+ * Note: Can only be called by the vcpu on itself, after it has been turned on.
+ */
+void pkvm_reset_vcpu(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_vcpu *vcpu = &shadow_state->shadow_vcpu;
+	struct vcpu_reset_state *reset_state = &vcpu->arch.reset_state;
+
+	WARN_ON(!reset_state->reset);
+
+	init_ptrauth(vcpu);
+
+	kvm_reset_vcpu_core(vcpu);
+	kvm_reset_pvm_sys_regs(vcpu);
+
+	/* Must be done after resetting sys registers. */
+	kvm_reset_vcpu_psci(vcpu, reset_state);
+
+	reset_state->reset = false;
+
+	shadow_state->exit_code = 0;
+
+	WARN_ON(shadow_state->power_state != PSCI_0_2_AFFINITY_LEVEL_ON_PENDING);
+	WRITE_ONCE(vcpu->arch.power_off, false);
+	WRITE_ONCE(shadow_state->power_state, PSCI_0_2_AFFINITY_LEVEL_ON);
+}
+
+struct kvm_shadow_vcpu_state *pkvm_mpidr_to_vcpu_state(struct kvm_shadow_vm *vm, unsigned long mpidr)
+{
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	mpidr &= MPIDR_HWID_BITMASK;
+
+	for (i = 0; i < vm->kvm.created_vcpus; i++) {
+		vcpu = &vm->shadow_vcpu_states[i].shadow_vcpu;
+
+		if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
+			return &vm->shadow_vcpu_states[i];
+	}
+
+	return NULL;
+}
+
+/*
+ * Returns true if the hypervisor has handled the PSCI call, and control should
+ * go back to the guest, or false if the host needs to do some additional work
+ * (i.e., wake up the vcpu).
+ */
+static bool pvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+{
+	struct kvm_shadow_vcpu_state *target_vcpu_state;
+	struct kvm_shadow_vm *vm;
+	struct vcpu_reset_state *reset_state;
+	unsigned long cpu_id;
+	unsigned long hvc_ret_val;
+	int power_state;
+
+	cpu_id = smccc_get_arg1(source_vcpu);
+	if (!kvm_psci_valid_affinity(source_vcpu, cpu_id)) {
+		hvc_ret_val = PSCI_RET_INVALID_PARAMS;
+		goto error;
+	}
+
+	vm = get_shadow_vm(source_vcpu);
+	target_vcpu_state = pkvm_mpidr_to_vcpu_state(vm, cpu_id);
+
+	/* Make sure the caller requested a valid vcpu. */
+	if (!target_vcpu_state) {
+		hvc_ret_val = PSCI_RET_INVALID_PARAMS;
+		goto error;
+	}
+
+	/*
+	 * Make sure the requested vcpu is not on to begin with.
+	 * Atomic to avoid races between vcpus trying to power on the same vcpu.
+	 */
+	power_state = cmpxchg(&target_vcpu_state->power_state,
+			      PSCI_0_2_AFFINITY_LEVEL_OFF,
+			      PSCI_0_2_AFFINITY_LEVEL_ON_PENDING);
+	switch (power_state) {
+	case PSCI_0_2_AFFINITY_LEVEL_ON_PENDING:
+		hvc_ret_val = PSCI_RET_ON_PENDING;
+		goto error;
+	case PSCI_0_2_AFFINITY_LEVEL_ON:
+		hvc_ret_val = PSCI_RET_ALREADY_ON;
+		goto error;
+	case PSCI_0_2_AFFINITY_LEVEL_OFF:
+		break;
+	default:
+		hvc_ret_val = PSCI_RET_INTERNAL_FAILURE;
+		goto error;
+	}
+
+	reset_state = &target_vcpu_state->shadow_vcpu.arch.reset_state;
+
+	reset_state->pc = smccc_get_arg2(source_vcpu);
+	reset_state->r0 = smccc_get_arg3(source_vcpu);
+
+	/* Propagate caller endianness */
+	reset_state->be = kvm_vcpu_is_be(source_vcpu);
+
+	reset_state->reset = true;
+
+	/*
+	 * Return to the host, which should make the KVM_REQ_VCPU_RESET request
+	 * as well as kvm_vcpu_wake_up() to schedule the vcpu.
+	 */
+	return false;
+
+error:
+	/* If there's an error, go straight back to the guest. */
+	smccc_set_retval(source_vcpu, hvc_ret_val, 0, 0, 0);
+	return true;
+}
+
+static bool pvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
+{
+	int i, matching_cpus = 0;
+	unsigned long mpidr;
+	unsigned long target_affinity;
+	unsigned long target_affinity_mask;
+	unsigned long lowest_affinity_level;
+	struct kvm_shadow_vm *vm;
+	unsigned long hvc_ret_val;
+
+	target_affinity = smccc_get_arg1(vcpu);
+	lowest_affinity_level = smccc_get_arg2(vcpu);
+
+	if (!kvm_psci_valid_affinity(vcpu, target_affinity)) {
+		hvc_ret_val = PSCI_RET_INVALID_PARAMS;
+		goto done;
+	}
+
+	/* Determine target affinity mask */
+	target_affinity_mask = psci_affinity_mask(lowest_affinity_level);
+	if (!target_affinity_mask) {
+		hvc_ret_val = PSCI_RET_INVALID_PARAMS;
+		goto done;
+	}
+
+	vm = get_shadow_vm(vcpu);
+
+	/* Ignore other bits of target affinity */
+	target_affinity &= target_affinity_mask;
+
+	hvc_ret_val = PSCI_0_2_AFFINITY_LEVEL_OFF;
+
+	/*
+	 * If at least one vcpu matching target affinity is ON then return ON,
+	 * otherwise if at least one is PENDING_ON then return PENDING_ON.
+	 * Otherwise, return OFF.
+	 */
+	for (i = 0; i < vm->kvm.created_vcpus; i++) {
+		struct kvm_shadow_vcpu_state *tmp = &vm->shadow_vcpu_states[i];
+
+		mpidr = kvm_vcpu_get_mpidr_aff(&tmp->shadow_vcpu);
+
+		if ((mpidr & target_affinity_mask) == target_affinity) {
+			int power_state;
+
+			matching_cpus++;
+			power_state = READ_ONCE(tmp->power_state);
+			switch (power_state) {
+			case PSCI_0_2_AFFINITY_LEVEL_ON_PENDING:
+				hvc_ret_val = PSCI_0_2_AFFINITY_LEVEL_ON_PENDING;
+				break;
+			case PSCI_0_2_AFFINITY_LEVEL_ON:
+				hvc_ret_val = PSCI_0_2_AFFINITY_LEVEL_ON;
+				goto done;
+			case PSCI_0_2_AFFINITY_LEVEL_OFF:
+				break;
+			default:
+				hvc_ret_val = PSCI_RET_INTERNAL_FAILURE;
+				goto done;
+			}
+		}
+	}
+
+	if (!matching_cpus)
+		hvc_ret_val = PSCI_RET_INVALID_PARAMS;
+
+done:
+	/* Nothing to be handled by the host. Go back to the guest. */
+	smccc_set_retval(vcpu, hvc_ret_val, 0, 0, 0);
+	return true;
+}
+
+/*
+ * Returns true if the hypervisor has handled the PSCI call, and control should
+ * go back to the guest, or false if the host needs to do some additional work
+ * (e.g., turn off and update vcpu scheduling status).
+ */
+static bool pvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
+{
+	struct kvm_shadow_vcpu_state *vcpu_state = get_shadow_state(vcpu);
+
+	WARN_ON(vcpu->arch.power_off);
+	WARN_ON(vcpu_state->power_state != PSCI_0_2_AFFINITY_LEVEL_ON);
+
+	WRITE_ONCE(vcpu->arch.power_off, true);
+	WRITE_ONCE(vcpu_state->power_state, PSCI_0_2_AFFINITY_LEVEL_OFF);
+
+	/* Return to the host so that it can finish powering off the vcpu. */
+	return false;
+}
+
+static bool pvm_psci_version(struct kvm_vcpu *vcpu)
+{
+	/* Nothing to be handled by the host. Go back to the guest. */
+	smccc_set_retval(vcpu, KVM_ARM_PSCI_1_1, 0, 0, 0);
+	return true;
+}
+
+static bool pvm_psci_not_supported(struct kvm_vcpu *vcpu)
+{
+	/* Nothing to be handled by the host. Go back to the guest. */
+	smccc_set_retval(vcpu, PSCI_RET_NOT_SUPPORTED, 0, 0, 0);
+	return true;
+}
+
+static bool pvm_psci_features(struct kvm_vcpu *vcpu)
+{
+	u32 feature = smccc_get_arg1(vcpu);
+	unsigned long val;
+
+	switch (feature) {
+	case PSCI_0_2_FN_PSCI_VERSION:
+	case PSCI_0_2_FN_CPU_SUSPEND:
+	case PSCI_0_2_FN64_CPU_SUSPEND:
+	case PSCI_0_2_FN_CPU_OFF:
+	case PSCI_0_2_FN_CPU_ON:
+	case PSCI_0_2_FN64_CPU_ON:
+	case PSCI_0_2_FN_AFFINITY_INFO:
+	case PSCI_0_2_FN64_AFFINITY_INFO:
+	case PSCI_0_2_FN_SYSTEM_OFF:
+	case PSCI_0_2_FN_SYSTEM_RESET:
+	case PSCI_1_0_FN_PSCI_FEATURES:
+	case PSCI_1_1_FN_SYSTEM_RESET2:
+	case PSCI_1_1_FN64_SYSTEM_RESET2:
+	case ARM_SMCCC_VERSION_FUNC_ID:
+		val = PSCI_RET_SUCCESS;
+		break;
+	default:
+		val = PSCI_RET_NOT_SUPPORTED;
+		break;
+	}
+
+	/* Nothing to be handled by the host. Go back to the guest.
+	 */
+	smccc_set_retval(vcpu, val, 0, 0, 0);
+	return true;
+}
+
+static bool pkvm_handle_psci(struct kvm_vcpu *vcpu)
+{
+	u32 psci_fn = smccc_get_function(vcpu);
+
+	switch (psci_fn) {
+	case PSCI_0_2_FN_CPU_ON:
+		kvm_psci_narrow_to_32bit(vcpu);
+		fallthrough;
+	case PSCI_0_2_FN64_CPU_ON:
+		return pvm_psci_vcpu_on(vcpu);
+	case PSCI_0_2_FN_CPU_OFF:
+		return pvm_psci_vcpu_off(vcpu);
+	case PSCI_0_2_FN_AFFINITY_INFO:
+		kvm_psci_narrow_to_32bit(vcpu);
+		fallthrough;
+	case PSCI_0_2_FN64_AFFINITY_INFO:
+		return pvm_psci_vcpu_affinity_info(vcpu);
+	case PSCI_0_2_FN_PSCI_VERSION:
+		return pvm_psci_version(vcpu);
+	case PSCI_1_0_FN_PSCI_FEATURES:
+		return pvm_psci_features(vcpu);
+	case PSCI_0_2_FN_SYSTEM_RESET:
+	case PSCI_0_2_FN_CPU_SUSPEND:
+	case PSCI_0_2_FN64_CPU_SUSPEND:
+	case PSCI_0_2_FN_SYSTEM_OFF:
+	case PSCI_1_1_FN_SYSTEM_RESET2:
+	case PSCI_1_1_FN64_SYSTEM_RESET2:
+		return false; /* Handled by the host. */
+	default:
+		break;
+	}
+
+	return pvm_psci_not_supported(vcpu);
+}
+
 /*
  * Handler for protected VM HVC calls.
  *
@@ -816,6 +1134,6 @@ bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
 		smccc_set_retval(vcpu, ARM_SMCCC_VERSION_1_1, 0, 0, 0);
 		return true;
 	default:
-		return false;
+		return pkvm_handle_psci(vcpu);
 	}
 }
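The interesting concurrency detail in pvm_psci_vcpu_on() above is the single
cmpxchg() that moves the target from OFF to ON_PENDING: two vcpus racing to
power on the same target cannot both win. Here is a compact user-space
rendition of the same state machine using C11 atomics; the enum and return
conventions are simplifications, not the kernel's:

#include <stdatomic.h>
#include <stdio.h>

enum { VCPU_OFF, VCPU_ON_PENDING, VCPU_ON };

static _Atomic int power_state = VCPU_OFF;

/* Returns 0 if we won the OFF -> ON_PENDING transition. */
static int try_power_on(void)
{
	int expected = VCPU_OFF;

	/* Exactly one racing caller sees the compare-exchange succeed. */
	if (!atomic_compare_exchange_strong(&power_state, &expected,
					    VCPU_ON_PENDING))
		return expected;	/* ALREADY_ON / ON_PENDING in the patch */

	return 0;
}

int main(void)
{
	printf("first attempt:  %d\n", try_power_on());	/* 0: success */
	printf("second attempt: %d\n", try_power_on());	/* 1: ON_PENDING */
	return 0;
}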
From patchwork Thu May 19 13:41:53 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855199
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 78/89] KVM: arm64: Don't expose TLBI hypercalls after
 de-privilege
Date: Thu, 19 May 2022 14:41:53 +0100
Message-Id: <20220519134204.5379-79-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

Now that TLB invalidation is handled entirely at EL2 for both protected
and non-protected guests once protected KVM has initialised, unplug the
now-unused TLBI hypercalls.
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_asm.h   | 8 ++++----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7af0b7695a2c..d020c4cce888 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -59,6 +59,10 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config,
+	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize,
 
 	/* Hypercalls available after pKVM finalisation */
@@ -68,10 +72,6 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_map_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
-	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
-	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
-	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
-	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_save_vmcr_aprs,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs,

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index c4778c7d8c4b..694e0071b13e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -1030,6 +1030,10 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_enable_ssbs),
 	HANDLE_FUNC(__vgic_v3_init_lrs),
 	HANDLE_FUNC(__vgic_v3_get_gic_config),
+	HANDLE_FUNC(__kvm_flush_vm_context),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__pkvm_prot_finalize),
 
 	HANDLE_FUNC(__pkvm_host_share_hyp),
@@ -1038,10 +1042,6 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_map_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
-	HANDLE_FUNC(__kvm_flush_vm_context),
-	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
-	HANDLE_FUNC(__kvm_tlb_flush_vmid),
-	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
 	HANDLE_FUNC(__vgic_v3_save_vmcr_aprs),
 	HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs),
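The effect of the enum reshuffle is easiest to see against a dispatch check
of the form "reject IDs below the finalisation marker once pKVM is live":
moving the four TLBI entries above the __pkvm_prot_finalize marker is what
actually unplugs them. A toy model of that gating follows; the names and the
exact comparison are assumptions based on the table layout shown in the diff,
not quoted from the kernel:

#include <stdio.h>

enum host_func {
	FUNC_VGIC_GET_GIC_CONFIG,
	FUNC_TLB_FLUSH_VMID,	/* moved into this group by the patch */
	FUNC_PROT_FINALIZE,	/* boundary marker */
	FUNC_VCPU_RUN,		/* still available after finalisation */
	NR_FUNCS,
};

static int pkvm_finalized;

static int handle_host_hcall(unsigned int id)
{
	unsigned int min = pkvm_finalized ? FUNC_PROT_FINALIZE : 0;

	if (id < min || id >= NR_FUNCS)
		return -1;	/* hypercall no longer exposed */
	return 0;
}

int main(void)
{
	pkvm_finalized = 1;
	printf("TLB flush after finalise: %d\n",
	       handle_host_hcall(FUNC_TLB_FLUSH_VMID));	/* -1 */
	printf("vcpu_run after finalise:  %d\n",
	       handle_host_hcall(FUNC_VCPU_RUN));	/* 0 */
	return 0;
}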
From patchwork Thu May 19 13:41:54 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855200
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 79/89] KVM: arm64: Add is_pkvm_initialized() helper
Date: Thu, 19 May 2022 14:41:54 +0100
Message-Id: <20220519134204.5379-80-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

Add a helper to check whether the pkvm static key is enabled, to ease
the introduction of pkvm hooks in other parts of the code.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/virt.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 0e80db4327b6..3d5bfcdb49aa 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -74,6 +74,12 @@ void __hyp_reset_vectors(void);
 
 DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 
+static inline bool is_pkvm_initialized(void)
+{
+	return IS_ENABLED(CONFIG_KVM) &&
+	       static_branch_likely(&kvm_protected_mode_initialized);
+}
+
 /* Reports the availability of HYP mode */
 static inline bool is_hyp_mode_available(void)
 {
@@ -81,8 +87,7 @@ static inline bool is_hyp_mode_available(void)
 	 * If KVM protected mode is initialized, all CPUs must have been booted
 	 * in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
 	 */
-	if (IS_ENABLED(CONFIG_KVM) &&
-	    static_branch_likely(&kvm_protected_mode_initialized))
+	if (is_pkvm_initialized())
 		return true;
 
 	return (__boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
@@ -96,8 +101,7 @@ static inline bool is_hyp_mode_mismatched(void)
 	 * If KVM protected mode is initialized, all CPUs must have been booted
 	 * in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
 	 */
-	if (IS_ENABLED(CONFIG_KVM) &&
-	    static_branch_likely(&kvm_protected_mode_initialized))
+	if (is_pkvm_initialized())
 		return false;
 
 	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
From patchwork Thu May 19 13:41:55 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855201
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 80/89] KVM: arm64: Refactor enter_exception64()
Date: Thu, 19 May 2022 14:41:55 +0100
Message-Id: <20220519134204.5379-81-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

In order to simplify the injection of exceptions into the host in pkvm
context, factor the code that computes the exception offset from
VBAR_EL1 and the new cpsr out of enter_exception64().

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_emulate.h |  5 ++
 arch/arm64/kvm/hyp/exception.c       | 89 ++++++++++++++++------------
 2 files changed, 57 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 2a79c861b8e0..8b6c391bbee8 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -41,6 +41,11 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
+unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
+				  enum exception_type type);
+unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
+				unsigned long sctlr, unsigned long mode);
+
 void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 
 static inline int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)

diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index c5d009715402..14a80b0e2f91 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -60,31 +60,12 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
 	vcpu->arch.ctxt.spsr_und = val;
 }
 
-/*
- * This performs the exception entry at a given EL (@target_mode), stashing PC
- * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
- * The EL passed to this function *must* be a non-secure, privileged mode with
- * bit 0 being set (PSTATE.SP == 1).
- *
- * When an exception is taken, most PSTATE fields are left unchanged in the
- * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
- * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
- * layouts, so we don't need to shuffle these for exceptions from AArch32 EL0.
- *
- * For the SPSR_ELx layout for AArch64, see ARM DDI 0487E.a page C5-429.
- * For the SPSR_ELx layout for AArch32, see ARM DDI 0487E.a page C5-426.
- *
- * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
- * MSB to LSB.
- */
-static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
-			      enum exception_type type)
+unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
+				  enum exception_type type)
 {
-	unsigned long sctlr, vbar, old, new, mode;
+	u64 mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
 	u64 exc_offset;
 
-	mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
-
 	if (mode == target_mode)
 		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
 	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
@@ -94,28 +75,32 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 	else
 		exc_offset = LOWER_EL_AArch32_VECTOR;
 
-	switch (target_mode) {
-	case PSR_MODE_EL1h:
-		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
-		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
-		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
-		break;
-	default:
-		/* Don't do that */
-		BUG();
-	}
-
-	*vcpu_pc(vcpu) = vbar + exc_offset + type;
+	return exc_offset + type;
+}
 
-	old = *vcpu_cpsr(vcpu);
-	new = 0;
+/*
+ * When an exception is taken, most PSTATE fields are left unchanged in the
+ * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
+ * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
+ * layouts, so we don't need to shuffle these for exceptions from AArch32 EL0.
+ *
+ * For the SPSR_ELx layout for AArch64, see ARM DDI 0487E.a page C5-429.
+ * For the SPSR_ELx layout for AArch32, see ARM DDI 0487E.a page C5-426.
+ *
+ * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
+ * MSB to LSB.
+ */
+unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
+				unsigned long sctlr, unsigned long target_mode)
+{
+	u64 new = 0;
 
 	new |= (old & PSR_N_BIT);
 	new |= (old & PSR_Z_BIT);
 	new |= (old & PSR_C_BIT);
 	new |= (old & PSR_V_BIT);
 
-	if (kvm_has_mte(vcpu->kvm))
+	if (has_mte)
 		new |= PSR_TCO_BIT;
 
 	new |= (old & PSR_DIT_BIT);
@@ -151,6 +136,36 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	new |= target_mode;
 
+	return new;
+}
+
+/*
+ * This performs the exception entry at a given EL (@target_mode), stashing PC
+ * and PSTATE into ELR and SPSR respectively, and computes the new PC/PSTATE.
+ * The EL passed to this function *must* be a non-secure, privileged mode with
+ * bit 0 being set (PSTATE.SP == 1).
+ */
+static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
+			      enum exception_type type)
+{
+	u64 offset = get_except64_offset(*vcpu_cpsr(vcpu), target_mode, type);
+	unsigned long sctlr, vbar, old, new;
+
+	switch (target_mode) {
+	case PSR_MODE_EL1h:
+		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
+		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
+		break;
+	default:
+		/* Don't do that */
+		BUG();
+	}
+
+	*vcpu_pc(vcpu) = vbar + offset;
+
+	old = *vcpu_cpsr(vcpu);
+	new = get_except64_cpsr(old, kvm_has_mte(vcpu->kvm), sctlr, target_mode);
 	*vcpu_cpsr(vcpu) = new;
 	__vcpu_write_spsr(vcpu, old);
 }
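Since get_except64_offset() is now shared, it may help to restate what it
computes: an offset within the architectural AArch64 exception vector table.
The standalone re-implementation below writes the constants out from the Arm
ARM vector-table layout (they are illustrative, not quoted from the patch):
the 0x200-aligned group is picked from the relationship between the saved PSR
and the target mode, then the per-type slot is added:

#include <stdio.h>

#define PSR_MODE_MASK		0x0000000f
#define PSR_MODE32_BIT		0x00000010
#define PSR_MODE_THREAD_BIT	0x00000001
#define PSR_MODE_EL0t		0x00000000
#define PSR_MODE_EL1h		0x00000005

enum exception_type {
	except_type_sync	= 0,
	except_type_irq		= 0x80,
	except_type_fiq		= 0x100,
	except_type_serror	= 0x180,
};

#define CURRENT_EL_SP_EL0_VECTOR	0x000
#define CURRENT_EL_SP_ELx_VECTOR	0x200
#define LOWER_EL_AArch64_VECTOR		0x400
#define LOWER_EL_AArch32_VECTOR		0x600

static unsigned long except64_offset(unsigned long psr,
				     unsigned long target_mode,
				     enum exception_type type)
{
	unsigned long mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
	unsigned long exc_offset;

	if (mode == target_mode)			/* e.g. EL1h -> EL1h */
		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode) /* EL1t -> EL1h */
		exc_offset = CURRENT_EL_SP_EL0_VECTOR;
	else if (!(mode & PSR_MODE32_BIT))		/* lower EL, AArch64 */
		exc_offset = LOWER_EL_AArch64_VECTOR;
	else						/* lower EL, AArch32 */
		exc_offset = LOWER_EL_AArch32_VECTOR;

	return exc_offset + type;
}

int main(void)
{
	/* Sync exception from EL1h into EL1h lands at VBAR + 0x200. */
	printf("%#lx\n", except64_offset(PSR_MODE_EL1h, PSR_MODE_EL1h,
					 except_type_sync));
	/* IRQ from AArch64 EL0 into EL1h lands at VBAR + 0x480. */
	printf("%#lx\n", except64_offset(PSR_MODE_EL0t, PSR_MODE_EL1h,
					 except_type_irq));
	return 0;
}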
From patchwork Thu May 19 13:41:56 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855202
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 81/89] KVM: arm64: Inject SIGSEGV on illegal accesses
Date: Thu, 19 May 2022 14:41:56 +0100
Message-Id: <20220519134204.5379-82-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

The pKVM hypervisor will currently panic if the host tries to access
memory that it doesn't own (e.g. protected guest memory). Sadly, as
guest memory can still be mapped into the VMM's address space,
userspace can trivially crash the kernel/hypervisor by poking into
guest memory.

To prevent this, inject the abort back into the host with S1PTW set in
the ESR, allowing the host to differentiate this abort from normal
userspace faults and inject a SIGSEGV cleanly.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 50 ++++++++++++++++++++++++++-
 arch/arm64/mm/fault.c                 | 22 ++++++++++++
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d0544259eb01..8459dc33e460 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -549,6 +549,50 @@ static int host_stage2_idmap(u64 addr)
 	return ret;
 }
 
+static void host_inject_abort(struct kvm_cpu_context *host_ctxt)
+{
+	u64 spsr = read_sysreg_el2(SYS_SPSR);
+	u64 esr = read_sysreg_el2(SYS_ESR);
+	u64 ventry, ec;
+
+	/* Repaint the ESR to report a same-level fault if taken from EL1 */
+	if ((spsr & PSR_MODE_MASK) != PSR_MODE_EL0t) {
+		ec = ESR_ELx_EC(esr);
+		if (ec == ESR_ELx_EC_DABT_LOW)
+			ec = ESR_ELx_EC_DABT_CUR;
+		else if (ec == ESR_ELx_EC_IABT_LOW)
+			ec = ESR_ELx_EC_IABT_CUR;
+		else
+			WARN_ON(1);
+		esr &= ~ESR_ELx_EC_MASK;
+		esr |= ec << ESR_ELx_EC_SHIFT;
+	}
+
+	/*
+	 * Since S1PTW should only ever be set for stage-2 faults, we're pretty
+	 * much guaranteed that it won't be set in ESR_EL1 by the hardware. So,
+	 * let's use that bit to allow the host abort handler to differentiate
+	 * this abort from normal userspace faults.
+	 *
+	 * Note: although S1PTW is RES0 at EL1, it is guaranteed by the
+	 * architecture to be backed by flops, so it should be safe to use.
+	 */
+	esr |= ESR_ELx_S1PTW;
+
+	write_sysreg_el1(esr, SYS_ESR);
+	write_sysreg_el1(spsr, SYS_SPSR);
+	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
+	write_sysreg_el1(read_sysreg_el2(SYS_FAR), SYS_FAR);
+
+	ventry = read_sysreg_el1(SYS_VBAR);
+	ventry += get_except64_offset(spsr, PSR_MODE_EL1h, except_type_sync);
+	write_sysreg_el2(ventry, SYS_ELR);
+
+	spsr = get_except64_cpsr(spsr, system_supports_mte(),
+				 read_sysreg_el1(SYS_SCTLR), PSR_MODE_EL1h);
+	write_sysreg_el2(spsr, SYS_SPSR);
+}
+
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu_fault_info fault;
@@ -560,7 +604,11 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	addr = (fault.hpfar_el2 & HPFAR_MASK) << 8;
 
 	ret = host_stage2_idmap(addr);
-	BUG_ON(ret && ret != -EAGAIN);
+
+	if (ret == -EPERM)
+		host_inject_abort(host_ctxt);
+	else
+		BUG_ON(ret && ret != -EAGAIN);
 }
 
 struct pkvm_mem_transition {
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..2b2c16a2535c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include
 
 struct fault_info {
 	int (*fn)(unsigned long far, unsigned int esr,
@@ -257,6 +258,15 @@ static inline bool is_el1_permission_fault(unsigned long addr, unsigned int esr,
 	return false;
 }
 
+static bool is_pkvm_stage2_abort(unsigned int esr)
+{
+	/*
+	 * S1PTW should only ever be set in ESR_EL1 if the pkvm hypervisor
+	 * injected a stage-2 abort -- see host_inject_abort().
+	 */
+	return is_pkvm_initialized() && (esr & ESR_ELx_S1PTW);
+}
+
 static bool __kprobes is_spurious_el1_translation_fault(unsigned long addr,
 							unsigned int esr,
 							struct pt_regs *regs)
@@ -268,6 +278,9 @@ static bool __kprobes is_spurious_el1_translation_fault(unsigned long addr,
 	    (esr & ESR_ELx_FSC_TYPE) != ESR_ELx_FSC_FAULT)
 		return false;
 
+	if (is_pkvm_stage2_abort(esr))
+		return false;
+
 	local_irq_save(flags);
 	asm volatile("at s1e1r, %0" :: "r" (addr));
 	isb();
@@ -383,6 +396,8 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 		msg = "read from unreadable memory";
 	} else if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
+	} else if (is_pkvm_stage2_abort(esr)) {
+		msg = "access to hypervisor-protected memory";
 	} else {
 		if (kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs))
 			return;
@@ -572,6 +587,13 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 				      addr, esr, regs);
 	}
 
+	if (is_pkvm_stage2_abort(esr)) {
+		if (!user_mode(regs))
+			goto no_context;
+		arm64_force_sig_fault(SIGSEGV, SEGV_ACCERR, far, "stage-2 fault");
+		return 0;
+	}
+
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
 	/*
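The net effect of host_inject_abort() is a shuffle between the EL2 and
EL1 register banks; summarised as a sketch (this only restates the code
above, it adds no new behaviour):

	/*
	 * ESR_EL1  <- ESR_EL2   EC repainted to a same-level abort,
	 *                       S1PTW set as a software marker
	 * SPSR_EL1 <- SPSR_EL2  interrupted host context
	 * ELR_EL1  <- ELR_EL2   faulting host PC
	 * FAR_EL1  <- FAR_EL2   faulting virtual address
	 * ELR_EL2  <- VBAR_EL1 + synchronous vector offset for EL1h
	 * SPSR_EL2 <- exception-entry PSTATE for EL1h
	 */

so the ERET from the hypervisor makes the host take what looks like an
ordinary same-level abort, which do_page_fault() recognises via the
S1PTW marker and converts into a SIGSEGV for user accesses, while kernel
accesses remain fatal via __do_kernel_fault().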
From patchwork Thu May 19 13:41:57 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855203
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 82/89] KVM: arm64: Support TLB invalidation in guest context
Date: Thu, 19 May 2022 14:41:57 +0100
Message-Id: <20220519134204.5379-83-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

Typically, TLB invalidation of guest stage-2 mappings using nVHE is
performed by a hypercall originating from the host. For the invalidation
instruction to be effective, therefore, __tlb_switch_to_{guest,host}()
swizzle the active stage-2 context around the TLBI instruction.

With guest-to-host memory sharing and unsharing hypercalls originating
from the guest under pKVM, there is now a need to support both guest and
host VMID invalidations issued from guest context.
Replace the __tlb_switch_to_{guest,host}() functions with a more general
{enter,exit}_vmid_context() implementation which supports being invoked
from guest context and acts as a no-op if the target context matches the
running context.

Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/tlb.c | 96 ++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f589..3f5601176fab 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -11,26 +11,62 @@
 #include
 
 struct tlb_inv_context {
-	u64 tcr;
+	struct kvm_s2_mmu *mmu;
+	u64 tcr;
+	u64 sctlr;
 };
 
-static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
-				  struct tlb_inv_context *cxt)
+static void enter_vmid_context(struct kvm_s2_mmu *mmu,
+			       struct tlb_inv_context *cxt)
 {
+	struct kvm_s2_mmu *host_mmu = &host_kvm.arch.mmu;
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_vcpu *vcpu;
+
+	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	vcpu = host_ctxt->__hyp_running_vcpu;
+	cxt->mmu = NULL;
+
+	/*
+	 * If we're already in the desired context, then there's nothing
+	 * to do.
+	 */
+	if (vcpu) {
+		if (mmu == vcpu->arch.hw_mmu || WARN_ON(mmu != host_mmu))
+			return;
+	} else if (mmu == host_mmu) {
+		return;
+	}
+
+	cxt->mmu = mmu;
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
 		/*
 		 * For CPUs that are affected by ARM 1319367, we need to
-		 * avoid a host Stage-1 walk while we have the guest's
-		 * VMID set in the VTTBR in order to invalidate TLBs.
-		 * We're guaranteed that the S1 MMU is enabled, so we can
-		 * simply set the EPD bits to avoid any further TLB fill.
+		 * avoid a Stage-1 walk with the old VMID while we have
+		 * the new VMID set in the VTTBR in order to invalidate TLBs.
+		 * We're guaranteed that the host S1 MMU is enabled, so
+		 * we can simply set the EPD bits to avoid any further
+		 * TLB fill. For guests, we ensure that the S1 MMU is
+		 * temporarily enabled in the next context.
 		 */
 		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
 		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
 		write_sysreg_el1(val, SYS_TCR);
 		isb();
+
+		if (vcpu) {
+			val = cxt->sctlr = read_sysreg_el1(SYS_SCTLR);
+			if (!(val & SCTLR_ELx_M)) {
+				val |= SCTLR_ELx_M;
+				write_sysreg_el1(val, SYS_SCTLR);
+				isb();
+			}
+		} else {
+			/* The host S1 MMU is always enabled. */
+			cxt->sctlr = SCTLR_ELx_M;
+		}
 	}
 
 	/*
@@ -39,20 +75,44 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 * ensuring that we always have an ISB, but not two ISBs back
 	 * to back.
 	 */
-	__load_stage2(mmu, kern_hyp_va(mmu->arch));
+	if (vcpu)
+		__load_host_stage2();
+	else
+		__load_stage2(mmu, kern_hyp_va(mmu->arch));
+
 	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
+static void exit_vmid_context(struct tlb_inv_context *cxt)
 {
-	__load_host_stage2();
+	struct kvm_s2_mmu *mmu = cxt->mmu;
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_vcpu *vcpu;
+
+	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	vcpu = host_ctxt->__hyp_running_vcpu;
+
+	if (!mmu)
+		return;
+
+	if (vcpu)
+		__load_stage2(mmu, kern_hyp_va(mmu->arch));
+	else
+		__load_host_stage2();
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/* Ensure write of the host VMID */
+		/* Ensure write of the old VMID */
 		isb();
-		/* Restore the host's TCR_EL1 */
+
+		if (!(cxt->sctlr & SCTLR_ELx_M)) {
+			write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
+			isb();
+		}
+
 		write_sysreg_el1(cxt->tcr, SYS_TCR);
 	}
+
+	cxt->mmu = NULL;
 }
 
 void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
@@ -63,7 +123,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	enter_vmid_context(mmu, &cxt);
 
 	/*
 	 * We could do so much better if we had the VA as well.
@@ -106,7 +166,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	if (icache_is_vpipt())
 		icache_inval_all_pou();
 
-	__tlb_switch_to_host(&cxt);
+	exit_vmid_context(&cxt);
 }
 
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
@@ -116,13 +176,13 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	enter_vmid_context(mmu, &cxt);
 
 	__tlbi(vmalls12e1is);
 	dsb(ish);
 	isb();
 
-	__tlb_switch_to_host(&cxt);
+	exit_vmid_context(&cxt);
 }
 
 void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
@@ -130,14 +190,14 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	struct tlb_inv_context cxt;
 
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	enter_vmid_context(mmu, &cxt);
 
 	__tlbi(vmalle1);
 	asm volatile("ic iallu");
 	dsb(nsh);
 	isb();
 
-	__tlb_switch_to_host(&cxt);
+	exit_vmid_context(&cxt);
 }
 
 void __kvm_flush_vm_context(void)
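The point of the early-return path is that when the target MMU already
matches the running context, cxt->mmu stays NULL and exit_vmid_context()
does nothing, so existing host-originated callers pay almost nothing. A
sketch of the kind of guest-context caller this enables (hypothetical
helper, not part of this patch): invalidating a host stage-2 mapping
after an unshare while a guest vCPU is loaded:

	static void invalidate_host_ipa_sketch(phys_addr_t ipa)
	{
		struct tlb_inv_context cxt;

		/* The guest VMID is live; switch VTTBR to the host. */
		enter_vmid_context(&host_kvm.arch.mmu, &cxt);

		__tlbi(ipas2e1is, ipa >> 12);
		dsb(ish);
		__tlbi(vmalle1is);
		dsb(ish);
		isb();

		/* Restore the guest context saved in cxt. */
		exit_vmid_context(&cxt);
	}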
From patchwork Thu May 19 13:41:58 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855206
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 83/89] KVM: arm64: Avoid BBM when changing only s/w bits in Stage-2 PTE
Date: Thu, 19 May 2022 14:41:58 +0100
Message-Id: <20220519134204.5379-84-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

Break-before-make (BBM) can be expensive, as transitioning via an
invalid mapping (i.e. the "break" step) requires the completion of TLB
invalidation and can also cause other agents to fault concurrently on
the invalid mapping.

Since BBM is not required when changing only the software bits of a PTE,
avoid the sequence in this case and just update the PTE directly.
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/pgtable.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 2069e6833831..756bbb15c1f3 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -744,6 +744,13 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 		if (!stage2_pte_needs_update(old, new))
 			return -EAGAIN;
 
+		/*
+		 * If we're only changing software bits, then we don't need
+		 * to do anything else.
+		 */
+		if (!((old ^ new) & ~KVM_PTE_LEAF_ATTR_HI_SW))
+			goto out_set_pte;
+
 		stage2_put_pte(ptep, data->mmu, addr, level, mm_ops);
 	}
 
@@ -754,9 +761,11 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	if (mm_ops->icache_inval_pou && stage2_pte_executable(new))
 		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
 
-	smp_store_release(ptep, new);
 	if (stage2_pte_is_counted(new))
 		mm_ops->get_page(ptep);
+
+out_set_pte:
+	smp_store_release(ptep, new);
 	if (kvm_phys_is_valid(phys))
 		data->phys += granule;
 	return 0;
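The safety argument is architectural: bits [58:55] of a stage-2 leaf
descriptor are reserved for software use, so flipping only those bits can
never change anything the TLB may have cached. A minimal sketch of the
decision using the mask from the patch (the helper name is made up for
illustration):

	/*
	 * BBM is only needed when the update touches bits outside
	 * KVM_PTE_LEAF_ATTR_HI_SW (leaf descriptor bits [58:55],
	 * reserved for software use by the architecture).
	 */
	static bool stage2_pte_needs_bbm(kvm_pte_t old, kvm_pte_t new)
	{
		return !!((old ^ new) & ~KVM_PTE_LEAF_ATTR_HI_SW);
	}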
From patchwork Thu May 19 13:41:59 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855207
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 84/89] KVM: arm64: Extend memory sharing to allow guest-to-host transitions
Date: Thu, 19 May 2022 14:41:59 +0100
Message-Id: <20220519134204.5379-85-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

A guest that can only operate on private memory is pretty useless, as it
has no way to share buffers with the host for things like virtio.

Extend our memory protection mechanisms to support the sharing and
unsharing of guest pages from the guest to the host. For now, this
functionality is unused but will later be exposed to the guest via
hypercalls.
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   2 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 232 ++++++++++++++++++
 2 files changed, 234 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index b01b5cdb38de..e0bbb1726fa3 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -70,6 +70,8 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
 int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
+int __pkvm_guest_share_host(struct kvm_vcpu *vcpu, u64 ipa);
+int __pkvm_guest_unshare_host(struct kvm_vcpu *vcpu, u64 ipa);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 8459dc33e460..d839bb573b49 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -780,11 +780,41 @@ static int __host_ack_transition(u64 addr, const struct pkvm_mem_transition *tx,
 	return __host_check_page_state_range(addr, size, state);
 }
 
+static int host_ack_share(u64 addr, const struct pkvm_mem_transition *tx,
+			  enum kvm_pgtable_prot perms)
+{
+	if (perms != PKVM_HOST_MEM_PROT)
+		return -EPERM;
+
+	return __host_ack_transition(addr, tx, PKVM_NOPAGE);
+}
+
 static int host_ack_donation(u64 addr, const struct pkvm_mem_transition *tx)
 {
 	return __host_ack_transition(addr, tx, PKVM_NOPAGE);
 }
 
+static int host_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	return __host_ack_transition(addr, tx, PKVM_PAGE_SHARED_BORROWED);
+}
+
+static int host_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
+			       enum kvm_pgtable_prot perms)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return __host_set_page_state_range(addr, size, PKVM_PAGE_SHARED_BORROWED);
+}
+
+static int host_complete_unshare(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u8 owner_id = tx->initiator.id;
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return host_stage2_set_owner_locked(addr, size, owner_id);
+}
+
 static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
@@ -970,6 +1000,120 @@ static int guest_complete_donation(u64 addr, const struct pkvm_mem_transition *tx)
 				 prot, &vcpu->arch.pkvm_memcache);
 }
 
+static int __guest_get_completer_addr(u64 *completer_addr, phys_addr_t phys,
+				      const struct pkvm_mem_transition *tx)
+{
+	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		*completer_addr = phys;
+		break;
+	case PKVM_ID_HYP:
+		*completer_addr = (u64)__hyp_va(phys);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __guest_request_page_transition(u64 *completer_addr,
+					   const struct pkvm_mem_transition *tx,
+					   enum pkvm_page_state desired)
+{
+	struct kvm_vcpu *vcpu = tx->initiator.guest.vcpu;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	enum pkvm_page_state state;
+	phys_addr_t phys;
+	kvm_pte_t pte;
+	u32 level;
+	int ret;
+
+	if (tx->nr_pages != 1)
+		return -E2BIG;
+
+	ret = kvm_pgtable_get_leaf(&vm->pgt, tx->initiator.addr, &pte, &level);
+	if (ret)
+		return ret;
+
+	state = guest_get_page_state(pte);
+	if (state == PKVM_NOPAGE)
+		return -EFAULT;
+
+	if (state != desired)
+		return -EPERM;
+
+	/*
+	 * We only deal with page granular mappings in the guest for now as
+	 * the pgtable code relies on being able to recreate page mappings
+	 * lazily after zapping a block mapping, which doesn't work once the
+	 * pages have been donated.
+	 */
+	if (level != KVM_PGTABLE_MAX_LEVELS - 1)
+		return -EINVAL;
+
+	phys = kvm_pte_to_phys(pte);
+	if (!addr_is_allowed_memory(phys))
+		return -EINVAL;
+
+	return __guest_get_completer_addr(completer_addr, phys, tx);
+}
+
+static int guest_request_share(u64 *completer_addr,
+			       const struct pkvm_mem_transition *tx)
+{
+	return __guest_request_page_transition(completer_addr, tx,
+					       PKVM_PAGE_OWNED);
+}
+
+static int guest_request_unshare(u64 *completer_addr,
+				 const struct pkvm_mem_transition *tx)
+{
+	return __guest_request_page_transition(completer_addr, tx,
+					       PKVM_PAGE_SHARED_OWNED);
+}
+
+static int __guest_initiate_page_transition(u64 *completer_addr,
+					    const struct pkvm_mem_transition *tx,
+					    enum pkvm_page_state state)
+{
+	struct kvm_vcpu *vcpu = tx->initiator.guest.vcpu;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	struct kvm_hyp_memcache *mc = &vcpu->arch.pkvm_memcache;
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+	enum kvm_pgtable_prot prot;
+	phys_addr_t phys;
+	kvm_pte_t pte;
+	int ret;
+
+	ret = kvm_pgtable_get_leaf(&vm->pgt, addr, &pte, NULL);
+	if (ret)
+		return ret;
+
+	phys = kvm_pte_to_phys(pte);
+	prot = pkvm_mkstate(kvm_pgtable_stage2_pte_prot(pte), state);
+	ret = kvm_pgtable_stage2_map(&vm->pgt, addr, size, phys, prot, mc);
+	if (ret)
+		return ret;
+
+	return __guest_get_completer_addr(completer_addr, phys, tx);
+}
+
+static int guest_initiate_share(u64 *completer_addr,
+				const struct pkvm_mem_transition *tx)
+{
+	return __guest_initiate_page_transition(completer_addr, tx,
+						PKVM_PAGE_SHARED_OWNED);
+}
+
+static int guest_initiate_unshare(u64 *completer_addr,
+				  const struct pkvm_mem_transition *tx)
+{
+	return __guest_initiate_page_transition(completer_addr, tx,
+						PKVM_PAGE_OWNED);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -980,6 +1124,9 @@ static int check_share(struct pkvm_mem_share *share)
 	case PKVM_ID_HOST:
 		ret = host_request_owned_transition(&completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_request_share(&completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -988,6 +1135,9 @@ static int check_share(struct pkvm_mem_share *share)
 		return ret;
 
 	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_ack_share(completer_addr, tx, share->completer_prot);
+		break;
 	case PKVM_ID_HYP:
 		ret = hyp_ack_share(completer_addr, tx, share->completer_prot);
 		break;
@@ -1011,6 +1161,9 @@ static int __do_share(struct pkvm_mem_share *share)
 	case PKVM_ID_HOST:
 		ret = host_initiate_share(&completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_initiate_share(&completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1019,6 +1172,9 @@ static int __do_share(struct pkvm_mem_share *share)
 		return ret;
 
 	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_complete_share(completer_addr, tx, share->completer_prot);
+		break;
 	case PKVM_ID_HYP:
 		ret = hyp_complete_share(completer_addr, tx, share->completer_prot);
 		break;
@@ -1062,6 +1218,9 @@ static int check_unshare(struct pkvm_mem_share *share)
 	case PKVM_ID_HOST:
 		ret = host_request_unshare(&completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_request_unshare(&completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1070,6 +1229,9 @@ static int check_unshare(struct pkvm_mem_share *share)
 		return ret;
 
 	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_ack_unshare(completer_addr, tx);
+		break;
 	case PKVM_ID_HYP:
 		ret = hyp_ack_unshare(completer_addr, tx);
 		break;
@@ -1090,6 +1252,9 @@ static int __do_unshare(struct pkvm_mem_share *share)
 	case PKVM_ID_HOST:
 		ret = host_initiate_unshare(&completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_initiate_unshare(&completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1098,6 +1263,9 @@ static int __do_unshare(struct pkvm_mem_share *share)
 		return ret;
 
 	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_complete_unshare(completer_addr, tx);
+		break;
 	case PKVM_ID_HYP:
 		ret = hyp_complete_unshare(completer_addr, tx);
 		break;
@@ -1255,6 +1423,70 @@ int __pkvm_host_share_hyp(u64 pfn)
 	return ret;
 }
 
+int __pkvm_guest_share_host(struct kvm_vcpu *vcpu, u64 ipa)
+{
+	int ret;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_GUEST,
+				.addr	= ipa,
+				.guest	= {
+					.vcpu = vcpu,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HOST,
+			},
+		},
+		.completer_prot	= PKVM_HOST_MEM_PROT,
+	};
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = do_share(&share);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
+
+int __pkvm_guest_unshare_host(struct kvm_vcpu *vcpu, u64 ipa)
+{
+	int ret;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_GUEST,
+				.addr	= ipa,
+				.guest	= {
+					.vcpu = vcpu,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HOST,
+			},
+		},
+		.completer_prot	= PKVM_HOST_MEM_PROT,
+	};
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = do_unshare(&share);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
+
 int __pkvm_host_unshare_hyp(u64 pfn)
 {
 	int ret;
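For context, the new guest_* callbacks slot into the same two-phase
structure used by the existing host- and hyp-initiated transitions: a
check pass that validates the page state at both ends, followed by a
commit pass that is not expected to fail. The pre-existing glue
(unchanged by this patch) looks roughly like:

	static int do_share(struct pkvm_mem_share *share)
	{
		int ret;

		ret = check_share(share);	/* validate both ends */
		if (ret)
			return ret;

		/* commit the PTE updates; failure here is a bug */
		return WARN_ON(__do_share(share));
	}

with do_unshare() following the same pattern around check_unshare() and
__do_unshare(). A failed check (e.g. -EPERM for a page that isn't in the
expected state) therefore aborts the transition before any page table
has been touched.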
From patchwork Thu May 19 13:42:00 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855208
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 85/89] KVM: arm64: Document the KVM/arm64-specific calls in hypercalls.rst
Date: Thu, 19 May 2022 14:42:00 +0100
Message-Id: <20220519134204.5379-86-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

KVM/arm64 makes use of the SMCCC "Vendor Specific Hypervisor Service
Call Range" to expose KVM-specific hypercalls to guests in a
discoverable and extensible fashion.

Document the existence of this interface and the discovery hypercall.

Signed-off-by: Will Deacon
---
 Documentation/virt/kvm/arm/hypercalls.rst | 46 +++++++++++++++++++++++
 Documentation/virt/kvm/arm/index.rst      |  1 +
 2 files changed, 47 insertions(+)
 create mode 100644 Documentation/virt/kvm/arm/hypercalls.rst

diff --git a/Documentation/virt/kvm/arm/hypercalls.rst b/Documentation/virt/kvm/arm/hypercalls.rst
new file mode 100644
index 000000000000..17be111f493f
--- /dev/null
+++ b/Documentation/virt/kvm/arm/hypercalls.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================================
+KVM/arm64-specific hypercalls exposed to guests
+===============================================
+
+This file documents the KVM/arm64-specific hypercalls which may be
+exposed by KVM/arm64 to guest operating systems.
+These hypercalls are issued using the HVC instruction according to
+version 1.1 of the Arm SMC Calling Convention (DEN0028/C):
+
+https://developer.arm.com/docs/den0028/c
+
+All KVM/arm64-specific hypercalls are allocated within the "Vendor
+Specific Hypervisor Service Call" range with a UID of
+``28b46fb6-2ec5-11e9-a9ca-4b564d003a74``. This UID should be queried by the
+guest using the standard "Call UID" function for the service range in
+order to determine that the KVM/arm64-specific hypercalls are available.
+
+``ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID``
+---------------------------------------------
+
+Provides a discovery mechanism for other KVM/arm64 hypercalls.
+
++---------------------+---------------------------------------------------------+
+| Presence:           | Mandatory for the KVM/arm64 UID                         |
++---------------------+---------------------------------------------------------+
+| Calling convention: | HVC32                                                   |
++---------------------+----------+----------------------------------------------+
+| Function ID:        | (uint32) | 0x86000000                                   |
++---------------------+----------+----------------------------------------------+
+| Arguments:          | None                                                    |
++---------------------+----------+----+-----------------------------------------+
+| Return Values:      | (uint32) | R0 | Bitmap of available function numbers    |
+|                     |          |    | 0-31                                    |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint32) | R1 | Bitmap of available function numbers    |
+|                     |          |    | 32-63                                   |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint32) | R2 | Bitmap of available function numbers    |
+|                     |          |    | 64-95                                   |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint32) | R3 | Bitmap of available function numbers    |
+|                     |          |    | 96-127                                  |
++---------------------+----------+----+-----------------------------------------+
+
+``ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID``
+----------------------------------------
+
+See ptp_kvm.rst
diff --git a/Documentation/virt/kvm/arm/index.rst b/Documentation/virt/kvm/arm/index.rst
index 78a9b670aafe..b4067da3fcb6 100644
--- a/Documentation/virt/kvm/arm/index.rst
+++ b/Documentation/virt/kvm/arm/index.rst
@@ -8,6 +8,7 @@ ARM
    :maxdepth: 2
 
    hyp-abi
+   hypercalls
    psci
    pvtime
    ptp_kvm
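As the documentation notes, a guest is expected to probe for the service
before using it. A sketch of that probe, assuming the constants from
include/linux/arm-smccc.h (this mirrors what the kernel's own
kvm_init_hyp_services() does at boot; the helper name is made up):

	#include <linux/arm-smccc.h>

	static bool kvm_hyp_calls_available(void)
	{
		struct arm_smccc_res res;

		/* Match the KVM UID via the standard "Call UID" function. */
		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res);
		if (res.a0 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 ||
		    res.a1 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 ||
		    res.a2 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 ||
		    res.a3 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3)
			return false;

		/* r0-r3 hold bitmaps of available function numbers 0-127. */
		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID, &res);
		return res.a0 & BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
	}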
From patchwork Thu May 19 13:42:01 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855209
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 86/89] KVM: arm64: Reformat/beautify PTP hypercall documentation
Date: Thu, 19 May 2022 14:42:01 +0100
Message-Id: <20220519134204.5379-87-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

The PTP hypercall documentation doesn't produce the best-looking table
when formatted in HTML, as all of the return value definitions end up on
the same line.

Reformat the PTP hypercall documentation to follow the formatting used
by hypercalls.rst.

Signed-off-by: Will Deacon
---
 Documentation/virt/kvm/arm/ptp_kvm.rst | 38 ++++++++++++++++----------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/Documentation/virt/kvm/arm/ptp_kvm.rst b/Documentation/virt/kvm/arm/ptp_kvm.rst
index aecdc80ddcd8..7c0960970a0e 100644
--- a/Documentation/virt/kvm/arm/ptp_kvm.rst
+++ b/Documentation/virt/kvm/arm/ptp_kvm.rst
@@ -7,19 +7,29 @@
 PTP_KVM is used for high precision time sync between host and guests.
 It relies on transferring the wall clock and counter value from the
 host to the guest using a KVM-specific hypercall.
-* ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID: 0x86000001
+``ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID``
+----------------------------------------
 
-This hypercall uses the SMC32/HVC32 calling convention:
+Retrieve current time information for the specific counter. There are no
+endianness restrictions.
 
-ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID
-    ============== ======== =====================================
-    Function ID:   (uint32) 0x86000001
-    Arguments:     (uint32) KVM_PTP_VIRT_COUNTER(0)
-                            KVM_PTP_PHYS_COUNTER(1)
-    Return Values: (int32)  NOT_SUPPORTED(-1) on error, or
-                   (uint32) Upper 32 bits of wall clock time (r0)
-                   (uint32) Lower 32 bits of wall clock time (r1)
-                   (uint32) Upper 32 bits of counter (r2)
-                   (uint32) Lower 32 bits of counter (r3)
-    Endianness:    No Restrictions.
-    ============== ======== =====================================
++---------------------+-------------------------------------------------------+
+| Presence:           | Optional                                              |
++---------------------+-------------------------------------------------------+
+| Calling convention: | HVC32                                                 |
++---------------------+----------+--------------------------------------------+
+| Function ID:        | (uint32) | 0x86000001                                 |
++---------------------+----------+----+---------------------------------------+
+| Arguments:          | (uint32) | R1 | ``KVM_PTP_VIRT_COUNTER (0)``          |
+|                     |          |    +---------------------------------------+
+|                     |          |    | ``KVM_PTP_PHYS_COUNTER (1)``          |
++---------------------+----------+----+---------------------------------------+
+| Return Values:      | (int32)  | R0 | ``NOT_SUPPORTED (-1)`` on error, else |
+|                     |          |    | upper 32 bits of wall clock time      |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R1 | Lower 32 bits of wall clock time      |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R2 | Upper 32 bits of counter              |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R3 | Lower 32 bits of counter              |
++---------------------+----------+----+---------------------------------------+
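For completeness, a sketch of a guest-side caller matching the new table
(hypothetical helper; the in-tree consumer is drivers/ptp/ptp_kvm_arm.c):

	static int ptp_hvc_read_sketch(u64 *wallclock_ns, u64 *counter)
	{
		struct arm_smccc_res res;

		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID,
				     KVM_PTP_VIRT_COUNTER, &res);
		if ((long)res.a0 < 0)
			return -EOPNOTSUPP;	/* NOT_SUPPORTED (-1) */

		/* Each 64-bit value comes back as two 32-bit halves. */
		*wallclock_ns = ((u64)res.a0 << 32) | (u32)res.a1;
		*counter      = ((u64)res.a2 << 32) | (u32)res.a3;
		return 0;
	}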
From patchwork Thu May 19 13:42:02 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855210
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 87/89] KVM: arm64: Expose memory sharing hypercalls to protected guests
Date: Thu, 19 May 2022 14:42:02 +0100
Message-Id: <20220519134204.5379-88-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>

Extend our KVM "vendor" hypercalls to expose three new hypercalls to
protected guests for the purpose of opening and closing shared memory
windows with the host:

  MEMINFO:     Query the stage-2 page size (i.e. the minimum granule at
               which memory can be shared)

  MEM_SHARE:   Share a page RWX with the host, faulting the page in if
               necessary.

  MEM_UNSHARE: Unshare a page with the host. Subsequent host accesses
               to the page will result in a fault being injected by the
               hypervisor.

Signed-off-by: Will Deacon
---
 Documentation/virt/kvm/arm/hypercalls.rst |  72 ++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c        |  24 ++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c            | 109 +++++++++++++++++++++-
 include/linux/arm-smccc.h                 |  21 +++++
 4 files changed, 223 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/arm/hypercalls.rst b/Documentation/virt/kvm/arm/hypercalls.rst
index 17be111f493f..d96c9fd7d8c5 100644
--- a/Documentation/virt/kvm/arm/hypercalls.rst
+++ b/Documentation/virt/kvm/arm/hypercalls.rst
@@ -44,3 +44,75 @@ Provides a discovery mechanism for other KVM/arm64 hypercalls.
 ----------------------------------------
 
 See ptp_kvm.rst
+
+``ARM_SMCCC_KVM_FUNC_HYP_MEMINFO``
+----------------------------------
+
+Query the memory protection parameters for a protected virtual machine.
+
++---------------------+---------------------------------------------------------+
+| Presence:           | Optional; protected guests only.                        |
++---------------------+---------------------------------------------------------+
+| Calling convention: | HVC64                                                   |
++---------------------+----------+----------------------------------------------+
+| Function ID:        | (uint32) | 0xC6000002                                   |
++---------------------+----------+----+-----------------------------------------+
+| Arguments:          | (uint64) | R1 | Reserved / Must be zero                 |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint64) | R2 | Reserved / Must be zero                 |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint64) | R3 | Reserved / Must be zero                 |
++---------------------+----------+----+-----------------------------------------+
+| Return Values:      | (int64)  | R0 | ``INVALID_PARAMETER (-3)`` on error,    |
+|                     |          |    | else memory protection granule in bytes |
++---------------------+----------+----+-----------------------------------------+
+
+``ARM_SMCCC_KVM_FUNC_MEM_SHARE``
+--------------------------------
+
+Share a region of memory with the KVM host, granting it read, write and
+execute permissions. The size of the region is equal to the memory
+protection granule advertised by ``ARM_SMCCC_KVM_FUNC_HYP_MEMINFO``.
+
++---------------------+---------------------------------------------------------+
+| Presence:           | Optional; protected guests only.                        |
++---------------------+---------------------------------------------------------+
+| Calling convention: | HVC64                                                   |
++---------------------+----------+----------------------------------------------+
+| Function ID:        | (uint32) | 0xC6000003                                   |
++---------------------+----------+----+-----------------------------------------+
+| Arguments:          | (uint64) | R1 | Base IPA of memory region to share      |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint64) | R2 | Reserved / Must be zero                 |
+|                     +----------+----+-----------------------------------------+
+|                     | (uint64) | R3 | Reserved / Must be zero                 |
++---------------------+----------+----+-----------------------------------------+
+| Return Values:      | (int64)  | R0 | ``SUCCESS (0)``                         |
+|                     |          |    +-----------------------------------------+
+|                     |          |    | ``INVALID_PARAMETER (-3)``              |
++---------------------+----------+----+-----------------------------------------+
+
+``ARM_SMCCC_KVM_FUNC_MEM_UNSHARE``
+----------------------------------
+
+Revoke access permission from the KVM host to a memory region previously
+shared with ``ARM_SMCCC_KVM_FUNC_MEM_SHARE``. The size of the region is
+equal to the memory protection granule advertised by
+``ARM_SMCCC_KVM_FUNC_HYP_MEMINFO``.
| ++---------------------+-------------------------------------------------------------+ +| Calling convention: | HVC64 | ++---------------------+----------+--------------------------------------------------+ +| Function ID: | (uint32) | 0xC6000004 | ++---------------------+----------+----+---------------------------------------------+ +| Arguments: | (uint64) | R1 | Base IPA of memory region to unshare | +| +----------+----+---------------------------------------------+ +| | (uint64) | R2 | Reserved / Must be zero | +| +----------+----+---------------------------------------------+ +| | (uint64) | R3 | Reserved / Must be zero | ++---------------------+----------+----+---------------------------------------------+ +| Return Values: | (int64) | R0 | ``SUCCESS (0)`` | +| | | +---------------------------------------------+ +| | | | ``INVALID_PARAMETER (-3)`` | ++---------------------+----------+----+---------------------------------------------+ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 694e0071b13e..be000d457e44 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -45,7 +45,7 @@ static void handle_pvm_entry_wfx(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *sh KVM_ARM64_INCREMENT_PC; } -static void handle_pvm_entry_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) +static void handle_pvm_entry_psci(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) { u32 psci_fn = smccc_get_function(shadow_vcpu); u64 ret = READ_ONCE(host_vcpu->arch.ctxt.regs.regs[0]); @@ -81,6 +81,22 @@ static void handle_pvm_entry_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu * vcpu_set_reg(shadow_vcpu, 0, ret); } +static void handle_pvm_entry_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) +{ + u32 fn = smccc_get_function(shadow_vcpu); + + switch (fn) { + case ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID: + fallthrough; + case ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID: + vcpu_set_reg(shadow_vcpu, 0, SMCCC_RET_SUCCESS); + break; + default: + handle_pvm_entry_psci(host_vcpu, shadow_vcpu); + break; + } +} + static void handle_pvm_entry_sys64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *shadow_vcpu) { unsigned long host_flags; @@ -257,6 +273,12 @@ static void handle_pvm_exit_hvc64(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *s n = 1; break; + case ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID: + fallthrough; + case ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID: + n = 4; + break; + case PSCI_1_1_FN_SYSTEM_RESET2: case PSCI_1_1_FN64_SYSTEM_RESET2: n = 3; diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index d44f524c936b..e33ba9067d7b 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -1118,6 +1118,82 @@ static bool pkvm_handle_psci(struct kvm_vcpu *vcpu) return pvm_psci_not_supported(vcpu); } +static u64 __pkvm_memshare_page_req(struct kvm_vcpu *vcpu, u64 ipa) +{ + u64 elr; + + /* Fake up a data abort (Level 3 translation fault on write) */ + vcpu->arch.fault.esr_el2 = (u32)ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT | + ESR_ELx_WNR | ESR_ELx_FSC_FAULT | + FIELD_PREP(ESR_ELx_FSC_LEVEL, 3); + + /* Shuffle the IPA around into the HPFAR */ + vcpu->arch.fault.hpfar_el2 = (ipa >> 8) & HPFAR_MASK; + + /* This is a virtual address. 0's good. Let's go with 0. 
+	vcpu->arch.fault.far_el2 = 0;
+
+	/* Rewind the ELR so we return to the HVC once the IPA is mapped */
+	elr = read_sysreg(elr_el2);
+	elr -= 4;
+	write_sysreg(elr, elr_el2);
+
+	return ARM_EXCEPTION_TRAP;
+}
+
+static bool pkvm_memshare_call(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	u64 ipa = smccc_get_arg1(vcpu);
+	u64 arg2 = smccc_get_arg2(vcpu);
+	u64 arg3 = smccc_get_arg3(vcpu);
+	int err;
+
+	if (arg2 || arg3)
+		goto out_guest_err;
+
+	err = __pkvm_guest_share_host(vcpu, ipa);
+	switch (err) {
+	case 0:
+		/* Success! Now tell the host. */
+		goto out_host;
+	case -EFAULT:
+		/*
+		 * Convert the exception into a data abort so that the page
+		 * being shared is mapped into the guest next time.
+		 */
+		*exit_code = __pkvm_memshare_page_req(vcpu, ipa);
+		goto out_host;
+	}
+
+out_guest_err:
+	smccc_set_retval(vcpu, SMCCC_RET_INVALID_PARAMETER, 0, 0, 0);
+	return true;
+
+out_host:
+	return false;
+}
+
+static bool pkvm_memunshare_call(struct kvm_vcpu *vcpu)
+{
+	u64 ipa = smccc_get_arg1(vcpu);
+	u64 arg2 = smccc_get_arg2(vcpu);
+	u64 arg3 = smccc_get_arg3(vcpu);
+	int err;
+
+	if (arg2 || arg3)
+		goto out_guest_err;
+
+	err = __pkvm_guest_unshare_host(vcpu, ipa);
+	if (err)
+		goto out_guest_err;
+
+	return false;
+
+out_guest_err:
+	smccc_set_retval(vcpu, SMCCC_RET_INVALID_PARAMETER, 0, 0, 0);
+	return true;
+}
+
 /*
  * Handler for protected VM HVC calls.
  *
@@ -1127,13 +1203,42 @@ static bool pkvm_handle_psci(struct kvm_vcpu *vcpu)
 bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	u32 fn = smccc_get_function(vcpu);
+	u64 val[4] = { SMCCC_RET_NOT_SUPPORTED };
 
 	switch (fn) {
 	case ARM_SMCCC_VERSION_FUNC_ID:
 		/* Nothing to be handled by the host. Go back to the guest. */
-		smccc_set_retval(vcpu, ARM_SMCCC_VERSION_1_1, 0, 0, 0);
-		return true;
+		val[0] = ARM_SMCCC_VERSION_1_1;
+		break;
+	case ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID:
+		val[0] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0;
+		val[1] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1;
+		val[2] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2;
+		val[3] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3;
+		break;
+	case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
+		val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
+		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_HYP_MEMINFO);
+		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MEM_SHARE);
+		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MEM_UNSHARE);
+		break;
+	case ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID:
+		if (smccc_get_arg1(vcpu) ||
+		    smccc_get_arg2(vcpu) ||
+		    smccc_get_arg3(vcpu)) {
+			val[0] = SMCCC_RET_INVALID_PARAMETER;
+		} else {
+			val[0] = PAGE_SIZE;
+		}
+		break;
+	case ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID:
+		return pkvm_memshare_call(vcpu, exit_code);
+	case ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID:
+		return pkvm_memunshare_call(vcpu);
 	default:
 		return pkvm_handle_psci(vcpu);
 	}
+
+	smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]);
+	return true;
 }
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 220c8c60e021..25c576a910df 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -112,6 +112,9 @@
 /* KVM "vendor specific" services */
 #define ARM_SMCCC_KVM_FUNC_FEATURES		0
 #define ARM_SMCCC_KVM_FUNC_PTP			1
+#define ARM_SMCCC_KVM_FUNC_HYP_MEMINFO		2
+#define ARM_SMCCC_KVM_FUNC_MEM_SHARE		3
+#define ARM_SMCCC_KVM_FUNC_MEM_UNSHARE		4
 #define ARM_SMCCC_KVM_FUNC_FEATURES_2		127
 #define ARM_SMCCC_KVM_NUM_FUNCS			128
 
@@ -134,6 +137,24 @@
 			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
 			   ARM_SMCCC_KVM_FUNC_PTP)
 
+#define ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_HYP_MEMINFO)
+
+#define ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_MEM_SHARE)
+
+#define ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_MEM_UNSHARE)
+
 /* ptp_kvm counter type ID */
 #define KVM_PTP_VIRT_COUNTER			0
 #define KVM_PTP_PHYS_COUNTER			1
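Taken together, a protected guest would use these calls roughly as
follows; a minimal sketch (hypothetical helper, return codes per the
tables above), e.g. before handing a virtio buffer page to the host:

	#include <linux/arm-smccc.h>

	static int guest_share_page_sketch(phys_addr_t ipa)
	{
		struct arm_smccc_res res;

		/* The sharing granule must match our page size. */
		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID,
				     0, 0, 0, &res);
		if (res.a0 != PAGE_SIZE)
			return -ENXIO;

		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID,
				     ipa, 0, 0, &res);
		return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EPERM;
	}

Note that MEM_SHARE may transparently fault the page into the guest
first: the -EFAULT path above rewinds the ELR, reports a faked stage-2
data abort to the host so that the page gets mapped, and the guest then
re-executes the HVC.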
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 220c8c60e021..25c576a910df 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -112,6 +112,9 @@
 /* KVM "vendor specific" services */
 #define ARM_SMCCC_KVM_FUNC_FEATURES		0
 #define ARM_SMCCC_KVM_FUNC_PTP			1
+#define ARM_SMCCC_KVM_FUNC_HYP_MEMINFO		2
+#define ARM_SMCCC_KVM_FUNC_MEM_SHARE		3
+#define ARM_SMCCC_KVM_FUNC_MEM_UNSHARE		4
 #define ARM_SMCCC_KVM_FUNC_FEATURES_2		127
 #define ARM_SMCCC_KVM_NUM_FUNCS			128
 
@@ -134,6 +137,24 @@
 			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
 			   ARM_SMCCC_KVM_FUNC_PTP)
 
+#define ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_HYP_MEMINFO)
+
+#define ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_MEM_SHARE)
+
+#define ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_MEM_UNSHARE)
+
 /* ptp_kvm counter type ID */
 #define KVM_PTP_VIRT_COUNTER			0
 #define KVM_PTP_PHYS_COUNTER			1
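ARM_SMCCC_CALL_VAL() packs the "fast call" bit (31), the SMC64 calling
convention bit (30), the VENDOR_HYP owner (6 << 24) and the function number
into a single ID, so the three new hypercalls land at 0xC6000002, 0xC6000003
and 0xC6000004, matching the documentation tables earlier in this patch. A
hedged sanity-check sketch, assuming this series is applied:

#include <linux/arm-smccc.h>
#include <linux/build_bug.h>

static inline void pkvm_func_id_sanity_check(void)
{
	/* Compile-time check that the IDs match the documented ABI. */
	BUILD_BUG_ON(ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID != 0xC6000002);
	BUILD_BUG_ON(ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID   != 0xC6000003);
	BUILD_BUG_ON(ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID != 0xC6000004);
}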
From patchwork Thu May 19 13:42:03 2022
X-Patchwork-Id: 12855211
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 88/89] KVM: arm64: Introduce KVM_VM_TYPE_ARM_PROTECTED machine type for PVMs
Date: Thu, 19 May 2022 14:42:03 +0100
Message-Id: <20220519134204.5379-89-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

Introduce a new virtual machine type, KVM_VM_TYPE_ARM_PROTECTED, which
specifies that the guest memory pages are to be unmapped from the host
stage-2 by the hypervisor.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_pkvm.h |  2 +-
 arch/arm64/kvm/arm.c              |  5 ++++-
 arch/arm64/kvm/mmu.c              |  3 ---
 arch/arm64/kvm/pkvm.c             | 10 +++++++++-
 include/uapi/linux/kvm.h          |  6 ++++++
 5 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 062ae2ffbdfb..952e3c3fa32d 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -16,7 +16,7 @@
 
 #define HYP_MEMBLOCK_REGIONS 128
 
-int kvm_init_pvm(struct kvm *kvm);
+int kvm_init_pvm(struct kvm *kvm, unsigned long type);
 int kvm_shadow_create(struct kvm *kvm);
 void kvm_shadow_destroy(struct kvm *kvm);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5a935a9a73..26fd69727c81 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -141,11 +141,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int ret;
 
+	if (type & ~KVM_VM_TYPE_MASK)
+		return -EINVAL;
+
 	ret = kvm_share_hyp(kvm, kvm + 1);
 	if (ret)
 		return ret;
 
-	ret = kvm_init_pvm(kvm);
+	ret = kvm_init_pvm(kvm, type);
 	if (ret)
 		goto err_unshare_kvm;
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 137d4382ed1c..392ff7b2362d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -652,9 +652,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	u64 mmfr0, mmfr1;
 	u32 phys_shift;
 
-	if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
-		return -EINVAL;
-
 	phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
 	if (is_protected_kvm_enabled()) {
 		phys_shift = kvm_ipa_limit;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 67aad91dc3e5..ebf93ff6a77e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -218,8 +218,16 @@ void kvm_shadow_destroy(struct kvm *kvm)
 	}
 }
 
-int kvm_init_pvm(struct kvm *kvm)
+int kvm_init_pvm(struct kvm *kvm, unsigned long type)
 {
 	mutex_init(&kvm->arch.pkvm.shadow_lock);
+
+	if (!(type & KVM_VM_TYPE_ARM_PROTECTED))
+		return 0;
+
+	if (!is_protected_kvm_enabled())
+		return -EINVAL;
+
+	kvm->arch.pkvm.enabled = true;
 	return 0;
 }
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 91a6fe4e02c0..fdb0289cfecc 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -887,6 +887,12 @@ struct kvm_ppc_resize_hpt {
 #define KVM_VM_TYPE_ARM_IPA_SIZE_MASK	0xffULL
 #define KVM_VM_TYPE_ARM_IPA_SIZE(x)		\
 	((x) & KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
+
+#define KVM_VM_TYPE_ARM_PROTECTED	(1UL << 8)
+
+#define KVM_VM_TYPE_MASK	(KVM_VM_TYPE_ARM_IPA_SIZE_MASK | \
+				 KVM_VM_TYPE_ARM_PROTECTED)
+
 /*
  * ioctls for /dev/kvm fds:
  */
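Since the new flag lives in the uapi header, the VMM side of this patch is
small: the protected-VM bit is simply ORed into the machine type identifier
alongside the existing IPA size field, as permitted by KVM_VM_TYPE_MASK. A
minimal userspace sketch, assuming a kernel with this series applied and
booted with kvm-arm.mode=protected:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm, vm;
	unsigned long type;

	kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/* 40-bit IPA space, protected guest. */
	type = KVM_VM_TYPE_ARM_IPA_SIZE(40) | KVM_VM_TYPE_ARM_PROTECTED;

	vm = ioctl(kvm, KVM_CREATE_VM, type);
	if (vm < 0)
		perror("KVM_CREATE_VM");	/* EINVAL unless pKVM is enabled */

	return vm < 0;
}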
From patchwork Thu May 19 13:42:04 2022
X-Patchwork-Id: 12855218
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 89/89] Documentation: KVM: Add some documentation for Protected KVM on arm64
Date: Thu, 19 May 2022 14:42:04 +0100
Message-Id: <20220519134204.5379-90-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
Add some initial documentation for the Protected KVM (pKVM) feature on
arm64, describing the user ABI for creating protected VMs as well as
their limitations.

Signed-off-by: Will Deacon
---
 .../admin-guide/kernel-parameters.txt |  4 +-
 Documentation/virt/kvm/arm/index.rst  |  1 +
 Documentation/virt/kvm/arm/pkvm.rst   | 96 +++++++++++++++++++
 3 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/virt/kvm/arm/pkvm.rst

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 63a764ec7fec..b8841a969f59 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2437,7 +2437,9 @@
 			protected guests.
 
 			protected: nVHE-based mode with support for guests whose
-			state is kept private from the host.
+			state is kept private from the host. See
+			Documentation/virt/kvm/arm/pkvm.rst for more
+			information about this mode of operation.
 
 			Defaults to VHE/nVHE based on hardware support. Setting
 			mode to "protected" will disable kexec and hibernation
diff --git a/Documentation/virt/kvm/arm/index.rst b/Documentation/virt/kvm/arm/index.rst
index b4067da3fcb6..49c388df662a 100644
--- a/Documentation/virt/kvm/arm/index.rst
+++ b/Documentation/virt/kvm/arm/index.rst
@@ -9,6 +9,7 @@ ARM
 
    hyp-abi
    hypercalls
+   pkvm
    psci
    pvtime
    ptp_kvm
diff --git a/Documentation/virt/kvm/arm/pkvm.rst b/Documentation/virt/kvm/arm/pkvm.rst
new file mode 100644
index 000000000000..64f099a5ac2e
--- /dev/null
+++ b/Documentation/virt/kvm/arm/pkvm.rst
@@ -0,0 +1,96 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Protected virtual machines (pKVM)
+=================================
+
+Introduction
+------------
+
+Protected KVM (pKVM) is a KVM/arm64 extension which uses the two-stage
+translation capability of the Armv8 MMU to isolate guest memory from the host
+system. This allows for the creation of a confidential computing environment
+without relying on whizz-bang features in hardware, but still allowing room for
+complementary technologies such as memory encryption and hardware-backed
+attestation.
+
+The major implementation change brought about by pKVM is that the hypervisor
+code running at EL2 is now largely independent of (and isolated from) the rest
+of the host kernel running at EL1, and therefore additional hypercalls are
+introduced to manage manipulation of guest stage-2 page tables, creation of VM
+data structures and reclamation of memory on teardown. An immediate consequence
+of this change is that the host itself runs with an identity mapping enabled
+at stage-2, providing the hypervisor code with a mechanism to restrict host
+access to an arbitrary physical page.
+
+Enabling pKVM
+-------------
+
+The pKVM hypervisor is enabled by booting the host kernel at EL2 with
+"``kvm-arm.mode=protected``" on the command-line. Once enabled, VMs can be spawned
+in either protected or non-protected state, although the hypervisor is still
+responsible for managing most of the VM metadata in either case.
+
+Limitations
+-----------
+
+Enabling pKVM places some significant limitations on KVM guests, regardless of
+whether they are spawned in protected state. It is therefore recommended only
+to enable pKVM if protected VMs are required, with non-protected state acting
+primarily as a debug and development aid.
+
+If you're still keen, then here is an incomplete list of caveats that apply
+to all VMs running under pKVM:
+
+- Guest memory cannot be file-backed (with the exception of shmem/memfd) and is
+  pinned as it is mapped into the guest. This prevents the host from
+  swapping out, migrating, merging or generally doing anything useful with the
+  guest pages. It also requires that the VMM has either ``CAP_IPC_LOCK`` or
+  sufficient ``RLIMIT_MEMLOCK`` to account for this pinned memory.
+
+- GICv2 is not supported and therefore GICv3 hardware is required in order
+  to expose a virtual GICv3 to the guest.
+
+- Read-only memslots are unsupported and therefore dirty logging cannot be
+  enabled.
+
+- Memslot configuration is fixed once a VM has started running, with subsequent
+  move or deletion requests being rejected with ``-EPERM``.
+
+- There are probably many others.
+
+Since the host is unable to tear down the hypervisor when pKVM is enabled,
+hibernation (``CONFIG_HIBERNATION``) and kexec (``CONFIG_KEXEC``) will fail
+with ``-EBUSY``.
+
+If you are not happy with these limitations, then please don't enable pKVM :)
+
+VM creation
+-----------
+
+When pKVM is enabled, protected VMs can be created by specifying the
+``KVM_VM_TYPE_ARM_PROTECTED`` flag in the machine type identifier parameter
+passed to ``KVM_CREATE_VM``.
+
+Protected VMs are instantiated according to a fixed vCPU configuration
+described by the ID register definitions in
+``arch/arm64/include/asm/kvm_pkvm.h``. Only a subset of the architectural
+features that may be available to the host are exposed to the guest and the
+capabilities advertised by ``KVM_CHECK_EXTENSION`` are limited accordingly,
+with the vCPU registers being initialised to their architecturally-defined
+values.
+
+Where not defined by the architecture, the registers of a protected vCPU
+are reset to zero, with the exception of the PC and X0, which can be set
+either by the ``KVM_SET_ONE_REG`` interface or by a call to PSCI ``CPU_ON``.
+
+VM runtime
+----------
+
+By default, memory pages mapped into a protected guest are inaccessible to the
+host and any attempt by the host to access such a page will result in the
+injection of an abort at EL1 by the hypervisor. For accesses originating from
+EL0, the host will then terminate the current task with a ``SIGSEGV``.
+
+pKVM exposes additional hypercalls to protected guests, primarily for the
+purpose of establishing shared-memory regions with the host for communication
+and I/O. These hypercalls are documented in hypercalls.rst.
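To tie the runtime description together, here is a hedged sketch of how a
guest driver might use the share/unshare hypercalls around a communication
buffer. This is illustrative only: pkvm_share_buf() and pkvm_unshare_buf()
are hypothetical names, error handling is minimal, and the function IDs come
from the arm-smccc.h additions earlier in the series.

#include <linux/arm-smccc.h>
#include <linux/errno.h>

static int pkvm_share_buf(phys_addr_t ipa)
{
	struct arm_smccc_res res;

	/* Share one granule at 'ipa' with the host; retried transparently
	 * via the faked data abort if the page is not yet mapped. */
	arm_smccc_1_1_hvc(ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID,
			  ipa, 0, 0, &res);
	return (long)res.a0 ? -EINVAL : 0;
}

static int pkvm_unshare_buf(phys_addr_t ipa)
{
	struct arm_smccc_res res;

	/* Revoke host access once communication is complete. */
	arm_smccc_1_1_hvc(ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID,
			  ipa, 0, 0, &res);
	return (long)res.a0 ? -EINVAL : 0;
}

A guest would typically share the buffer once at probe time, exchange data
with the host through it for the lifetime of the device, and unshare it on
teardown so that the page returns to its default, host-inaccessible state.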