From patchwork Fri Jun 21 09:37:48 2019
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 11009495
From: Marc Zyngier
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Subject: [PATCH 04/59] KVM: arm64: nv: Introduce nested virtualization VCPU feature
Date: Fri, 21 Jun 2019 10:37:48 +0100
Message-Id: <20190621093843.220980-5-marc.zyngier@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190621093843.220980-1-marc.zyngier@arm.com>
References: <20190621093843.220980-1-marc.zyngier@arm.com>
Cc: Julien Thierry, Andre Przywara, Suzuki K Poulose, Christoffer Dall,
	Dave Martin, James Morse, Jintack Lim

From: Christoffer Dall

Introduce the feature bit and a primitive that checks if the feature is
set, behind a static key check based on cpus_have_const_cap().

Checking nested_virt_in_use() on systems without nested virt enabled
should have negligible overhead.

We don't yet allow userspace to actually set this feature.

Signed-off-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Reviewed-by: Julien Thierry
---
 arch/arm/include/asm/kvm_nested.h   |  9 +++++++++
 arch/arm64/include/asm/kvm_nested.h | 13 +++++++++++++
 arch/arm64/include/uapi/asm/kvm.h   |  1 +
 3 files changed, 23 insertions(+)
 create mode 100644 arch/arm/include/asm/kvm_nested.h
 create mode 100644 arch/arm64/include/asm/kvm_nested.h

diff --git a/arch/arm/include/asm/kvm_nested.h b/arch/arm/include/asm/kvm_nested.h
new file mode 100644
index 000000000000..124ff6445f8f
--- /dev/null
+++ b/arch/arm/include/asm/kvm_nested.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM_KVM_NESTED_H
+#define __ARM_KVM_NESTED_H
+
+#include <linux/kvm_host.h>
+
+static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu) { return false; }
+
+#endif /* __ARM_KVM_NESTED_H */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
new file mode 100644
index 000000000000..8a3d121a0b42
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_NESTED_H
+#define __ARM64_KVM_NESTED_H
+
+#include <linux/kvm_host.h>
+
+static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
+{
+	return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
+		test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
+}
+
+#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d819a3e8b552..563e2a8bae93 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
 #define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_NESTED_VIRT	7 /* Support nested virtualization */
 
 struct kvm_vcpu_init {
 	__u32 target;
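
[Not part of the patch above: a minimal sketch of how a caller might gate
nested-specific handling on the new primitive. The function name
example_handle_nested_trap() is hypothetical and only illustrates the
intended usage pattern; the include path comes from this patch.]

#include <asm/kvm_nested.h>

/*
 * Hypothetical caller, for illustration only: shows how later code in
 * the series could use nested_virt_in_use() to branch into nested-virt
 * specific handling.
 */
static int example_handle_nested_trap(struct kvm_vcpu *vcpu)
{
	if (!nested_virt_in_use(vcpu)) {
		/*
		 * Either the host lacks ARM64_HAS_NESTED_VIRT or the vCPU
		 * was not created with KVM_ARM_VCPU_NESTED_VIRT. In the
		 * first case cpus_have_const_cap() reduces to a patched-out
		 * static branch, so non-nested guests pay almost nothing
		 * for this check.
		 */
		return 0;
	}

	/* Nested-virt specific handling would go here. */
	return 1;
}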