From patchwork Sun Mar 19 08:49:23 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13180276
From: Binbin Wu
To: kvm@vger.kernel.org, seanjc@google.com, pbonzini@redhat.com
Cc: chao.gao@intel.com, robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [PATCH v6 3/7] KVM: x86: Virtualize CR4.LAM_SUP
Date: Sun, 19 Mar 2023 16:49:23 +0800
Message-Id: <20230319084927.29607-4-binbin.wu@linux.intel.com>
In-Reply-To: <20230319084927.29607-1-binbin.wu@linux.intel.com>
References: <20230319084927.29607-1-binbin.wu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Robert Hoo

Allow the guest to set CR4.LAM_SUP (bit 28) if the vCPU supports LAM, and
keep the bit intercepted (as it already is). LAM uses CR4.LAM_SUP to
configure LAM masking on supervisor-mode addresses. To virtualize that,
move CR4.LAM_SUP out of CR4_RESERVED_BITS and make its reservation depend
on whether the vCPU has the LAM feature. CR4.LAM_SUP is allowed to be set
even outside 64-bit mode, but it takes no effect there, since LAM only
applies to 64-bit linear addresses.

Leave the bit intercepted to avoid a vmread every time KVM fetches its
value, with the expectation that the guest won't toggle the bit
frequently.

Hardware is not required to flush the TLB when CR4.LAM_SUP is toggled, so
KVM doesn't need to emulate a TLB flush based on it.

There is no connection to other features or vmx_exec_controls, therefore
no code needs to be added in kvm_set_cr4()/vmx_set_cr4().
Signed-off-by: Robert Hoo
Co-developed-by: Binbin Wu
Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/vmx/vmx.c          | 3 +++
 arch/x86/kvm/x86.h              | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8b38a4cb2e29..742fd84c7997 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -125,7 +125,8 @@
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
+			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
+			  | X86_CR4_LAM_SUP))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bbf60bda877e..66a50224293e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7617,6 +7617,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
 	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
 
+	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
+	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
+
 #undef cr4_fixed1_update
 }
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 9de72586f406..fe32554e0da6 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -475,6 +475,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
 		__reserved_bits |= X86_CR4_VMXE;        \
 	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
 		__reserved_bits |= X86_CR4_PCIDE;       \
+	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
+		__reserved_bits |= X86_CR4_LAM_SUP;     \
 	__reserved_bits;                                \
 })