From patchwork Sun Mar 19 08:37:29 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13180260
From: Binbin Wu
To: kvm@vger.kernel.org, seanjc@google.com, pbonzini@redhat.com
Cc: chao.gao@intel.com, robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests PATCH v2 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
Date: Sun, 19 Mar 2023 16:37:29 +0800
Message-Id: <20230319083732.29458-2-binbin.wu@linux.intel.com>
In-Reply-To: <20230319083732.29458-1-binbin.wu@linux.intel.com>
References: <20230319083732.29458-1-binbin.wu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and
CR3.LAM_U57 (bit 61) to be set in the CR3 field.

Change the expected test results when setting CR3.LAM_U48 or
CR3.LAM_U57 in the vmlaunch tests when LAM is supported.
Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 lib/x86/processor.h | 2 ++
 x86/vmx_tests.c     | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 3d58ef7..8373bbe 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -55,6 +55,8 @@
 #define X86_CR0_PG BIT(X86_CR0_PG_BIT)

 #define X86_CR3_PCID_MASK GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U48_BIT (62)

 #define X86_CR4_VME_BIT (0)
 #define X86_CR4_VME BIT(X86_CR4_VME_BIT)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7bba816..1be22ac 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -7000,7 +7000,11 @@ static void test_host_ctl_regs(void)
 		cr3 = cr3_saved | (1ul << i);
 		vmcs_write(HOST_CR3, cr3);
 		report_prefix_pushf("HOST_CR3 %lx", cr3);
-		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+		if (this_cpu_has(X86_FEATURE_LAM) &&
+		    ((i == X86_CR3_LAM_U57_BIT) || (i == X86_CR3_LAM_U48_BIT)))
+			test_vmx_vmlaunch(0);
+		else
+			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
 		report_prefix_pop();
 	}

From patchwork Sun Mar 19 08:37:30 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13180261
From: Binbin Wu
To: kvm@vger.kernel.org, seanjc@google.com, pbonzini@redhat.com
Cc: chao.gao@intel.com, robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests PATCH v2 2/4] x86: Add test case for LAM_SUP
Date: Sun, 19 Mar 2023 16:37:30 +0800
Message-Id: <20230319083732.29458-3-binbin.wu@linux.intel.com>
In-Reply-To: <20230319083732.29458-1-binbin.wu@linux.intel.com>
References: <20230319083732.29458-1-binbin.wu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Robert Hoo

This unit test covers:
1. CR4.LAM_SUP toggling behaves as expected according to LAM status.
2.
Memory accesses (using strcpy() as the test example) with a
   supervisor-mode address containing LAM metadata behave as expected
   per LAM status.
3. MMIO memory accesses with a supervisor-mode address containing LAM
   metadata behave as expected per LAM status.
4. The INVLPG memory operand doesn't contain LAM metadata; if the
   address is in non-canonical form, INVLPG is the same as a NOP
   (no #GP).
5. The INVPCID memory operand (descriptor pointer) can contain LAM
   metadata; however, the address in the descriptor itself must be
   canonical.

In x86/unittests.cfg, add two test cases/guest configurations, with and
without LAM.

LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368,
Chapter 7 LINEAR ADDRESS MASKING (LAM)

Signed-off-by: Robert Hoo
Co-developed-by: Binbin Wu
Signed-off-by: Binbin Wu
---
 lib/x86/processor.h |   3 +
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 296 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 4 files changed, 310 insertions(+)
 create mode 100644 x86/lam.c

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 8373bbe..4bb8cd7 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -107,6 +107,8 @@
 #define X86_CR4_CET BIT(X86_CR4_CET_BIT)
 #define X86_CR4_PKS_BIT (24)
 #define X86_CR4_PKS BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT (28)
+#define X86_CR4_LAM_SUP BIT(X86_CR4_LAM_SUP_BIT)

 #define X86_EFLAGS_CF_BIT (0)
 #define X86_EFLAGS_CF BIT(X86_EFLAGS_CF_BIT)
@@ -250,6 +252,7 @@ static inline bool is_intel(void)
 #define X86_FEATURE_SPEC_CTRL (CPUID(0x7, 0, EDX, 26))
 #define X86_FEATURE_ARCH_CAPABILITIES (CPUID(0x7, 0, EDX, 29))
 #define X86_FEATURE_PKS (CPUID(0x7, 0, ECX, 31))
+#define X86_FEATURE_LAM (CPUID(0x7, 1, EAX, 26))

 /*
  * Extended Leafs, a.k.a.
 AMD defined
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dea..fa11eb3 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)

 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 0000000..a5f4e51
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,296 @@
+/*
+ * Intel LAM_SUP unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo
+ *         Binbin Wu
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define LAM57_BITS 6
+#define LAM48_BITS 15
+#define LAM57_MASK GENMASK_ULL(62, 57)
+#define LAM48_MASK GENMASK_ULL(62, 48)
+
+struct invpcid_desc {
+	u64 pcid : 12;
+	u64 rsv  : 52;
+	u64 addr : 64;
+};
+
+static int get_sup_lam_bits(void)
+{
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		return LAM57_BITS;
+	else
+		return LAM48_BITS;
+}
+
+/* According to LAM mode, set metadata in high bits */
+static u64 set_metadata(u64 src, unsigned long lam)
+{
+	u64 metadata;
+
+	switch (lam) {
+	case LAM57_BITS: /* Set metadata in bits 62:57 */
+		metadata = (NONCANONICAL & ((1UL << LAM57_BITS) - 1)) << 57;
+		metadata |= (src & ~(LAM57_MASK));
+		break;
+	case LAM48_BITS: /* Set metadata in bits 62:48 */
+		metadata = (NONCANONICAL & ((1UL << LAM48_BITS) - 1)) << 48;
+		metadata |= (src & ~(LAM48_MASK));
+		break;
+	default:
+		metadata = src;
+		break;
+	}
+
+	return metadata;
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool lam_enumerated)
+{
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+	if (lam_enumerated)
+		report(!fault && (read_cr4() & X86_CR4_LAM_SUP),
+		       "Set CR4.LAM_SUP");
+	else
+		report(fault, "Set CR4.LAM_SUP causes #GP");
+
+	fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+	report(!fault, "Clear CR4.LAM_SUP");
+}
+
+static void do_strcpy(void *mem)
+{
+	strcpy((char *)mem, "LAM SUP Test string.");
+}
+
+static inline uint64_t test_tagged_ptr(uint64_t arg1, uint64_t arg2,
+				       uint64_t arg3, uint64_t arg4)
+{
+	bool lam_enumerated = !!arg1;
+	int lam_bits = (int)arg2;
+	u64 *ptr = (u64 *)arg3;
+	bool la_57 = !!arg4;
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, do_strcpy, ptr);
+	report(!fault, "strcpy to untagged addr");
+
+	ptr = (u64 *)set_metadata((u64)ptr, lam_bits);
+	fault = test_for_exception(GP_VECTOR, do_strcpy, ptr);
+	if (lam_enumerated)
+		report(!fault, "strcpy to tagged addr");
+	else
+		report(fault, "strcpy to tagged addr causes #GP");
+
+	if (lam_enumerated && (lam_bits == LAM57_BITS) && !la_57) {
+		ptr = (u64 *)set_metadata((u64)ptr, LAM48_BITS);
+		fault = test_for_exception(GP_VECTOR, do_strcpy, ptr);
+		report(fault, "strcpy to non-LAM-canonical addr causes #GP");
+	}
+
+	return 0;
+}
+
+/* Refer to emulator.c */
+static void do_mov_mmio(void *mem)
+{
+	unsigned long t1, t2;
+
+	/* test mov reg, r/m and mov r/m, reg */
+	t1 = 0x123456789abcdefull & -1ul;
+	asm volatile("mov %[t1], (%[mem])\n\t"
+		     "mov (%[mem]), %[t2]"
+		     : [t2]"=r"(t2)
+		     : [t1]"r"(t1), [mem]"r"(mem)
+		     : "memory");
+}
+
+static inline uint64_t test_tagged_mmio_ptr(uint64_t arg1, uint64_t arg2,
+					    uint64_t arg3, uint64_t arg4)
+{
+	bool lam_enumerated = !!arg1;
+	int lam_bits = (int)arg2;
+	u64 *ptr = (u64 *)arg3;
+	bool la_57 = !!arg4;
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, do_mov_mmio, ptr);
+	report(!fault,
+	       "Access MMIO with untagged addr");
+
+	ptr = (u64 *)set_metadata((u64)ptr, lam_bits);
+	fault = test_for_exception(GP_VECTOR, do_mov_mmio, ptr);
+	if (lam_enumerated)
+		report(!fault, "Access MMIO with tagged addr");
+	else
+		report(fault, "Access MMIO with tagged addr causes #GP");
+
+	if (lam_enumerated && (lam_bits == LAM57_BITS) && !la_57) {
+		ptr = (u64 *)set_metadata((u64)ptr, LAM48_BITS);
+		fault = test_for_exception(GP_VECTOR, do_mov_mmio, ptr);
+		report(fault,
+		       "Access MMIO with non-LAM-canonical addr causes #GP");
+	}
+
+	return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+	invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+	asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* invlpg with tagged address is same as NOP, no #GP */
+static void test_invlpg(void *va, bool fep)
+{
+	bool fault;
+	u64 *ptr;
+
+	ptr = (u64 *)set_metadata((u64)va, get_sup_lam_bits());
+	if (fep)
+		fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+	else
+		fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+	report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+	unsigned long type = 0;
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+	asm volatile("invpcid %0, %1" :
+		     : "m" (*desc_ptr), "r" (type)
+		     : "memory");
+}
+
+static void test_invpcid(bool lam_enumerated, void *data)
+{
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+	int lam_bits = get_sup_lam_bits();
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_PCID) ||
+	    !this_cpu_has(X86_FEATURE_INVPCID)) {
+		report_skip("INVPCID not supported");
+		return;
+	}
+
+	memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+	desc_ptr->addr = (u64)data + 16;
+
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+	desc_ptr->addr = set_metadata(desc_ptr->addr, lam_bits);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: untagged pointer + tagged addr causes #GP");
+
+	desc_ptr->addr = (u64)data + 16;
+	desc_ptr = (struct invpcid_desc *)set_metadata((u64)desc_ptr, lam_bits);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	if (lam_enumerated && (read_cr4() & X86_CR4_LAM_SUP))
+		report(!fault, "INVPCID: tagged pointer + untagged addr");
+	else
+		report(fault,
+		       "INVPCID: tagged pointer + untagged addr causes #GP");
+
+	desc_ptr = (struct invpcid_desc *)data;
+	desc_ptr->addr = (u64)data + 16;
+	desc_ptr->addr = set_metadata(desc_ptr->addr, lam_bits);
+	desc_ptr = (struct invpcid_desc *)set_metadata((u64)desc_ptr, lam_bits);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: tagged pointer + tagged addr causes #GP");
+}
+
+static void test_lam_sup(bool lam_enumerated, bool fep_available)
+{
+	void *vaddr, *vaddr_mmio;
+	phys_addr_t paddr;
+	bool fault;
+	bool la_57 = read_cr4() & X86_CR4_LA57;
+	int lam_bits = get_sup_lam_bits();
+
+	vaddr = alloc_vpage();
+	vaddr_mmio = alloc_vpage();
+	paddr = virt_to_phys(alloc_page());
+	install_page(current_page_table(), paddr, vaddr);
+	install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+	test_cr4_lam_set_clear(lam_enumerated);
+
+	/* Set for the following LAM_SUP tests */
+	if (lam_enumerated) {
+		fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+		report(!fault && (read_cr4() & X86_CR4_LAM_SUP),
+		       "Set CR4.LAM_SUP");
+	}
+
+	test_tagged_ptr(lam_enumerated, lam_bits, (u64)vaddr, la_57);
+	test_tagged_mmio_ptr(lam_enumerated, lam_bits, (u64)vaddr_mmio, la_57);
+	test_invlpg(vaddr, false);
+	test_invpcid(lam_enumerated, vaddr);
+
+	if (fep_available)
+		test_invlpg(vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+	bool lam_enumerated;
+	bool fep_available = is_fep_available();
+
+	setup_vm();
+
+	lam_enumerated = this_cpu_has(X86_FEATURE_LAM);
+	if (!lam_enumerated)
+		report_info("This CPU doesn't support LAM feature\n");
+	else
+		report_info("This CPU supports LAM feature\n");
+
+	if (!fep_available)
+		report_skip("Skipping tests that require forced emulation, "
+			    "use kvm.force_emulation_prefix=1 to enable\n");
+
+	test_lam_sup(lam_enumerated, fep_available);
+
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f324e32..34b09eb 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -478,3 +478,13 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam

From patchwork Sun Mar 19 08:37:31 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13180262
From: Binbin Wu
To: kvm@vger.kernel.org, seanjc@google.com, pbonzini@redhat.com
Cc: chao.gao@intel.com, robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests PATCH v2 3/4] x86: Add test cases for LAM_{U48,U57}
Date: Sun, 19 Mar 2023 16:37:31 +0800
Message-Id: <20230319083732.29458-4-binbin.wu@linux.intel.com>
In-Reply-To: <20230319083732.29458-1-binbin.wu@linux.intel.com>
References: <20230319083732.29458-1-binbin.wu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

This unit test covers:
1. CR3 LAM bit toggling behaves as expected according to LAM status.
2. Memory accesses using strcpy() with a user-mode address containing
   LAM metadata behave as expected per LAM status.
3. MMIO memory accesses with a user-mode address containing LAM
   metadata behave as expected per LAM status.
Signed-off-by: Binbin Wu
---
 lib/x86/processor.h |  2 ++
 x86/lam.c           | 46 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 4bb8cd7..a181e0b 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -56,7 +56,9 @@
 #define X86_CR3_PCID_MASK GENMASK(11, 0)
 #define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U57 BIT_ULL(X86_CR3_LAM_U57_BIT)
 #define X86_CR3_LAM_U48_BIT (62)
+#define X86_CR3_LAM_U48 BIT_ULL(X86_CR3_LAM_U48_BIT)

 #define X86_CR4_VME_BIT (0)
 #define X86_CR4_VME BIT(X86_CR4_VME_BIT)
diff --git a/x86/lam.c b/x86/lam.c
index a5f4e51..8945440 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -1,5 +1,5 @@
 /*
- * Intel LAM_SUP unit test
+ * Intel LAM unit test
  *
  * Copyright (C) 2023 Intel
  *
@@ -18,11 +18,13 @@
 #include "vm.h"
 #include "asm/io.h"
 #include "ioram.h"
+#include "usermode.h"

 #define LAM57_BITS 6
 #define LAM48_BITS 15
 #define LAM57_MASK GENMASK_ULL(62, 57)
 #define LAM48_MASK GENMASK_ULL(62, 48)
+#define CR3_LAM_BITS_MASK (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)

 struct invpcid_desc {
 	u64 pcid : 12;
@@ -273,6 +275,47 @@ static void test_lam_sup(bool lam_enumerated, bool fep_available)
 		test_invlpg(vaddr, true);
 }

+static void test_lam_user(bool lam_enumerated)
+{
+	unsigned long cr3;
+	bool is_la57;
+	unsigned r;
+	bool raised_vector = false;
+	phys_addr_t paddr;
+
+	paddr = virt_to_phys(alloc_page());
+	install_page((void *)(read_cr3() & ~CR3_LAM_BITS_MASK), paddr,
+		     (void *)paddr);
+	install_page((void *)(read_cr3() & ~CR3_LAM_BITS_MASK), IORAM_BASE_PHYS,
+		     (void *)IORAM_BASE_PHYS);
+
+	cr3 = read_cr3();
+	is_la57 = !!(read_cr4() & X86_CR4_LA57);
+
+	/* Test LAM_U48 */
+	if (lam_enumerated) {
+		r = write_cr3_safe((cr3 & ~X86_CR3_LAM_U57) | X86_CR3_LAM_U48);
+		report(r == 0 && ((read_cr3() & CR3_LAM_BITS_MASK) ==
+		       X86_CR3_LAM_U48), "Set LAM_U48");
+	}
+
+	run_in_user((usermode_func)test_tagged_ptr, GP_VECTOR, lam_enumerated,
+		    LAM48_BITS, paddr, is_la57,
+		    &raised_vector);
+	run_in_user((usermode_func)test_tagged_mmio_ptr, GP_VECTOR,
+		    lam_enumerated, LAM48_BITS, IORAM_BASE_PHYS, is_la57,
+		    &raised_vector);
+
+	/* Test LAM_U57 */
+	if (lam_enumerated) {
+		r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+		report(r == 0 && (read_cr3() & X86_CR3_LAM_U57), "Set LAM_U57");
+	}
+
+	run_in_user((usermode_func)test_tagged_ptr, GP_VECTOR, lam_enumerated,
+		    LAM57_BITS, paddr, is_la57, &raised_vector);
+	run_in_user((usermode_func)test_tagged_mmio_ptr, GP_VECTOR,
+		    lam_enumerated, LAM57_BITS, IORAM_BASE_PHYS, is_la57,
+		    &raised_vector);
+}
+
 int main(int ac, char **av)
 {
 	bool lam_enumerated;
@@ -291,6 +334,7 @@ int main(int ac, char **av)
 		    "use kvm.force_emulation_prefix=1 to enable\n");

 	test_lam_sup(lam_enumerated, fep_available);
+	test_lam_user(lam_enumerated);

 	return report_summary();
 }

From patchwork Sun Mar 19 08:37:32 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13180263
From: Binbin Wu
To: kvm@vger.kernel.org, seanjc@google.com, pbonzini@redhat.com
Cc: chao.gao@intel.com, robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests PATCH v2 4/4] x86: Add test case for INVVPID with LAM
Date: Sun, 19 Mar 2023 16:37:32 +0800
Message-Id: <20230319083732.29458-5-binbin.wu@linux.intel.com>
In-Reply-To: <20230319083732.29458-1-binbin.wu@linux.intel.com>
References: <20230319083732.29458-1-binbin.wu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

When LAM is on, the linear address of the INVVPID memory operand can
contain metadata, and the linear address in the INVVPID descriptor can
contain metadata as well.

The added cases use a tagged descriptor address and/or a tagged target
invalidation address to make sure the behaviors are as expected when
LAM is on.
Also, the INVVPID cases can be used as common test cases for VMX
instruction VM exits.

Signed-off-by: Binbin Wu
---
 x86/vmx_tests.c | 73 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 1be22ac..9e9589f 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,78 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }

+#define LAM57_BITS 6
+#define LAM48_BITS 15
+#define LAM57_MASK GENMASK_ULL(62, 57)
+#define LAM48_MASK GENMASK_ULL(62, 48)
+
+/* According to LAM mode, set metadata in high bits */
+static u64 set_metadata(u64 src, unsigned long lam)
+{
+	u64 metadata;
+
+	switch (lam) {
+	case LAM57_BITS: /* Set metadata in bits 62:57 */
+		metadata = (NONCANONICAL & ((1UL << LAM57_BITS) - 1)) << 57;
+		metadata |= (src & ~(LAM57_MASK));
+		break;
+	case LAM48_BITS: /* Set metadata in bits 62:48 */
+		metadata = (NONCANONICAL & ((1UL << LAM48_BITS) - 1)) << 48;
+		metadata |= (src & ~(LAM48_MASK));
+		break;
+	default:
+		metadata = src;
+		break;
+	}
+
+	return metadata;
+}
+
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	int lam_bits = LAM48_BITS;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		lam_bits = LAM57_BITS;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+
+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
+		report_skip("Failed to enable LAM_SUP");
+		return;
+	}
+
+	operand = (struct invvpid_operand *)vaddr;
+	operand->gla = set_metadata(operand->gla, lam_bits);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault,
+	       "INVVPID (LAM on): untagged pointer + tagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand,
+							 lam_bits);
+	operand->gla = (u64)vaddr;
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand,
+							 lam_bits);
+	operand->gla = set_metadata(operand->gla, lam_bits);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
+
+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3282,6 +3354,7 @@ static void invvpid_test(void)
 	invvpid_test_pf();
 	invvpid_test_compatibility_mode();
 	invvpid_test_not_in_vmx_operation();
+	invvpid_test_lam();
 }

 /*