From patchwork Wed Jun 12 14:45:28 2024
X-Patchwork-Submitter: Vasant Karasulli
X-Patchwork-Id: 13695140
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 01/12] x86: AMD SEV-ES: Setup #VC exception handler for AMD SEV-ES
Date: Wed, 12 Jun 2024 16:45:28 +0200
Message-Id: <20240612144539.16147-2-vsntk18@gmail.com>
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
From: Vasant Karasulli

AMD SEV-ES defines a new guest exception that gets triggered on some vmexits to allow the guest to control what state gets shared with the host. kvm-unit-tests currently relies on UEFI to provide this #VC exception handler. This leads to the following problems:

1) The test's page table needs to map the firmware and the shared GHCB used by the firmware.

2) The firmware needs to keep its #VC handler in the current IDT so that kvm-unit-tests can copy the #VC entry into its own IDT.

3) The firmware #VC handler might use state which is not available anymore after ExitBootServices.

4) After ExitBootServices, the firmware needs to get the GHCB address from the GHCB MSR if it needs to use the kvm-unit-test GHCB. This requires keeping an identity mapping, and the GHCB address must be in the MSR at all times where a #VC could happen.

Problems 1) and 2) were temporarily mitigated via commits b114aa57ab ("x86 AMD SEV-ES: Set up GHCB page") and 706ede1833 ("x86 AMD SEV-ES: Copy UEFI #VC IDT entry") respectively. However, to make kvm-unit-tests reliable against 3) and 4), the tests must supply their own #VC handler [1][2].

Switch the tests to install a #VC handler on early bootup, just after GHCB has been mapped. The tests will use this handler by default. If --amdsev-efi-vc is passed during ./configure, the tests will continue using the UEFI #VC handler.
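[Editor's note: a minimal sketch of how an individual test could swap in its own #VC handler once this patch is applied, using only the handle_exception() helper and struct ex_regs from lib/x86/desc.h that the patch itself relies on. The counting handler and the test body are hypothetical and not part of this series.]

/* Editor's illustration, not part of the patch. */
#include "libcflat.h"
#include "desc.h"

#define VC_VECTOR 29	/* #VC, the vector this series routes by default */

static volatile unsigned long vc_hits;

/* Hypothetical handler: just count #VC exceptions instead of using the GHCB. */
static void counting_vc_handler(struct ex_regs *regs)
{
	vc_hits++;
	/*
	 * A real handler has to decode the instruction at regs->rip, forward
	 * the request to the hypervisor through the GHCB, and then advance
	 * regs->rip past the intercepted instruction (see later patches).
	 */
}

static void run_with_private_vc_handler(void)
{
	/* handle_exception() returns the previously installed handler ... */
	handler old = handle_exception(VC_VECTOR, counting_vc_handler);

	/* ... exercise something that raises #VC under SEV-ES, e.g. CPUID ... */

	/* ... and restore the default handler installed during early boot. */
	handle_exception(VC_VECTOR, old);
}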
[1] https://lore.kernel.org/all/Yf0GO8EydyQSdZvu@suse.de/ [2] https://lore.kernel.org/all/YSA%2FsYhGgMU72tn+@google.com/ Signed-off-by: Vasant Karasulli --- Makefile | 3 +++ configure | 21 +++++++++++++++++++++ lib/x86/amd_sev.c | 13 +++++-------- lib/x86/amd_sev.h | 1 + lib/x86/amd_sev_vc.c | 15 +++++++++++++++ lib/x86/desc.c | 17 +++++++++++++++++ lib/x86/desc.h | 1 + lib/x86/setup.c | 8 ++++++++ x86/Makefile.common | 1 + x86/efi/README.md | 4 ++++ 10 files changed, 76 insertions(+), 8 deletions(-) create mode 100644 lib/x86/amd_sev_vc.c -- 2.34.1 diff --git a/Makefile b/Makefile index 5b7998b7..c718054c 100644 --- a/Makefile +++ b/Makefile @@ -41,6 +41,9 @@ OBJDIRS += $(LIBFDT_objdir) # EFI App ifeq ($(CONFIG_EFI),y) EFI_CFLAGS := -DCONFIG_EFI -DCONFIG_RELOC +ifeq ($(AMDSEV_EFI_VC),y) +EFI_CFLAGS += -DAMDSEV_EFI_VC +endif # The following CFLAGS and LDFLAGS come from: # - GNU-EFI/Makefile.defaults # - GNU-EFI/apps/Makefile diff --git a/configure b/configure index 0e0a2882..67f9d7c5 100755 --- a/configure +++ b/configure @@ -33,6 +33,12 @@ page_size= earlycon= efi= efi_direct= +# For AMD SEV-ES, the tests build to use their own #VC exception handler +# by default, instead of using the one installed by UEFI. This ensures +# that the tests do not depend on UEFI state after ExitBootServices. +# To continue using the UEFI #VC handler, ./configure can be run with +# --amdsev-efi-vc. +amdsev_efi_vc= # Enable -Werror by default for git repositories only (i.e. developer builds) if [ -e "$srcdir"/.git ]; then @@ -97,6 +103,8 @@ usage() { system and run from the UEFI shell. Ignored when efi isn't enabled and defaults to enabled when efi is enabled for riscv64. (arm64 and riscv64 only) + --amdsev-efi-vc Use UEFI-provided #VC handlers on AMD SEV/ES. Requires + --enable-efi. EOF exit 1 } @@ -188,6 +196,9 @@ while [[ "$1" = -* ]]; do --disable-werror) werror= ;; + --amdsev-efi-vc) + amdsev_efi_vc=y + ;; --help) usage ;; @@ -274,8 +285,17 @@ elif [ "$processor" = "arm" ]; then processor="cortex-a15" fi +if [ "$amdsev_efi_vc" ] && [ "$arch" != "x86_64" ]; then + echo "--amdsev-efi-vc requires arch x86_64." + usage +fi + if [ "$arch" = "i386" ] || [ "$arch" = "x86_64" ]; then testdir=x86 + if [ "$amdsev_efi_vc" ] && [ -z "$efi" ]; then + echo "--amdsev-efi-vc requires --enable-efi." + usage + fi elif [ "$arch" = "arm" ] || [ "$arch" = "arm64" ]; then testdir=arm if [ "$target" = "qemu" ]; then @@ -451,6 +471,7 @@ HOST_KEY_DOCUMENT=$host_key_document CONFIG_DUMP=$enable_dump CONFIG_EFI=$efi EFI_DIRECT=$efi_direct +AMDSEV_EFI_VC=$amdsev_efi_vc CONFIG_WERROR=$werror GEN_SE_HEADER=$gen_se_header EOF diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c index 66722141..987b59f9 100644 --- a/lib/x86/amd_sev.c +++ b/lib/x86/amd_sev.c @@ -14,6 +14,7 @@ #include "x86/vm.h" static unsigned short amd_sev_c_bit_pos; +phys_addr_t ghcb_addr; bool amd_sev_enabled(void) { @@ -100,14 +101,10 @@ efi_status_t setup_amd_sev_es(void) /* * Copy UEFI's #VC IDT entry, so KVM-Unit-Tests can reuse it and does - * not have to re-implement a #VC handler. Also update the #VC IDT code - * segment to use KVM-Unit-Tests segments, KERNEL_CS, so that we do not + * not have to re-implement a #VC handler for #VC exceptions before + * GHCB is mapped. Also update the #VC IDT code segment to use + * KVM-Unit-Tests segments, KERNEL_CS, so that we do not * have to copy the UEFI GDT entries into KVM-Unit-Tests GDT. 
- * - * TODO: Reusing UEFI #VC handler is a temporary workaround to simplify - * the boot up process, the long-term solution is to implement a #VC - * handler in kvm-unit-tests and load it, so that kvm-unit-tests does - * not depend on specific UEFI #VC handler implementation. */ sidt(&idtr); idt = (idt_entry_t *)idtr.base; @@ -126,7 +123,7 @@ void setup_ghcb_pte(pgd_t *page_table) * function searches GHCB's L1 pte, creates corresponding L1 ptes if not * found, and unsets the c-bit of GHCB's L1 pte. */ - phys_addr_t ghcb_addr, ghcb_base_addr; + phys_addr_t ghcb_base_addr; pteval_t *pte; /* Read the current GHCB page addr */ diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h index ed6e3385..713019c2 100644 --- a/lib/x86/amd_sev.h +++ b/lib/x86/amd_sev.h @@ -54,6 +54,7 @@ efi_status_t setup_amd_sev(void); bool amd_sev_es_enabled(void); efi_status_t setup_amd_sev_es(void); void setup_ghcb_pte(pgd_t *page_table); +void handle_sev_es_vc(struct ex_regs *regs); unsigned long long get_amd_sev_c_bit_mask(void); unsigned long long get_amd_sev_addr_upperbound(void); diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c new file mode 100644 index 00000000..f6227030 --- /dev/null +++ b/lib/x86/amd_sev_vc.c @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "amd_sev.h" + +extern phys_addr_t ghcb_addr; + +void handle_sev_es_vc(struct ex_regs *regs) +{ + struct ghcb *ghcb = (struct ghcb *) ghcb_addr; + + if (!ghcb) { + /* TODO: kill guest */ + return; + } +} diff --git a/lib/x86/desc.c b/lib/x86/desc.c index d054899c..2ca403bb 100644 --- a/lib/x86/desc.c +++ b/lib/x86/desc.c @@ -4,6 +4,9 @@ #include "smp.h" #include #include "apic-defs.h" +#ifdef CONFIG_EFI +#include "amd_sev.h" +#endif /* Boot-related data structures */ @@ -250,6 +253,9 @@ EX_E(ac, 17); EX(mc, 18); EX(xm, 19); EX_E(cp, 21); +#ifdef CONFIG_EFI +EX_E(vc, 29); +#endif asm (".pushsection .text \n\t" "__handle_exception: \n\t" @@ -316,6 +322,17 @@ void setup_idt(void) } } +#ifdef CONFIG_EFI +void setup_amd_sev_es_vc(void) +{ + if (!amd_sev_es_enabled()) + return; + + set_idt_entry(29, &vc_fault, 0); + handle_exception(29, handle_sev_es_vc); +} +#endif + void load_idt(void) { lidt(&idt_descr); diff --git a/lib/x86/desc.h b/lib/x86/desc.h index 7778a0f8..f8251194 100644 --- a/lib/x86/desc.h +++ b/lib/x86/desc.h @@ -252,6 +252,7 @@ void print_current_tss_info(void); handler handle_exception(u8 v, handler fn); void unhandled_exception(struct ex_regs *regs, bool cpu); const char* exception_mnemonic(int vector); +void setup_amd_sev_es_vc(void); bool test_for_exception(unsigned int ex, void (*trigger_func)(void *data), void *data); diff --git a/lib/x86/setup.c b/lib/x86/setup.c index d509a248..65f5972a 100644 --- a/lib/x86/setup.c +++ b/lib/x86/setup.c @@ -363,6 +363,14 @@ efi_status_t setup_efi(efi_bootinfo_t *efi_bootinfo) save_id(); bsp_rest_init(); +#ifndef AMDSEV_EFI_VC + /* + * Switch away from the UEFI-installed #VC handler. + * GHCB has already been mapped at this point. 
+ */ + setup_amd_sev_es_vc(); +#endif /* AMDSEV_EFI_VC */ + return EFI_SUCCESS; } diff --git a/x86/Makefile.common b/x86/Makefile.common index 4ae9a557..bc0ed2c8 100644 --- a/x86/Makefile.common +++ b/x86/Makefile.common @@ -25,6 +25,7 @@ cflatobjs += lib/x86/delay.o cflatobjs += lib/x86/pmu.o ifeq ($(CONFIG_EFI),y) cflatobjs += lib/x86/amd_sev.o +cflatobjs += lib/x86/amd_sev_vc.o cflatobjs += lib/efi.o cflatobjs += x86/efi/reloc_x86_64.o endif diff --git a/x86/efi/README.md b/x86/efi/README.md index aa1dbcdd..af6e339c 100644 --- a/x86/efi/README.md +++ b/x86/efi/README.md @@ -18,6 +18,10 @@ To build: ./configure --enable-efi make +To make use of UEFI #VC handler for AMD SEV-ES: + + ./configure --enable-efi --amdsev-efi-vc + ### Run test cases with UEFI To run a test case with UEFI:
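[Editor's note: for orientation, a rough sketch of what the handle_sev_es_vc() stub added above eventually has to do for a CPUID intercept; later patches in this series provide the real implementation. The GHCB field names (save.rax, save.sw_exit_code, ...), the SVM_EXIT_CPUID code and the VMGEXIT encoding follow Linux's SEV-ES support and are assumptions here; GHCB valid-bitmap updates and the GHCB MSR handshake are omitted for brevity.]

/* Editor's illustration, not part of the patch; field names assumed from Linux. */
#define SVM_EXIT_CPUID	0x72
#define VMGEXIT()	asm volatile("rep; vmmcall" : : : "memory")

static void vc_handle_cpuid(struct ghcb *ghcb, struct ex_regs *regs)
{
	/* Expose only the registers the hypervisor needs to emulate CPUID. */
	ghcb->save.rax = regs->rax;
	ghcb->save.rcx = regs->rcx;

	/* Describe the exit and hand control to the hypervisor. */
	ghcb->save.sw_exit_code = SVM_EXIT_CPUID;
	ghcb->save.sw_exit_info_1 = 0;
	ghcb->save.sw_exit_info_2 = 0;
	VMGEXIT();

	/* Copy the results back and step over the 2-byte CPUID instruction. */
	regs->rax = ghcb->save.rax;
	regs->rbx = ghcb->save.rbx;
	regs->rcx = ghcb->save.rcx;
	regs->rdx = ghcb->save.rdx;
	regs->rip += 2;
}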
From patchwork Wed Jun 12 14:45:29 2024
X-Patchwork-Submitter: Vasant Karasulli
X-Patchwork-Id: 13695141
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 02/12] x86: Move architectural code to lib/x86
Date: Wed, 12 Jun 2024 16:45:29 +0200
Message-Id: <20240612144539.16147-3-vsntk18@gmail.com>
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
From: Vasant Karasulli

This enables sharing common definitions across testcases and lib/.

Signed-off-by: Vasant Karasulli
--- {x86 => lib/x86}/svm.h | 105 -------------------------------- x86/svm.c | 2 +- x86/svm_npt.c | 2 +- x86/svm_tests.c | 2 +- x86/svm_tests.h | 113 +++++++++++++++++++++++++++++++++++++++++ 5 files changed, 116 insertions(+), 108 deletions(-) rename {x86 => lib/x86}/svm.h (76%) create mode 100644 x86/svm_tests.h -- 2.34.1 diff --git a/x86/svm.h b/lib/x86/svm.h similarity index 76% rename from x86/svm.h rename to lib/x86/svm.h index 308daa55..0fc64be7 100644 --- a/x86/svm.h +++ b/lib/x86/svm.h @@ -372,22 +372,6 @@ struct __attribute__ ((__packed__)) vmcb { #define LBR_CTL_ENABLE_MASK BIT_ULL(0) -struct svm_test { - const char *name; - bool (*supported)(void); - void (*prepare)(struct svm_test *test); - void (*prepare_gif_clear)(struct svm_test *test); - void (*guest_func)(struct svm_test *test); - bool (*finished)(struct svm_test *test); - bool (*succeeded)(struct svm_test *test); - int exits; - ulong scratch; - /* Alternative test interface.
*/ - void (*v2)(void); - int on_vcpu; - bool on_vcpu_done; -}; - struct regs { u64 rax; u64 rbx; @@ -408,93 +392,4 @@ struct regs { u64 rflags; }; -typedef void (*test_guest_func)(struct svm_test *); - -int run_svm_tests(int ac, char **av, struct svm_test *svm_tests); -u64 *npt_get_pte(u64 address); -u64 *npt_get_pde(u64 address); -u64 *npt_get_pdpe(u64 address); -u64 *npt_get_pml4e(void); -bool smp_supported(void); -bool default_supported(void); -bool vgif_supported(void); -bool lbrv_supported(void); -bool tsc_scale_supported(void); -bool pause_filter_supported(void); -bool pause_threshold_supported(void); -void default_prepare(struct svm_test *test); -void default_prepare_gif_clear(struct svm_test *test); -bool default_finished(struct svm_test *test); -bool npt_supported(void); -bool vnmi_supported(void); -int get_test_stage(struct svm_test *test); -void set_test_stage(struct svm_test *test, int s); -void inc_test_stage(struct svm_test *test); -void vmcb_ident(struct vmcb *vmcb); -struct regs get_regs(void); -void vmmcall(void); -void svm_setup_vmrun(u64 rip); -int __svm_vmrun(u64 rip); -int svm_vmrun(void); -void test_set_guest(test_guest_func func); - -extern struct vmcb *vmcb; - -static inline void stgi(void) -{ - asm volatile ("stgi"); -} - -static inline void clgi(void) -{ - asm volatile ("clgi"); -} - - - -#define SAVE_GPR_C \ - "xchg %%rbx, regs+0x8\n\t" \ - "xchg %%rcx, regs+0x10\n\t" \ - "xchg %%rdx, regs+0x18\n\t" \ - "xchg %%rbp, regs+0x28\n\t" \ - "xchg %%rsi, regs+0x30\n\t" \ - "xchg %%rdi, regs+0x38\n\t" \ - "xchg %%r8, regs+0x40\n\t" \ - "xchg %%r9, regs+0x48\n\t" \ - "xchg %%r10, regs+0x50\n\t" \ - "xchg %%r11, regs+0x58\n\t" \ - "xchg %%r12, regs+0x60\n\t" \ - "xchg %%r13, regs+0x68\n\t" \ - "xchg %%r14, regs+0x70\n\t" \ - "xchg %%r15, regs+0x78\n\t" - -#define LOAD_GPR_C SAVE_GPR_C - -#define ASM_PRE_VMRUN_CMD \ - "vmload %%rax\n\t" \ - "mov regs+0x80, %%r15\n\t" \ - "mov %%r15, 0x170(%%rax)\n\t" \ - "mov regs, %%r15\n\t" \ - "mov %%r15, 0x1f8(%%rax)\n\t" \ - LOAD_GPR_C \ - -#define ASM_POST_VMRUN_CMD \ - SAVE_GPR_C \ - "mov 0x170(%%rax), %%r15\n\t" \ - "mov %%r15, regs+0x80\n\t" \ - "mov 0x1f8(%%rax), %%r15\n\t" \ - "mov %%r15, regs\n\t" \ - "vmsave %%rax\n\t" \ - - - -#define SVM_BARE_VMRUN \ - asm volatile ( \ - ASM_PRE_VMRUN_CMD \ - "vmrun %%rax\n\t" \ - ASM_POST_VMRUN_CMD \ - : \ - : "a" (virt_to_phys(vmcb)) \ - : "memory", "r15") \ - #endif diff --git a/x86/svm.c b/x86/svm.c index e715e270..251e9ed6 100644 --- a/x86/svm.c +++ b/x86/svm.c @@ -2,7 +2,7 @@ * Framework for testing nested virtualization */ -#include "svm.h" +#include "svm_tests.h" #include "libcflat.h" #include "processor.h" #include "desc.h" diff --git a/x86/svm_npt.c b/x86/svm_npt.c index b791f1ac..c248a66f 100644 --- a/x86/svm_npt.c +++ b/x86/svm_npt.c @@ -1,4 +1,4 @@ -#include "svm.h" +#include "svm_tests.h" #include "vm.h" #include "alloc_page.h" #include "vmalloc.h" diff --git a/x86/svm_tests.c b/x86/svm_tests.c index c81b7465..0f206632 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -1,4 +1,4 @@ -#include "svm.h" +#include "svm_tests.h" #include "libcflat.h" #include "processor.h" #include "desc.h" diff --git a/x86/svm_tests.h b/x86/svm_tests.h new file mode 100644 index 00000000..fcf3bcb5 --- /dev/null +++ b/x86/svm_tests.h @@ -0,0 +1,107 @@ +#ifndef X86_SVM_TESTS_H +#define X86_SVM_TESTS_H + +#include "x86/svm.h" + +struct svm_test { + const char *name; + bool (*supported)(void); + void (*prepare)(struct svm_test *test); + void (*prepare_gif_clear)(struct svm_test *test); + void 
(*guest_func)(struct svm_test *test); + bool (*finished)(struct svm_test *test); + bool (*succeeded)(struct svm_test *test); + int exits; + ulong scratch; + /* Alternative test interface. */ + void (*v2)(void); + int on_vcpu; + bool on_vcpu_done; +}; + +typedef void (*test_guest_func)(struct svm_test *); + +int run_svm_tests(int ac, char **av, struct svm_test *svm_tests); +u64 *npt_get_pte(u64 address); +u64 *npt_get_pde(u64 address); +u64 *npt_get_pdpe(u64 address); +u64 *npt_get_pml4e(void); +bool smp_supported(void); +bool default_supported(void); +bool vgif_supported(void); +bool lbrv_supported(void); +bool tsc_scale_supported(void); +bool pause_filter_supported(void); +bool pause_threshold_supported(void); +void default_prepare(struct svm_test *test); +void default_prepare_gif_clear(struct svm_test *test); +bool default_finished(struct svm_test *test); +bool npt_supported(void); +bool vnmi_supported(void); +int get_test_stage(struct svm_test *test); +void set_test_stage(struct svm_test *test, int s); +void inc_test_stage(struct svm_test *test); +void vmcb_ident(struct vmcb *vmcb); +struct regs get_regs(void); +void vmmcall(void); +void svm_setup_vmrun(u64 rip); +int __svm_vmrun(u64 rip); +int svm_vmrun(void); +void test_set_guest(test_guest_func func); + +extern struct vmcb *vmcb; + +static inline void stgi(void) +{ + asm volatile ("stgi"); +} + +static inline void clgi(void) +{ + asm volatile ("clgi"); +} + +#define SAVE_GPR_C \ + "xchg %%rbx, regs+0x8\n\t" \ + "xchg %%rcx, regs+0x10\n\t" \ + "xchg %%rdx, regs+0x18\n\t" \ + "xchg %%rbp, regs+0x28\n\t" \ + "xchg %%rsi, regs+0x30\n\t" \ + "xchg %%rdi, regs+0x38\n\t" \ + "xchg %%r8, regs+0x40\n\t" \ + "xchg %%r9, regs+0x48\n\t" \ + "xchg %%r10, regs+0x50\n\t" \ + "xchg %%r11, regs+0x58\n\t" \ + "xchg %%r12, regs+0x60\n\t" \ + "xchg %%r13, regs+0x68\n\t" \ + "xchg %%r14, regs+0x70\n\t" \ + "xchg %%r15, regs+0x78\n\t" + +#define LOAD_GPR_C SAVE_GPR_C + +#define ASM_PRE_VMRUN_CMD \ + "vmload %%rax\n\t" \ + "mov regs+0x80, %%r15\n\t" \ + "mov %%r15, 0x170(%%rax)\n\t" \ + "mov regs, %%r15\n\t" \ + "mov %%r15, 0x1f8(%%rax)\n\t" \ + LOAD_GPR_C \ + +#define ASM_POST_VMRUN_CMD \ + SAVE_GPR_C \ + "mov 0x170(%%rax), %%r15\n\t" \ + "mov %%r15, regs+0x80\n\t" \ + "mov 0x1f8(%%rax), %%r15\n\t" \ + "mov %%r15, regs\n\t" \ + "vmsave %%rax\n\t" \ + +#define SVM_BARE_VMRUN \ + asm volatile ( \ + ASM_PRE_VMRUN_CMD \ + "vmrun %%rax\n\t" \ + ASM_POST_VMRUN_CMD \ + : \ + : "a" (virt_to_phys(vmcb)) \ + : "memory", "r15") \ + +#endif From patchwork Wed Jun 12 14:45:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695142 Received: from mail-ed1-f43.google.com (mail-ed1-f43.google.com [209.85.208.43]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D546F17F4ED for ; Wed, 12 Jun 2024 14:45:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.43 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203553; cv=none; b=edgzf+TwF/4dixJmgvetgzNFCAvOj0oaUz7seafJQ6nQ4TVpdWrKmMZsm9CpwQa0p6uTdyFSwO5xFlBR7K+Q+kg28WjM2P4hA3m3XlPQQ3OZm11Tu/boZKeYSQfLMpjOSB7j3GI76pBQtfIHkO0g+QyXMYFd1au6wWOGKZI/eus= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203553; c=relaxed/simple; bh=prygXyc2CjFiGgmw21MZ2CoC1VKsVtXn642Jr7lv5N4=; 
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 03/12] lib: Define unlikely()/likely() macros in compiler.h
Date: Wed, 12 Jun 2024 16:45:30 +0200
Message-Id: <20240612144539.16147-4-vsntk18@gmail.com>
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
From: Vasant Karasulli

So that they can be shared across testcases and lib/. Linux's x86 instruction decoder references them.

Signed-off-by: Vasant Karasulli
--- lib/linux/compiler.h | 3 +++ x86/kvmclock.c | 4 ---- 2 files changed, 3 insertions(+), 4 deletions(-) -- 2.34.1 diff --git a/lib/linux/compiler.h b/lib/linux/compiler.h index bf3313bd..9f4ef162 100644 --- a/lib/linux/compiler.h +++ b/lib/linux/compiler.h @@ -121,5 +121,8 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s __u.__val; \ }) +#define unlikely(x) __builtin_expect(!!(x), 0) +#define likely(x) __builtin_expect(!!(x), 1) + #endif /* !__ASSEMBLY__ */ #endif /* !__LINUX_COMPILER_H */ diff --git a/x86/kvmclock.c b/x86/kvmclock.c index f9f21032..487c12af 100644 --- a/x86/kvmclock.c +++ b/x86/kvmclock.c @@ -5,10 +5,6 @@ #include "kvmclock.h" #include "asm/barrier.h" -#define unlikely(x) __builtin_expect(!!(x), 0) -#define likely(x) __builtin_expect(!!(x), 1) - - struct pvclock_vcpu_time_info __attribute__((aligned(4))) hv_clock[MAX_CPU]; struct pvclock_wall_clock wall_clock; static unsigned char valid_flags = 0;
From patchwork Wed Jun 12 14:45:31 2024
X-Patchwork-Submitter: Vasant Karasulli
X-Patchwork-Id: 13695146
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 04/12] lib: x86: Import insn decoder from Linux
Date: Wed, 12 Jun 2024 16:45:31 +0200
Message-Id: <20240612144539.16147-5-vsntk18@gmail.com>
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
From: Vasant Karasulli

Processing #VC exceptions on AMD SEV-ES requires instruction decoding logic to set up the right GHCB state before exiting to the host. Pull in the instruction decoder from Linux for this purpose.
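[Editor's note: a sketch of how the imported decoder ends up being used by the #VC handler. The handler mainly needs the length of the faulting instruction so it can advance the guest RIP after the hypervisor has emulated it. insn_init() is shown in this patch; insn_get_length(), MAX_INSN_SIZE and struct insn's length field come from the imported insn.h/insn.c, and the include path and exact signatures are assumptions here.]

/* Editor's illustration, not part of the patch. */
#include "processor.h"
#include "insn/insn.h"	/* illustrative include path */

static int vc_decode_insn(struct ex_regs *regs, struct insn *insn)
{
	/* The tests run identity-mapped, so the faulting code is readable. */
	insn_init(insn, (void *)regs->rip, MAX_INSN_SIZE, /* x86_64= */ 1);

	if (insn_get_length(insn) < 0 || !insn->length)
		return -1;	/* decode failed */

	return 0;
}

/* After the hypervisor round-trip: skip the instruction that raised #VC. */
static void vc_finish_insn(struct ex_regs *regs, struct insn *insn)
{
	regs->rip += insn->length;
}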
Origin: Linux e8c39d0f57f358950356a8e44ee5159f57f86ec5 Signed-off-by: Vasant Karasulli --- .gitignore | 2 + lib/x86/insn/README | 23 + lib/x86/insn/gen-insn-attr-x86.awk | 443 +++++++++++ lib/x86/insn/inat.c | 86 ++ lib/x86/insn/inat.h | 233 ++++++ lib/x86/insn/inat_types.h | 18 + lib/x86/insn/insn.c | 735 +++++++++++++++++ lib/x86/insn/insn.h | 279 +++++++ lib/x86/insn/insn_glue.h | 32 + lib/x86/insn/x86-opcode-map.txt | 1191 ++++++++++++++++++++++++++++ x86/Makefile.common | 14 +- 11 files changed, 3054 insertions(+), 2 deletions(-) create mode 100644 lib/x86/insn/README create mode 100644 lib/x86/insn/gen-insn-attr-x86.awk create mode 100644 lib/x86/insn/inat.c create mode 100644 lib/x86/insn/inat.h create mode 100644 lib/x86/insn/inat_types.h create mode 100644 lib/x86/insn/insn.c create mode 100644 lib/x86/insn/insn.h create mode 100644 lib/x86/insn/insn_glue.h create mode 100644 lib/x86/insn/x86-opcode-map.txt -- 2.34.1 diff --git a/.gitignore b/.gitignore index 2168e013..3f2df5f4 100644 --- a/.gitignore +++ b/.gitignore @@ -16,6 +16,8 @@ cscope.* *.swp /lib/asm /lib/config.h +/lib/x86/insn/inat-tables.c +/lib/x86/insn/*.d /config.mak /*-run /msr.out diff --git a/lib/x86/insn/README b/lib/x86/insn/README new file mode 100644 index 00000000..9b23bd4c --- /dev/null +++ b/lib/x86/insn/README @@ -0,0 +1,23 @@ +README +====== + +lib/x86/insn/ contains x86 instruction decoder src from Linux. + +The following files were taken as-is from Linux@e8c39d0f57f and +adapted to build with kvm-unit-tests source: +- U: lib/x86/insn/gen-insn-attr-x86.awk +- I: lib/x86/insn/inat.c +- I: lib/x86/insn/inat.h +- I: lib/x86/insn/inat_types.h +- I: lib/x86/insn/insn.c +- I: lib/x86/insn/insn.h +- U: lib/x86/insn/x86-opcode-map.txt + +U: Unmodified, except source attribution. +I: Modified for #include path fixup. + +lib/x86/insn/insn_glue.h contains additional code from Linux that +is relevant to the insn decoder, and not required elsewhere by +kvm-unit-tests. These definitions are placed in a separate file to +keep the diff between Linux and kvm-unit-tests's insn decoder copy +minimal. diff --git a/lib/x86/insn/gen-insn-attr-x86.awk b/lib/x86/insn/gen-insn-attr-x86.awk new file mode 100644 index 00000000..3cf67c6f --- /dev/null +++ b/lib/x86/insn/gen-insn-attr-x86.awk @@ -0,0 +1,443 @@ +#!/bin/awk -f +# SPDX-License-Identifier: GPL-2.0 +# gen-insn-attr-x86.awk: Instruction attribute table generator +# Written by Masami Hiramatsu +# +# Usage: awk -f gen-insn-attr-x86.awk x86-opcode-map.txt > inat-tables.c +# +# kvm-unit-tests origin: Linux@e8c39d0f57f +# arch/x86/tools/gen-insn-attr-x86.awk + +# Awk implementation sanity check +function check_awk_implement() { + if (sprintf("%x", 0) != "0") + return "Your awk has a printf-format problem." + return "" +} + +# Clear working vars +function clear_vars() { + delete table + delete lptable2 + delete lptable1 + delete lptable3 + eid = -1 # escape id + gid = -1 # group id + aid = -1 # AVX id + tname = "" +} + +BEGIN { + # Implementation error checking + awkchecked = check_awk_implement() + if (awkchecked != "") { + print "Error: " awkchecked > "/dev/stderr" + print "Please try to use gawk." > "/dev/stderr" + exit 1 + } + + # Setup generating tables + print "/* x86 opcode map generated from x86-opcode-map.txt */" + print "/* Do not change this code. 
*/\n" + ggid = 1 + geid = 1 + gaid = 0 + delete etable + delete gtable + delete atable + + opnd_expr = "^[A-Za-z/]" + ext_expr = "^\\(" + sep_expr = "^\\|$" + group_expr = "^Grp[0-9A-Za-z]+" + + imm_expr = "^[IJAOL][a-z]" + imm_flag["Ib"] = "INAT_MAKE_IMM(INAT_IMM_BYTE)" + imm_flag["Jb"] = "INAT_MAKE_IMM(INAT_IMM_BYTE)" + imm_flag["Iw"] = "INAT_MAKE_IMM(INAT_IMM_WORD)" + imm_flag["Id"] = "INAT_MAKE_IMM(INAT_IMM_DWORD)" + imm_flag["Iq"] = "INAT_MAKE_IMM(INAT_IMM_QWORD)" + imm_flag["Ap"] = "INAT_MAKE_IMM(INAT_IMM_PTR)" + imm_flag["Iz"] = "INAT_MAKE_IMM(INAT_IMM_VWORD32)" + imm_flag["Jz"] = "INAT_MAKE_IMM(INAT_IMM_VWORD32)" + imm_flag["Iv"] = "INAT_MAKE_IMM(INAT_IMM_VWORD)" + imm_flag["Ob"] = "INAT_MOFFSET" + imm_flag["Ov"] = "INAT_MOFFSET" + imm_flag["Lx"] = "INAT_MAKE_IMM(INAT_IMM_BYTE)" + + modrm_expr = "^([CDEGMNPQRSUVW/][a-z]+|NTA|T[012])" + force64_expr = "\\([df]64\\)" + rex_expr = "^REX(\\.[XRWB]+)*" + fpu_expr = "^ESC" # TODO + + lprefix1_expr = "\\((66|!F3)\\)" + lprefix2_expr = "\\(F3\\)" + lprefix3_expr = "\\((F2|!F3|66&F2)\\)" + lprefix_expr = "\\((66|F2|F3)\\)" + max_lprefix = 4 + + # All opcodes starting with lower-case 'v', 'k' or with (v1) superscript + # accepts VEX prefix + vexok_opcode_expr = "^[vk].*" + vexok_expr = "\\(v1\\)" + # All opcodes with (v) superscript supports *only* VEX prefix + vexonly_expr = "\\(v\\)" + # All opcodes with (ev) superscript supports *only* EVEX prefix + evexonly_expr = "\\(ev\\)" + + prefix_expr = "\\(Prefix\\)" + prefix_num["Operand-Size"] = "INAT_PFX_OPNDSZ" + prefix_num["REPNE"] = "INAT_PFX_REPNE" + prefix_num["REP/REPE"] = "INAT_PFX_REPE" + prefix_num["XACQUIRE"] = "INAT_PFX_REPNE" + prefix_num["XRELEASE"] = "INAT_PFX_REPE" + prefix_num["LOCK"] = "INAT_PFX_LOCK" + prefix_num["SEG=CS"] = "INAT_PFX_CS" + prefix_num["SEG=DS"] = "INAT_PFX_DS" + prefix_num["SEG=ES"] = "INAT_PFX_ES" + prefix_num["SEG=FS"] = "INAT_PFX_FS" + prefix_num["SEG=GS"] = "INAT_PFX_GS" + prefix_num["SEG=SS"] = "INAT_PFX_SS" + prefix_num["Address-Size"] = "INAT_PFX_ADDRSZ" + prefix_num["VEX+1byte"] = "INAT_PFX_VEX2" + prefix_num["VEX+2byte"] = "INAT_PFX_VEX3" + prefix_num["EVEX"] = "INAT_PFX_EVEX" + + clear_vars() +} + +function semantic_error(msg) { + print "Semantic error at " NR ": " msg > "/dev/stderr" + exit 1 +} + +function debug(msg) { + print "DEBUG: " msg +} + +function array_size(arr, i,c) { + c = 0 + for (i in arr) + c++ + return c +} + +/^Table:/ { + print "/* " $0 " */" + if (tname != "") + semantic_error("Hit Table: before EndTable:."); +} + +/^Referrer:/ { + if (NF != 1) { + # escape opcode table + ref = "" + for (i = 2; i <= NF; i++) + ref = ref $i + eid = escape[ref] + tname = sprintf("inat_escape_table_%d", eid) + } +} + +/^AVXcode:/ { + if (NF != 1) { + # AVX/escape opcode table + aid = $2 + if (gaid <= aid) + gaid = aid + 1 + if (tname == "") # AVX only opcode table + tname = sprintf("inat_avx_table_%d", $2) + } + if (aid == -1 && eid == -1) # primary opcode table + tname = "inat_primary_table" +} + +/^GrpTable:/ { + print "/* " $0 " */" + if (!($2 in group)) + semantic_error("No group: " $2 ) + gid = group[$2] + tname = "inat_group_table_" gid +} + +function print_table(tbl,name,fmt,n) +{ + print "const insn_attr_t " name " = {" + for (i = 0; i < n; i++) { + id = sprintf(fmt, i) + if (tbl[id]) + print " [" id "] = " tbl[id] "," + } + print "};" +} + +/^EndTable/ { + if (gid != -1) { + # print group tables + if (array_size(table) != 0) { + print_table(table, tname "[INAT_GROUP_TABLE_SIZE]", + "0x%x", 8) + gtable[gid,0] = tname + } + if 
(array_size(lptable1) != 0) { + print_table(lptable1, tname "_1[INAT_GROUP_TABLE_SIZE]", + "0x%x", 8) + gtable[gid,1] = tname "_1" + } + if (array_size(lptable2) != 0) { + print_table(lptable2, tname "_2[INAT_GROUP_TABLE_SIZE]", + "0x%x", 8) + gtable[gid,2] = tname "_2" + } + if (array_size(lptable3) != 0) { + print_table(lptable3, tname "_3[INAT_GROUP_TABLE_SIZE]", + "0x%x", 8) + gtable[gid,3] = tname "_3" + } + } else { + # print primary/escaped tables + if (array_size(table) != 0) { + print_table(table, tname "[INAT_OPCODE_TABLE_SIZE]", + "0x%02x", 256) + etable[eid,0] = tname + if (aid >= 0) + atable[aid,0] = tname + } + if (array_size(lptable1) != 0) { + print_table(lptable1,tname "_1[INAT_OPCODE_TABLE_SIZE]", + "0x%02x", 256) + etable[eid,1] = tname "_1" + if (aid >= 0) + atable[aid,1] = tname "_1" + } + if (array_size(lptable2) != 0) { + print_table(lptable2,tname "_2[INAT_OPCODE_TABLE_SIZE]", + "0x%02x", 256) + etable[eid,2] = tname "_2" + if (aid >= 0) + atable[aid,2] = tname "_2" + } + if (array_size(lptable3) != 0) { + print_table(lptable3,tname "_3[INAT_OPCODE_TABLE_SIZE]", + "0x%02x", 256) + etable[eid,3] = tname "_3" + if (aid >= 0) + atable[aid,3] = tname "_3" + } + } + print "" + clear_vars() +} + +function add_flags(old,new) { + if (old && new) + return old " | " new + else if (old) + return old + else + return new +} + +# convert operands to flags. +function convert_operands(count,opnd, i,j,imm,mod) +{ + imm = null + mod = null + for (j = 1; j <= count; j++) { + i = opnd[j] + if (match(i, imm_expr) == 1) { + if (!imm_flag[i]) + semantic_error("Unknown imm opnd: " i) + if (imm) { + if (i != "Ib") + semantic_error("Second IMM error") + imm = add_flags(imm, "INAT_SCNDIMM") + } else + imm = imm_flag[i] + } else if (match(i, modrm_expr)) + mod = "INAT_MODRM" + } + return add_flags(imm, mod) +} + +/^[0-9a-f]+:/ { + if (NR == 1) + next + # get index + idx = "0x" substr($1, 1, index($1,":") - 1) + if (idx in table) + semantic_error("Redefine " idx " in " tname) + + # check if escaped opcode + if ("escape" == $2) { + if ($3 != "#") + semantic_error("No escaped name") + ref = "" + for (i = 4; i <= NF; i++) + ref = ref $i + if (ref in escape) + semantic_error("Redefine escape (" ref ")") + escape[ref] = geid + geid++ + table[idx] = "INAT_MAKE_ESCAPE(" escape[ref] ")" + next + } + + variant = null + # converts + i = 2 + while (i <= NF) { + opcode = $(i++) + delete opnds + ext = null + flags = null + opnd = null + # parse one opcode + if (match($i, opnd_expr)) { + opnd = $i + count = split($(i++), opnds, ",") + flags = convert_operands(count, opnds) + } + if (match($i, ext_expr)) + ext = $(i++) + if (match($i, sep_expr)) + i++ + else if (i < NF) + semantic_error($i " is not a separator") + + # check if group opcode + if (match(opcode, group_expr)) { + if (!(opcode in group)) { + group[opcode] = ggid + ggid++ + } + flags = add_flags(flags, "INAT_MAKE_GROUP(" group[opcode] ")") + } + # check force(or default) 64bit + if (match(ext, force64_expr)) + flags = add_flags(flags, "INAT_FORCE64") + + # check REX prefix + if (match(opcode, rex_expr)) + flags = add_flags(flags, "INAT_MAKE_PREFIX(INAT_PFX_REX)") + + # check coprocessor escape : TODO + if (match(opcode, fpu_expr)) + flags = add_flags(flags, "INAT_MODRM") + + # check VEX codes + if (match(ext, evexonly_expr)) + flags = add_flags(flags, "INAT_VEXOK | INAT_EVEXONLY") + else if (match(ext, vexonly_expr)) + flags = add_flags(flags, "INAT_VEXOK | INAT_VEXONLY") + else if (match(ext, vexok_expr) || match(opcode, vexok_opcode_expr)) + 
flags = add_flags(flags, "INAT_VEXOK") + + # check prefixes + if (match(ext, prefix_expr)) { + if (!prefix_num[opcode]) + semantic_error("Unknown prefix: " opcode) + flags = add_flags(flags, "INAT_MAKE_PREFIX(" prefix_num[opcode] ")") + } + if (length(flags) == 0) + continue + # check if last prefix + if (match(ext, lprefix1_expr)) { + lptable1[idx] = add_flags(lptable1[idx],flags) + variant = "INAT_VARIANT" + } + if (match(ext, lprefix2_expr)) { + lptable2[idx] = add_flags(lptable2[idx],flags) + variant = "INAT_VARIANT" + } + if (match(ext, lprefix3_expr)) { + lptable3[idx] = add_flags(lptable3[idx],flags) + variant = "INAT_VARIANT" + } + if (!match(ext, lprefix_expr)){ + table[idx] = add_flags(table[idx],flags) + } + } + if (variant) + table[idx] = add_flags(table[idx],variant) +} + +END { + if (awkchecked != "") + exit 1 + + print "#ifndef __BOOT_COMPRESSED\n" + + # print escape opcode map's array + print "/* Escape opcode map array */" + print "const insn_attr_t * const inat_escape_tables[INAT_ESC_MAX + 1]" \ + "[INAT_LSTPFX_MAX + 1] = {" + for (i = 0; i < geid; i++) + for (j = 0; j < max_lprefix; j++) + if (etable[i,j]) + print " ["i"]["j"] = "etable[i,j]"," + print "};\n" + # print group opcode map's array + print "/* Group opcode map array */" + print "const insn_attr_t * const inat_group_tables[INAT_GRP_MAX + 1]"\ + "[INAT_LSTPFX_MAX + 1] = {" + for (i = 0; i < ggid; i++) + for (j = 0; j < max_lprefix; j++) + if (gtable[i,j]) + print " ["i"]["j"] = "gtable[i,j]"," + print "};\n" + # print AVX opcode map's array + print "/* AVX opcode map array */" + print "const insn_attr_t * const inat_avx_tables[X86_VEX_M_MAX + 1]"\ + "[INAT_LSTPFX_MAX + 1] = {" + for (i = 0; i < gaid; i++) + for (j = 0; j < max_lprefix; j++) + if (atable[i,j]) + print " ["i"]["j"] = "atable[i,j]"," + print "};\n" + + print "#else /* !__BOOT_COMPRESSED */\n" + + print "/* Escape opcode map array */" + print "static const insn_attr_t *inat_escape_tables[INAT_ESC_MAX + 1]" \ + "[INAT_LSTPFX_MAX + 1];" + print "" + + print "/* Group opcode map array */" + print "static const insn_attr_t *inat_group_tables[INAT_GRP_MAX + 1]"\ + "[INAT_LSTPFX_MAX + 1];" + print "" + + print "/* AVX opcode map array */" + print "static const insn_attr_t *inat_avx_tables[X86_VEX_M_MAX + 1]"\ + "[INAT_LSTPFX_MAX + 1];" + print "" + + print "static void inat_init_tables(void)" + print "{" + + # print escape opcode map's array + print "\t/* Print Escape opcode map array */" + for (i = 0; i < geid; i++) + for (j = 0; j < max_lprefix; j++) + if (etable[i,j]) + print "\tinat_escape_tables["i"]["j"] = "etable[i,j]";" + print "" + + # print group opcode map's array + print "\t/* Print Group opcode map array */" + for (i = 0; i < ggid; i++) + for (j = 0; j < max_lprefix; j++) + if (gtable[i,j]) + print "\tinat_group_tables["i"]["j"] = "gtable[i,j]";" + print "" + # print AVX opcode map's array + print "\t/* Print AVX opcode map array */" + for (i = 0; i < gaid; i++) + for (j = 0; j < max_lprefix; j++) + if (atable[i,j]) + print "\tinat_avx_tables["i"]["j"] = "atable[i,j]";" + + print "}" + print "#endif" +} diff --git a/lib/x86/insn/inat.c b/lib/x86/insn/inat.c new file mode 100644 index 00000000..8d0340b9 --- /dev/null +++ b/lib/x86/insn/inat.c @@ -0,0 +1,86 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * x86 instruction attribute tables + * + * Written by Masami Hiramatsu + * + * kvm-unit-tests origin: Linux@e8c39d0f57f + * tools/arch/x86/lib/inat.c + */ +#include "insn.h" /* __ignore_sync_check__ */ + +/* Attribute tables are 
generated from opcode map */ +#include "inat-tables.c" + +/* Attribute search APIs */ +insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode) +{ + return inat_primary_table[opcode]; +} + +int inat_get_last_prefix_id(insn_byte_t last_pfx) +{ + insn_attr_t lpfx_attr; + + lpfx_attr = inat_get_opcode_attribute(last_pfx); + return inat_last_prefix_id(lpfx_attr); +} + +insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, int lpfx_id, + insn_attr_t esc_attr) +{ + const insn_attr_t *table; + int n; + + n = inat_escape_id(esc_attr); + + table = inat_escape_tables[n][0]; + if (!table) + return 0; + if (inat_has_variant(table[opcode]) && lpfx_id) { + table = inat_escape_tables[n][lpfx_id]; + if (!table) + return 0; + } + return table[opcode]; +} + +insn_attr_t inat_get_group_attribute(insn_byte_t modrm, int lpfx_id, + insn_attr_t grp_attr) +{ + const insn_attr_t *table; + int n; + + n = inat_group_id(grp_attr); + + table = inat_group_tables[n][0]; + if (!table) + return inat_group_common_attribute(grp_attr); + if (inat_has_variant(table[X86_MODRM_REG(modrm)]) && lpfx_id) { + table = inat_group_tables[n][lpfx_id]; + if (!table) + return inat_group_common_attribute(grp_attr); + } + return table[X86_MODRM_REG(modrm)] | + inat_group_common_attribute(grp_attr); +} + +insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m, + insn_byte_t vex_p) +{ + const insn_attr_t *table; + + if (vex_m > X86_VEX_M_MAX || vex_p > INAT_LSTPFX_MAX) + return 0; + /* At first, this checks the master table */ + table = inat_avx_tables[vex_m][0]; + if (!table) + return 0; + if (!inat_is_group(table[opcode]) && vex_p) { + /* If this is not a group, get attribute directly */ + table = inat_avx_tables[vex_m][vex_p]; + if (!table) + return 0; + } + return table[opcode]; +} diff --git a/lib/x86/insn/inat.h b/lib/x86/insn/inat.h new file mode 100644 index 00000000..d11dbef5 --- /dev/null +++ b/lib/x86/insn/inat.h @@ -0,0 +1,233 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _ASM_X86_INAT_H +#define _ASM_X86_INAT_H +/* + * x86 instruction attributes + * + * Written by Masami Hiramatsu + * + * kvm-unit-tests origin: Linux@e8c39d0f57f + * tools/arch/x86/include/asm/inat.h + */ +#include "inat_types.h" + +/* + * Internal bits. Don't use bitmasks directly, because these bits are + * unstable. You should use checking functions. 
+ */ + +#define INAT_OPCODE_TABLE_SIZE 256 +#define INAT_GROUP_TABLE_SIZE 8 + +/* Legacy last prefixes */ +#define INAT_PFX_OPNDSZ 1 /* 0x66 */ /* LPFX1 */ +#define INAT_PFX_REPE 2 /* 0xF3 */ /* LPFX2 */ +#define INAT_PFX_REPNE 3 /* 0xF2 */ /* LPFX3 */ +/* Other Legacy prefixes */ +#define INAT_PFX_LOCK 4 /* 0xF0 */ +#define INAT_PFX_CS 5 /* 0x2E */ +#define INAT_PFX_DS 6 /* 0x3E */ +#define INAT_PFX_ES 7 /* 0x26 */ +#define INAT_PFX_FS 8 /* 0x64 */ +#define INAT_PFX_GS 9 /* 0x65 */ +#define INAT_PFX_SS 10 /* 0x36 */ +#define INAT_PFX_ADDRSZ 11 /* 0x67 */ +/* x86-64 REX prefix */ +#define INAT_PFX_REX 12 /* 0x4X */ +/* AVX VEX prefixes */ +#define INAT_PFX_VEX2 13 /* 2-bytes VEX prefix */ +#define INAT_PFX_VEX3 14 /* 3-bytes VEX prefix */ +#define INAT_PFX_EVEX 15 /* EVEX prefix */ + +#define INAT_LSTPFX_MAX 3 +#define INAT_LGCPFX_MAX 11 + +/* Immediate size */ +#define INAT_IMM_BYTE 1 +#define INAT_IMM_WORD 2 +#define INAT_IMM_DWORD 3 +#define INAT_IMM_QWORD 4 +#define INAT_IMM_PTR 5 +#define INAT_IMM_VWORD32 6 +#define INAT_IMM_VWORD 7 + +/* Legacy prefix */ +#define INAT_PFX_OFFS 0 +#define INAT_PFX_BITS 4 +#define INAT_PFX_MAX ((1 << INAT_PFX_BITS) - 1) +#define INAT_PFX_MASK (INAT_PFX_MAX << INAT_PFX_OFFS) +/* Escape opcodes */ +#define INAT_ESC_OFFS (INAT_PFX_OFFS + INAT_PFX_BITS) +#define INAT_ESC_BITS 2 +#define INAT_ESC_MAX ((1 << INAT_ESC_BITS) - 1) +#define INAT_ESC_MASK (INAT_ESC_MAX << INAT_ESC_OFFS) +/* Group opcodes (1-16) */ +#define INAT_GRP_OFFS (INAT_ESC_OFFS + INAT_ESC_BITS) +#define INAT_GRP_BITS 5 +#define INAT_GRP_MAX ((1 << INAT_GRP_BITS) - 1) +#define INAT_GRP_MASK (INAT_GRP_MAX << INAT_GRP_OFFS) +/* Immediates */ +#define INAT_IMM_OFFS (INAT_GRP_OFFS + INAT_GRP_BITS) +#define INAT_IMM_BITS 3 +#define INAT_IMM_MASK (((1 << INAT_IMM_BITS) - 1) << INAT_IMM_OFFS) +/* Flags */ +#define INAT_FLAG_OFFS (INAT_IMM_OFFS + INAT_IMM_BITS) +#define INAT_MODRM (1 << (INAT_FLAG_OFFS)) +#define INAT_FORCE64 (1 << (INAT_FLAG_OFFS + 1)) +#define INAT_SCNDIMM (1 << (INAT_FLAG_OFFS + 2)) +#define INAT_MOFFSET (1 << (INAT_FLAG_OFFS + 3)) +#define INAT_VARIANT (1 << (INAT_FLAG_OFFS + 4)) +#define INAT_VEXOK (1 << (INAT_FLAG_OFFS + 5)) +#define INAT_VEXONLY (1 << (INAT_FLAG_OFFS + 6)) +#define INAT_EVEXONLY (1 << (INAT_FLAG_OFFS + 7)) +/* Attribute making macros for attribute tables */ +#define INAT_MAKE_PREFIX(pfx) (pfx << INAT_PFX_OFFS) +#define INAT_MAKE_ESCAPE(esc) (esc << INAT_ESC_OFFS) +#define INAT_MAKE_GROUP(grp) ((grp << INAT_GRP_OFFS) | INAT_MODRM) +#define INAT_MAKE_IMM(imm) (imm << INAT_IMM_OFFS) + +/* Identifiers for segment registers */ +#define INAT_SEG_REG_IGNORE 0 +#define INAT_SEG_REG_DEFAULT 1 +#define INAT_SEG_REG_CS 2 +#define INAT_SEG_REG_SS 3 +#define INAT_SEG_REG_DS 4 +#define INAT_SEG_REG_ES 5 +#define INAT_SEG_REG_FS 6 +#define INAT_SEG_REG_GS 7 + +/* Attribute search APIs */ +extern insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode); +extern int inat_get_last_prefix_id(insn_byte_t last_pfx); +extern insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, + int lpfx_id, + insn_attr_t esc_attr); +extern insn_attr_t inat_get_group_attribute(insn_byte_t modrm, + int lpfx_id, + insn_attr_t esc_attr); +extern insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, + insn_byte_t vex_m, + insn_byte_t vex_pp); + +/* Attribute checking functions */ +static inline int inat_is_legacy_prefix(insn_attr_t attr) +{ + attr &= INAT_PFX_MASK; + return attr && attr <= INAT_LGCPFX_MAX; +} + +static inline int inat_is_address_size_prefix(insn_attr_t attr) +{ + return (attr 
& INAT_PFX_MASK) == INAT_PFX_ADDRSZ; +} + +static inline int inat_is_operand_size_prefix(insn_attr_t attr) +{ + return (attr & INAT_PFX_MASK) == INAT_PFX_OPNDSZ; +} + +static inline int inat_is_rex_prefix(insn_attr_t attr) +{ + return (attr & INAT_PFX_MASK) == INAT_PFX_REX; +} + +static inline int inat_last_prefix_id(insn_attr_t attr) +{ + if ((attr & INAT_PFX_MASK) > INAT_LSTPFX_MAX) + return 0; + else + return attr & INAT_PFX_MASK; +} + +static inline int inat_is_vex_prefix(insn_attr_t attr) +{ + attr &= INAT_PFX_MASK; + return attr == INAT_PFX_VEX2 || attr == INAT_PFX_VEX3 || + attr == INAT_PFX_EVEX; +} + +static inline int inat_is_evex_prefix(insn_attr_t attr) +{ + return (attr & INAT_PFX_MASK) == INAT_PFX_EVEX; +} + +static inline int inat_is_vex3_prefix(insn_attr_t attr) +{ + return (attr & INAT_PFX_MASK) == INAT_PFX_VEX3; +} + +static inline int inat_is_escape(insn_attr_t attr) +{ + return attr & INAT_ESC_MASK; +} + +static inline int inat_escape_id(insn_attr_t attr) +{ + return (attr & INAT_ESC_MASK) >> INAT_ESC_OFFS; +} + +static inline int inat_is_group(insn_attr_t attr) +{ + return attr & INAT_GRP_MASK; +} + +static inline int inat_group_id(insn_attr_t attr) +{ + return (attr & INAT_GRP_MASK) >> INAT_GRP_OFFS; +} + +static inline int inat_group_common_attribute(insn_attr_t attr) +{ + return attr & ~INAT_GRP_MASK; +} + +static inline int inat_has_immediate(insn_attr_t attr) +{ + return attr & INAT_IMM_MASK; +} + +static inline int inat_immediate_size(insn_attr_t attr) +{ + return (attr & INAT_IMM_MASK) >> INAT_IMM_OFFS; +} + +static inline int inat_has_modrm(insn_attr_t attr) +{ + return attr & INAT_MODRM; +} + +static inline int inat_is_force64(insn_attr_t attr) +{ + return attr & INAT_FORCE64; +} + +static inline int inat_has_second_immediate(insn_attr_t attr) +{ + return attr & INAT_SCNDIMM; +} + +static inline int inat_has_moffset(insn_attr_t attr) +{ + return attr & INAT_MOFFSET; +} + +static inline int inat_has_variant(insn_attr_t attr) +{ + return attr & INAT_VARIANT; +} + +static inline int inat_accept_vex(insn_attr_t attr) +{ + return attr & INAT_VEXOK; +} + +static inline int inat_must_vex(insn_attr_t attr) +{ + return attr & (INAT_VEXONLY | INAT_EVEXONLY); +} + +static inline int inat_must_evex(insn_attr_t attr) +{ + return attr & INAT_EVEXONLY; +} +#endif diff --git a/lib/x86/insn/inat_types.h b/lib/x86/insn/inat_types.h new file mode 100644 index 00000000..817c0b23 --- /dev/null +++ b/lib/x86/insn/inat_types.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _ASM_X86_INAT_TYPES_H +#define _ASM_X86_INAT_TYPES_H +/* + * x86 instruction attributes + * + * Written by Masami Hiramatsu + * + * kvm-unit-tests origin: Linux@e8c39d0f57f + * tools/arch/x86/include/asm/inat_types.h + */ + +/* Instruction attributes */ +typedef unsigned int insn_attr_t; +typedef unsigned char insn_byte_t; +typedef signed int insn_value_t; + +#endif diff --git a/lib/x86/insn/insn.c b/lib/x86/insn/insn.c new file mode 100644 index 00000000..d66367be --- /dev/null +++ b/lib/x86/insn/insn.c @@ -0,0 +1,735 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * x86 instruction analysis + * + * Copyright (C) IBM Corporation, 2002, 2004, 2009 + * + * kvm-unit-tests origin: Linux@e8c39d0f57f + * tools/arch/x86/lib/insn.c + */ + +#include "x86/vm.h" + +#include "inat.h" +#include "insn.h" +#include "insn_glue.h" + +#define leXX_to_cpu(t, r) \ +({ \ + __typeof__(t) v; \ + switch (sizeof(t)) { \ + case 4: v = le32_to_cpu(r); break; \ + case 2: v = le16_to_cpu(r); break; \ + case 
1: v = r; break; \ + default: \ + BUILD_BUG(); break; \ + } \ + v; \ +}) + +/* Verify next sizeof(t) bytes can be on the same instruction */ +#define validate_next(t, insn, n) \ + ((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr) + +#define __get_next(t, insn) \ + ({ t r = get_unaligned((t *)(insn)->next_byte); (insn)->next_byte += sizeof(t); leXX_to_cpu(t, r); }) + +#define __peek_nbyte_next(t, insn, n) \ + ({ t r = get_unaligned((t *)(insn)->next_byte + n); leXX_to_cpu(t, r); }) + +#define get_next(t, insn) \ + ({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); }) + +#define peek_nbyte_next(t, insn, n) \ + ({ if (unlikely(!validate_next(t, insn, n))) goto err_out; __peek_nbyte_next(t, insn, n); }) + +#define peek_next(t, insn) peek_nbyte_next(t, insn, 0) + +/** + * insn_init() - initialize struct insn + * @insn: &struct insn to be initialized + * @kaddr: address (in kernel memory) of instruction (or copy thereof) + * @buf_len: length of the insn buffer at @kaddr + * @x86_64: !0 for 64-bit kernel or 64-bit app + */ +void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64) +{ + /* + * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid + * even if the input buffer is long enough to hold them. + */ + if (buf_len > MAX_INSN_SIZE) + buf_len = MAX_INSN_SIZE; + + memset(insn, 0, sizeof(*insn)); + insn->kaddr = kaddr; + insn->end_kaddr = kaddr + buf_len; + insn->next_byte = kaddr; + insn->x86_64 = x86_64; + insn->opnd_bytes = 4; + if (x86_64) + insn->addr_bytes = 8; + else + insn->addr_bytes = 4; +} + +static const insn_byte_t xen_prefix[] = { __XEN_EMULATE_PREFIX }; +static const insn_byte_t kvm_prefix[] = { __KVM_EMULATE_PREFIX }; + +static int __insn_get_emulate_prefix(struct insn *insn, + const insn_byte_t *prefix, size_t len) +{ + size_t i; + + for (i = 0; i < len; i++) { + if (peek_nbyte_next(insn_byte_t, insn, i) != prefix[i]) + goto err_out; + } + + insn->emulate_prefix_size = len; + insn->next_byte += len; + + return 1; + +err_out: + return 0; +} + +static void insn_get_emulate_prefix(struct insn *insn) +{ + if (__insn_get_emulate_prefix(insn, xen_prefix, sizeof(xen_prefix))) + return; + + __insn_get_emulate_prefix(insn, kvm_prefix, sizeof(kvm_prefix)); +} + +/** + * insn_get_prefixes - scan x86 instruction prefix bytes + * @insn: &struct insn containing instruction + * + * Populates the @insn->prefixes bitmap, and updates @insn->next_byte + * to point to the (first) opcode. No effect if @insn->prefixes.got + * is already set. 
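+ *
+ * Legacy prefixes are collected into @insn->prefixes.bytes[]; the last
+ * prefix seen is also kept in bytes[3], which is later used to select
+ * the prefix-dependent (SIMD) variant of an escaped opcode. REX and
+ * VEX/EVEX prefixes, when present, are decoded here as well.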
+ * + * * Returns: + * 0: on success + * < 0: on error + */ +int insn_get_prefixes(struct insn *insn) +{ + struct insn_field *prefixes = &insn->prefixes; + insn_attr_t attr; + insn_byte_t b, lb; + int i, nb; + + if (prefixes->got) + return 0; + + insn_get_emulate_prefix(insn); + + nb = 0; + lb = 0; + b = peek_next(insn_byte_t, insn); + attr = inat_get_opcode_attribute(b); + while (inat_is_legacy_prefix(attr)) { + /* Skip if same prefix */ + for (i = 0; i < nb; i++) + if (prefixes->bytes[i] == b) + goto found; + if (nb == 4) + /* Invalid instruction */ + break; + prefixes->bytes[nb++] = b; + if (inat_is_address_size_prefix(attr)) { + /* address size switches 2/4 or 4/8 */ + if (insn->x86_64) + insn->addr_bytes ^= 12; + else + insn->addr_bytes ^= 6; + } else if (inat_is_operand_size_prefix(attr)) { + /* oprand size switches 2/4 */ + insn->opnd_bytes ^= 6; + } +found: + prefixes->nbytes++; + insn->next_byte++; + lb = b; + b = peek_next(insn_byte_t, insn); + attr = inat_get_opcode_attribute(b); + } + /* Set the last prefix */ + if (lb && lb != insn->prefixes.bytes[3]) { + if (unlikely(insn->prefixes.bytes[3])) { + /* Swap the last prefix */ + b = insn->prefixes.bytes[3]; + for (i = 0; i < nb; i++) + if (prefixes->bytes[i] == lb) + insn_set_byte(prefixes, i, b); + } + insn_set_byte(&insn->prefixes, 3, lb); + } + + /* Decode REX prefix */ + if (insn->x86_64) { + b = peek_next(insn_byte_t, insn); + attr = inat_get_opcode_attribute(b); + if (inat_is_rex_prefix(attr)) { + insn_field_set(&insn->rex_prefix, b, 1); + insn->next_byte++; + if (X86_REX_W(b)) + /* REX.W overrides opnd_size */ + insn->opnd_bytes = 8; + } + } + insn->rex_prefix.got = 1; + + /* Decode VEX prefix */ + b = peek_next(insn_byte_t, insn); + attr = inat_get_opcode_attribute(b); + if (inat_is_vex_prefix(attr)) { + insn_byte_t b2 = peek_nbyte_next(insn_byte_t, insn, 1); + if (!insn->x86_64) { + /* + * In 32-bits mode, if the [7:6] bits (mod bits of + * ModRM) on the second byte are not 11b, it is + * LDS or LES or BOUND. + */ + if (X86_MODRM_MOD(b2) != 3) + goto vex_end; + } + insn_set_byte(&insn->vex_prefix, 0, b); + insn_set_byte(&insn->vex_prefix, 1, b2); + if (inat_is_evex_prefix(attr)) { + b2 = peek_nbyte_next(insn_byte_t, insn, 2); + insn_set_byte(&insn->vex_prefix, 2, b2); + b2 = peek_nbyte_next(insn_byte_t, insn, 3); + insn_set_byte(&insn->vex_prefix, 3, b2); + insn->vex_prefix.nbytes = 4; + insn->next_byte += 4; + if (insn->x86_64 && X86_VEX_W(b2)) + /* VEX.W overrides opnd_size */ + insn->opnd_bytes = 8; + } else if (inat_is_vex3_prefix(attr)) { + b2 = peek_nbyte_next(insn_byte_t, insn, 2); + insn_set_byte(&insn->vex_prefix, 2, b2); + insn->vex_prefix.nbytes = 3; + insn->next_byte += 3; + if (insn->x86_64 && X86_VEX_W(b2)) + /* VEX.W overrides opnd_size */ + insn->opnd_bytes = 8; + } else { + /* + * For VEX2, fake VEX3-like byte#2. + * Makes it easier to decode vex.W, vex.vvvv, + * vex.L and vex.pp. Masking with 0x7f sets vex.W == 0. + */ + insn_set_byte(&insn->vex_prefix, 2, b2 & 0x7f); + insn->vex_prefix.nbytes = 2; + insn->next_byte += 2; + } + } +vex_end: + insn->vex_prefix.got = 1; + + prefixes->got = 1; + + return 0; + +err_out: + return -ENODATA; +} + +/** + * insn_get_opcode - collect opcode(s) + * @insn: &struct insn containing instruction + * + * Populates @insn->opcode, updates @insn->next_byte to point past the + * opcode byte(s), and set @insn->attr (except for groups). + * If necessary, first collects any preceding (prefix) bytes. + * Sets @insn->opcode.value = opcode1. 
No effect if @insn->opcode.got + * is already 1. + * + * Returns: + * 0: on success + * < 0: on error + */ +int insn_get_opcode(struct insn *insn) +{ + struct insn_field *opcode = &insn->opcode; + int pfx_id, ret; + insn_byte_t op; + + if (opcode->got) + return 0; + + ret = insn_get_prefixes(insn); + if (ret) + return ret; + + /* Get first opcode */ + op = get_next(insn_byte_t, insn); + insn_set_byte(opcode, 0, op); + opcode->nbytes = 1; + + /* Check if there is VEX prefix or not */ + if (insn_is_avx(insn)) { + insn_byte_t m, p; + m = insn_vex_m_bits(insn); + p = insn_vex_p_bits(insn); + insn->attr = inat_get_avx_attribute(op, m, p); + if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) || + (!inat_accept_vex(insn->attr) && + !inat_is_group(insn->attr))) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } + /* VEX has only 1 byte for opcode */ + goto end; + } + + insn->attr = inat_get_opcode_attribute(op); + while (inat_is_escape(insn->attr)) { + /* Get escaped opcode */ + op = get_next(insn_byte_t, insn); + opcode->bytes[opcode->nbytes++] = op; + pfx_id = insn_last_prefix_id(insn); + insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr); + } + + if (inat_must_vex(insn->attr)) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } +end: + opcode->got = 1; + return 0; + +err_out: + return -ENODATA; +} + +/** + * insn_get_modrm - collect ModRM byte, if any + * @insn: &struct insn containing instruction + * + * Populates @insn->modrm and updates @insn->next_byte to point past the + * ModRM byte, if any. If necessary, first collects the preceding bytes + * (prefixes and opcode(s)). No effect if @insn->modrm.got is already 1. + * + * Returns: + * 0: on success + * < 0: on error + */ +int insn_get_modrm(struct insn *insn) +{ + struct insn_field *modrm = &insn->modrm; + insn_byte_t pfx_id, mod; + int ret; + + if (modrm->got) + return 0; + + ret = insn_get_opcode(insn); + if (ret) + return ret; + + if (inat_has_modrm(insn->attr)) { + mod = get_next(insn_byte_t, insn); + insn_field_set(modrm, mod, 1); + if (inat_is_group(insn->attr)) { + pfx_id = insn_last_prefix_id(insn); + insn->attr = inat_get_group_attribute(mod, pfx_id, + insn->attr); + if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) { + /* Bad insn */ + insn->attr = 0; + return -EINVAL; + } + } + } + + if (insn->x86_64 && inat_is_force64(insn->attr)) + insn->opnd_bytes = 8; + + modrm->got = 1; + return 0; + +err_out: + return -ENODATA; +} + + +/** + * insn_rip_relative() - Does instruction use RIP-relative addressing mode? + * @insn: &struct insn containing instruction + * + * If necessary, first collects the instruction up to and including the + * ModRM byte. No effect if @insn->x86_64 is 0. + */ +int insn_rip_relative(struct insn *insn) +{ + struct insn_field *modrm = &insn->modrm; + int ret; + + if (!insn->x86_64) + return 0; + + ret = insn_get_modrm(insn); + if (ret) + return 0; + /* + * For rip-relative instructions, the mod field (top 2 bits) + * is zero and the r/m field (bottom 3 bits) is 0x5. + */ + return (modrm->nbytes && (modrm->bytes[0] & 0xc7) == 0x5); +} + +/** + * insn_get_sib() - Get the SIB byte of instruction + * @insn: &struct insn containing instruction + * + * If necessary, first collects the instruction up to and including the + * ModRM byte. + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. 
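+ *
+ * A SIB byte is present only when the address size is not 16-bit, the
+ * ModRM mod field is not 3 and the ModRM r/m field is 4; otherwise
+ * @insn->sib stays empty and is merely marked as decoded.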
+ */ +int insn_get_sib(struct insn *insn) +{ + insn_byte_t modrm; + int ret; + + if (insn->sib.got) + return 0; + + ret = insn_get_modrm(insn); + if (ret) + return ret; + + if (insn->modrm.nbytes) { + modrm = insn->modrm.bytes[0]; + if (insn->addr_bytes != 2 && + X86_MODRM_MOD(modrm) != 3 && X86_MODRM_RM(modrm) == 4) { + insn_field_set(&insn->sib, + get_next(insn_byte_t, insn), 1); + } + } + insn->sib.got = 1; + + return 0; + +err_out: + return -ENODATA; +} + + +/** + * insn_get_displacement() - Get the displacement of instruction + * @insn: &struct insn containing instruction + * + * If necessary, first collects the instruction up to and including the + * SIB byte. + * Displacement value is sign-expanded. + * + * * Returns: + * 0: if decoding succeeded + * < 0: otherwise. + */ +int insn_get_displacement(struct insn *insn) +{ + insn_byte_t mod, rm, base; + int ret; + + if (insn->displacement.got) + return 0; + + ret = insn_get_sib(insn); + if (ret) + return ret; + + if (insn->modrm.nbytes) { + /* + * Interpreting the modrm byte: + * mod = 00 - no displacement fields (exceptions below) + * mod = 01 - 1-byte displacement field + * mod = 10 - displacement field is 4 bytes, or 2 bytes if + * address size = 2 (0x67 prefix in 32-bit mode) + * mod = 11 - no memory operand + * + * If address size = 2... + * mod = 00, r/m = 110 - displacement field is 2 bytes + * + * If address size != 2... + * mod != 11, r/m = 100 - SIB byte exists + * mod = 00, SIB base = 101 - displacement field is 4 bytes + * mod = 00, r/m = 101 - rip-relative addressing, displacement + * field is 4 bytes + */ + mod = X86_MODRM_MOD(insn->modrm.value); + rm = X86_MODRM_RM(insn->modrm.value); + base = X86_SIB_BASE(insn->sib.value); + if (mod == 3) + goto out; + if (mod == 1) { + insn_field_set(&insn->displacement, + get_next(signed char, insn), 1); + } else if (insn->addr_bytes == 2) { + if ((mod == 0 && rm == 6) || mod == 2) { + insn_field_set(&insn->displacement, + get_next(short, insn), 2); + } + } else { + if ((mod == 0 && rm == 5) || mod == 2 || + (mod == 0 && base == 5)) { + insn_field_set(&insn->displacement, + get_next(int, insn), 4); + } + } + } +out: + insn->displacement.got = 1; + return 0; + +err_out: + return -ENODATA; +} + +/* Decode moffset16/32/64. Return 0 if failed */ +static int __get_moffset(struct insn *insn) +{ + switch (insn->addr_bytes) { + case 2: + insn_field_set(&insn->moffset1, get_next(short, insn), 2); + break; + case 4: + insn_field_set(&insn->moffset1, get_next(int, insn), 4); + break; + case 8: + insn_field_set(&insn->moffset1, get_next(int, insn), 4); + insn_field_set(&insn->moffset2, get_next(int, insn), 4); + break; + default: /* opnd_bytes must be modified manually */ + goto err_out; + } + insn->moffset1.got = insn->moffset2.got = 1; + + return 1; + +err_out: + return 0; +} + +/* Decode imm v32(Iz). 
Return 0 if failed */ +static int __get_immv32(struct insn *insn) +{ + switch (insn->opnd_bytes) { + case 2: + insn_field_set(&insn->immediate, get_next(short, insn), 2); + break; + case 4: + case 8: + insn_field_set(&insn->immediate, get_next(int, insn), 4); + break; + default: /* opnd_bytes must be modified manually */ + goto err_out; + } + + return 1; + +err_out: + return 0; +} + +/* Decode imm v64(Iv/Ov), Return 0 if failed */ +static int __get_immv(struct insn *insn) +{ + switch (insn->opnd_bytes) { + case 2: + insn_field_set(&insn->immediate1, get_next(short, insn), 2); + break; + case 4: + insn_field_set(&insn->immediate1, get_next(int, insn), 4); + insn->immediate1.nbytes = 4; + break; + case 8: + insn_field_set(&insn->immediate1, get_next(int, insn), 4); + insn_field_set(&insn->immediate2, get_next(int, insn), 4); + break; + default: /* opnd_bytes must be modified manually */ + goto err_out; + } + insn->immediate1.got = insn->immediate2.got = 1; + + return 1; +err_out: + return 0; +} + +/* Decode ptr16:16/32(Ap) */ +static int __get_immptr(struct insn *insn) +{ + switch (insn->opnd_bytes) { + case 2: + insn_field_set(&insn->immediate1, get_next(short, insn), 2); + break; + case 4: + insn_field_set(&insn->immediate1, get_next(int, insn), 4); + break; + case 8: + /* ptr16:64 is not exist (no segment) */ + return 0; + default: /* opnd_bytes must be modified manually */ + goto err_out; + } + insn_field_set(&insn->immediate2, get_next(unsigned short, insn), 2); + insn->immediate1.got = insn->immediate2.got = 1; + + return 1; +err_out: + return 0; +} + +/** + * insn_get_immediate() - Get the immediate in an instruction + * @insn: &struct insn containing instruction + * + * If necessary, first collects the instruction up to and including the + * displacement bytes. + * Basically, most of immediates are sign-expanded. Unsigned-value can be + * computed by bit masking with ((1 << (nbytes * 8)) - 1) + * + * Returns: + * 0: on success + * < 0: on error + */ +int insn_get_immediate(struct insn *insn) +{ + int ret; + + if (insn->immediate.got) + return 0; + + ret = insn_get_displacement(insn); + if (ret) + return ret; + + if (inat_has_moffset(insn->attr)) { + if (!__get_moffset(insn)) + goto err_out; + goto done; + } + + if (!inat_has_immediate(insn->attr)) + /* no immediates */ + goto done; + + switch (inat_immediate_size(insn->attr)) { + case INAT_IMM_BYTE: + insn_field_set(&insn->immediate, get_next(signed char, insn), 1); + break; + case INAT_IMM_WORD: + insn_field_set(&insn->immediate, get_next(short, insn), 2); + break; + case INAT_IMM_DWORD: + insn_field_set(&insn->immediate, get_next(int, insn), 4); + break; + case INAT_IMM_QWORD: + insn_field_set(&insn->immediate1, get_next(int, insn), 4); + insn_field_set(&insn->immediate2, get_next(int, insn), 4); + break; + case INAT_IMM_PTR: + if (!__get_immptr(insn)) + goto err_out; + break; + case INAT_IMM_VWORD32: + if (!__get_immv32(insn)) + goto err_out; + break; + case INAT_IMM_VWORD: + if (!__get_immv(insn)) + goto err_out; + break; + default: + /* Here, insn must have an immediate, but failed */ + goto err_out; + } + if (inat_has_second_immediate(insn->attr)) { + insn_field_set(&insn->immediate2, get_next(signed char, insn), 1); + } +done: + insn->immediate.got = 1; + return 0; + +err_out: + return -ENODATA; +} + +/** + * insn_get_length() - Get the length of instruction + * @insn: &struct insn containing instruction + * + * If necessary, first collects the instruction up to and including the + * immediates bytes. 
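+ * The resulting instruction length is stored in @insn->length.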
+ * + * Returns: + * - 0 on success + * - < 0 on error +*/ +int insn_get_length(struct insn *insn) +{ + int ret; + + if (insn->length) + return 0; + + ret = insn_get_immediate(insn); + if (ret) + return ret; + + insn->length = (unsigned char)((unsigned long)insn->next_byte + - (unsigned long)insn->kaddr); + + return 0; +} + +/* Ensure this instruction is decoded completely */ +static inline int insn_complete(struct insn *insn) +{ + return insn->opcode.got && insn->modrm.got && insn->sib.got && + insn->displacement.got && insn->immediate.got; +} + +/** + * insn_decode() - Decode an x86 instruction + * @insn: &struct insn to be initialized + * @kaddr: address (in kernel memory) of instruction (or copy thereof) + * @buf_len: length of the insn buffer at @kaddr + * @m: insn mode, see enum insn_mode + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. + */ +int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m) +{ + int ret; + +/* #define INSN_MODE_KERN -1 __ignore_sync_check__ mode is only valid in the kernel */ + + if (m == INSN_MODE_KERN) + insn_init(insn, kaddr, buf_len, IS_ENABLED(CONFIG_X86_64)); + else + insn_init(insn, kaddr, buf_len, m == INSN_MODE_64); + + ret = insn_get_length(insn); + if (ret) + return ret; + + if (insn_complete(insn)) + return 0; + + return -EINVAL; +} diff --git a/lib/x86/insn/insn.h b/lib/x86/insn/insn.h new file mode 100644 index 00000000..89866a5f --- /dev/null +++ b/lib/x86/insn/insn.h @@ -0,0 +1,279 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _ASM_X86_INSN_H +#define _ASM_X86_INSN_H +/* + * x86 instruction analysis + * + * Copyright (C) IBM Corporation, 2009 + * + * kvm-unit-tests origin: Linux@e8c39d0f57f + * tools/arch/x86/include/asm/insn.h + */ + +#include +/* insn_attr_t is defined in inat.h */ +#include "inat.h" /* __ignore_sync_check__ */ + +#if defined(__BYTE_ORDER) ? 
__BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN) + +struct insn_field { + union { + insn_value_t value; + insn_byte_t bytes[4]; + }; + /* !0 if we've run insn_get_xxx() for this field */ + unsigned char got; + unsigned char nbytes; +}; + +static inline void insn_field_set(struct insn_field *p, insn_value_t v, + unsigned char n) +{ + p->value = v; + p->nbytes = n; +} + +static inline void insn_set_byte(struct insn_field *p, unsigned char n, + insn_byte_t v) +{ + p->bytes[n] = v; +} + +#else + +struct insn_field { + insn_value_t value; + union { + insn_value_t little; + insn_byte_t bytes[4]; + }; + /* !0 if we've run insn_get_xxx() for this field */ + unsigned char got; + unsigned char nbytes; +}; + +static inline void insn_field_set(struct insn_field *p, insn_value_t v, + unsigned char n) +{ + p->value = v; + p->little = __cpu_to_le32(v); + p->nbytes = n; +} + +static inline void insn_set_byte(struct insn_field *p, unsigned char n, + insn_byte_t v) +{ + p->bytes[n] = v; + p->value = __le32_to_cpu(p->little); +} +#endif + +struct insn { + struct insn_field prefixes; /* + * Prefixes + * prefixes.bytes[3]: last prefix + */ + struct insn_field rex_prefix; /* REX prefix */ + struct insn_field vex_prefix; /* VEX prefix */ + struct insn_field opcode; /* + * opcode.bytes[0]: opcode1 + * opcode.bytes[1]: opcode2 + * opcode.bytes[2]: opcode3 + */ + struct insn_field modrm; + struct insn_field sib; + struct insn_field displacement; + union { + struct insn_field immediate; + struct insn_field moffset1; /* for 64bit MOV */ + struct insn_field immediate1; /* for 64bit imm or off16/32 */ + }; + union { + struct insn_field moffset2; /* for 64bit MOV */ + struct insn_field immediate2; /* for 64bit imm or seg16 */ + }; + + int emulate_prefix_size; + insn_attr_t attr; + unsigned char opnd_bytes; + unsigned char addr_bytes; + unsigned char length; + unsigned char x86_64; + + const insn_byte_t *kaddr; /* kernel address of insn to analyze */ + const insn_byte_t *end_kaddr; /* kernel address of last insn in buffer */ + const insn_byte_t *next_byte; +}; + +#define MAX_INSN_SIZE 15 + +#define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6) +#define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3) +#define X86_MODRM_RM(modrm) ((modrm) & 0x07) + +#define X86_SIB_SCALE(sib) (((sib) & 0xc0) >> 6) +#define X86_SIB_INDEX(sib) (((sib) & 0x38) >> 3) +#define X86_SIB_BASE(sib) ((sib) & 0x07) + +#define X86_REX_W(rex) ((rex) & 8) +#define X86_REX_R(rex) ((rex) & 4) +#define X86_REX_X(rex) ((rex) & 2) +#define X86_REX_B(rex) ((rex) & 1) + +/* VEX bit flags */ +#define X86_VEX_W(vex) ((vex) & 0x80) /* VEX3 Byte2 */ +#define X86_VEX_R(vex) ((vex) & 0x80) /* VEX2/3 Byte1 */ +#define X86_VEX_X(vex) ((vex) & 0x40) /* VEX3 Byte1 */ +#define X86_VEX_B(vex) ((vex) & 0x20) /* VEX3 Byte1 */ +#define X86_VEX_L(vex) ((vex) & 0x04) /* VEX3 Byte2, VEX2 Byte1 */ +/* VEX bit fields */ +#define X86_EVEX_M(vex) ((vex) & 0x07) /* EVEX Byte1 */ +#define X86_VEX3_M(vex) ((vex) & 0x1f) /* VEX3 Byte1 */ +#define X86_VEX2_M 1 /* VEX2.M always 1 */ +#define X86_VEX_V(vex) (((vex) & 0x78) >> 3) /* VEX3 Byte2, VEX2 Byte1 */ +#define X86_VEX_P(vex) ((vex) & 0x03) /* VEX3 Byte2, VEX2 Byte1 */ +#define X86_VEX_M_MAX 0x1f /* VEX3.M Maximum value */ + +extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64); +extern int insn_get_prefixes(struct insn *insn); +extern int insn_get_opcode(struct insn *insn); +extern int insn_get_modrm(struct insn *insn); +extern int insn_get_sib(struct insn *insn); +extern int 
insn_get_displacement(struct insn *insn); +extern int insn_get_immediate(struct insn *insn); +extern int insn_get_length(struct insn *insn); + +enum insn_mode { + INSN_MODE_32, + INSN_MODE_64, + /* Mode is determined by the current kernel build. */ + INSN_MODE_KERN, + INSN_NUM_MODES, +}; + +extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m); + +#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN) + +/* Attribute will be determined after getting ModRM (for opcode groups) */ +static inline void insn_get_attribute(struct insn *insn) +{ + insn_get_modrm(insn); +} + +/* Instruction uses RIP-relative addressing */ +extern int insn_rip_relative(struct insn *insn); + +static inline int insn_is_avx(struct insn *insn) +{ + if (!insn->prefixes.got) + insn_get_prefixes(insn); + return (insn->vex_prefix.value != 0); +} + +static inline int insn_is_evex(struct insn *insn) +{ + if (!insn->prefixes.got) + insn_get_prefixes(insn); + return (insn->vex_prefix.nbytes == 4); +} + +static inline int insn_has_emulate_prefix(struct insn *insn) +{ + return !!insn->emulate_prefix_size; +} + +static inline insn_byte_t insn_vex_m_bits(struct insn *insn) +{ + if (insn->vex_prefix.nbytes == 2) /* 2 bytes VEX */ + return X86_VEX2_M; + else if (insn->vex_prefix.nbytes == 3) /* 3 bytes VEX */ + return X86_VEX3_M(insn->vex_prefix.bytes[1]); + else /* EVEX */ + return X86_EVEX_M(insn->vex_prefix.bytes[1]); +} + +static inline insn_byte_t insn_vex_p_bits(struct insn *insn) +{ + if (insn->vex_prefix.nbytes == 2) /* 2 bytes VEX */ + return X86_VEX_P(insn->vex_prefix.bytes[1]); + else + return X86_VEX_P(insn->vex_prefix.bytes[2]); +} + +/* Get the last prefix id from last prefix or VEX prefix */ +static inline int insn_last_prefix_id(struct insn *insn) +{ + if (insn_is_avx(insn)) + return insn_vex_p_bits(insn); /* VEX_p is a SIMD prefix id */ + + if (insn->prefixes.bytes[3]) + return inat_get_last_prefix_id(insn->prefixes.bytes[3]); + + return 0; +} + +/* Offset of each field from kaddr */ +static inline int insn_offset_rex_prefix(struct insn *insn) +{ + return insn->prefixes.nbytes; +} +static inline int insn_offset_vex_prefix(struct insn *insn) +{ + return insn_offset_rex_prefix(insn) + insn->rex_prefix.nbytes; +} +static inline int insn_offset_opcode(struct insn *insn) +{ + return insn_offset_vex_prefix(insn) + insn->vex_prefix.nbytes; +} +static inline int insn_offset_modrm(struct insn *insn) +{ + return insn_offset_opcode(insn) + insn->opcode.nbytes; +} +static inline int insn_offset_sib(struct insn *insn) +{ + return insn_offset_modrm(insn) + insn->modrm.nbytes; +} +static inline int insn_offset_displacement(struct insn *insn) +{ + return insn_offset_sib(insn) + insn->sib.nbytes; +} +static inline int insn_offset_immediate(struct insn *insn) +{ + return insn_offset_displacement(insn) + insn->displacement.nbytes; +} + +/** + * for_each_insn_prefix() -- Iterate prefixes in the instruction + * @insn: Pointer to struct insn. + * @idx: Index storage. + * @prefix: Prefix byte. + * + * Iterate prefix bytes of given @insn. Each prefix byte is stored in @prefix + * and the index is stored in @idx (note that this @idx is just for a cursor, + * do not change it.) + * Since prefixes.nbytes can be bigger than 4 if some prefixes + * are repeated, it cannot be used for looping over the prefixes. 
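+ *
+ * Minimal usage sketch (illustrative only, assuming the prefixes have
+ * already been decoded with insn_get_prefixes()):
+ *
+ *	insn_byte_t prefix;
+ *	int idx;
+ *	bool has_opnd_size_prefix = false;
+ *
+ *	for_each_insn_prefix(insn, idx, prefix)
+ *		if (prefix == 0x66)
+ *			has_opnd_size_prefix = true;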
+ */ +#define for_each_insn_prefix(insn, idx, prefix) \ + for (idx = 0; idx < ARRAY_SIZE(insn->prefixes.bytes) && (prefix = insn->prefixes.bytes[idx]) != 0; idx++) + +#define POP_SS_OPCODE 0x1f +#define MOV_SREG_OPCODE 0x8e + +/* + * Intel SDM Vol.3A 6.8.3 states; + * "Any single-step trap that would be delivered following the MOV to SS + * instruction or POP to SS instruction (because EFLAGS.TF is 1) is + * suppressed." + * This function returns true if @insn is MOV SS or POP SS. On these + * instructions, single stepping is suppressed. + */ +static inline int insn_masking_exception(struct insn *insn) +{ + return insn->opcode.bytes[0] == POP_SS_OPCODE || + (insn->opcode.bytes[0] == MOV_SREG_OPCODE && + X86_MODRM_REG(insn->modrm.bytes[0]) == 2); +} + +#endif /* _ASM_X86_INSN_H */ diff --git a/lib/x86/insn/insn_glue.h b/lib/x86/insn/insn_glue.h new file mode 100644 index 00000000..bbbbcab3 --- /dev/null +++ b/lib/x86/insn/insn_glue.h @@ -0,0 +1,32 @@ +/* + * Necessary definitions from Linux to adapt the insn decoder for + * kvm-unit-tests. + * + * SPDX-License-Identifier: GPL-2.0 + */ + +/** + * BUILD_BUG - no-op. + */ +#define BUILD_BUG() + +/* + * Virt escape sequences to trigger instruction emulation; + * ideally these would decode to 'whole' instruction and not destroy + * the instruction stream; sadly this is not true for the 'kvm' one :/ + */ +#define __XEN_EMULATE_PREFIX 0x0f,0x0b,0x78,0x65,0x6e /* ud2 ; .ascii "xen" */ +#define __KVM_EMULATE_PREFIX 0x0f,0x0b,0x6b,0x76,0x6d /* ud2 ; .ascii "kvm" */ + +# define __packed __attribute__((__packed__)) +#define __get_unaligned_t(type, ptr) ({ \ + const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ + __pptr->x; \ +}) +#define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr)) + +#define EINVAL 22 /* Invalid argument */ +#define ENODATA 61 /* No data available */ + +#define CONFIG_X86_64 1 +#define IS_ENABLED(option) 1 diff --git a/lib/x86/insn/x86-opcode-map.txt b/lib/x86/insn/x86-opcode-map.txt new file mode 100644 index 00000000..fb8615ce --- /dev/null +++ b/lib/x86/insn/x86-opcode-map.txt @@ -0,0 +1,1191 @@ +# x86 Opcode Maps +# +# This is (mostly) based on following documentations. +# - Intel(R) 64 and IA-32 Architectures Software Developer's Manual Vol.2C +# (#326018-047US, June 2013) +# +# kvm-unit-tests origin: Linux@e8c39d0f57f +# arch/x86/lib/x86-opcode-map.txt +# +# Table: table-name +# Referrer: escaped-name +# AVXcode: avx-code +# opcode: mnemonic|GrpXXX [operand1[,operand2...]] [(extra1)[,(extra2)...] [| 2nd-mnemonic ...] +# (or) +# opcode: escape # escaped-name +# EndTable +# +# mnemonics that begin with lowercase 'v' accept a VEX or EVEX prefix +# mnemonics that begin with lowercase 'k' accept a VEX prefix +# +# +# GrpTable: GrpXXX +# reg: mnemonic [operand1[,operand2...]] [(extra1)[,(extra2)...] [| 2nd-mnemonic ...] +# EndTable +# +# AVX Superscripts +# (ev): this opcode requires EVEX prefix. +# (evo): this opcode is changed by EVEX prefix (EVEX opcode) +# (v): this opcode requires VEX prefix. +# (v1): this opcode only supports 128bit VEX. +# +# Last Prefix Superscripts +# - (66): the last prefix is 0x66 +# - (F3): the last prefix is 0xF3 +# - (F2): the last prefix is 0xF2 +# - (!F3) : the last prefix is not 0xF3 (including non-last prefix case) +# - (66&F2): Both 0x66 and 0xF2 prefixes are specified. 
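+#
+# Worked example (illustrative note): the 2-byte map entry
+#   58: vaddps Vps,Hps,Wps | vaddpd Vpd,Hpd,Wpd (66) | vaddss Vss,Hss,Wss (F3),(v1) | vaddsd Vsd,Hsd,Wsd (F2),(v1)
+# decodes 0x0f 0x58 as vaddps when no mandatory prefix is in effect,
+# as vaddpd when the last prefix is 0x66, as vaddss with 0xF3 and as
+# vaddsd with 0xF2; the (v1) forms accept only 128-bit VEX encodings.
+# The decoder selects the variant via insn_last_prefix_id() and
+# inat_get_escape_attribute().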
+ +Table: one byte opcode +Referrer: +AVXcode: +# 0x00 - 0x0f +00: ADD Eb,Gb +01: ADD Ev,Gv +02: ADD Gb,Eb +03: ADD Gv,Ev +04: ADD AL,Ib +05: ADD rAX,Iz +06: PUSH ES (i64) +07: POP ES (i64) +08: OR Eb,Gb +09: OR Ev,Gv +0a: OR Gb,Eb +0b: OR Gv,Ev +0c: OR AL,Ib +0d: OR rAX,Iz +0e: PUSH CS (i64) +0f: escape # 2-byte escape +# 0x10 - 0x1f +10: ADC Eb,Gb +11: ADC Ev,Gv +12: ADC Gb,Eb +13: ADC Gv,Ev +14: ADC AL,Ib +15: ADC rAX,Iz +16: PUSH SS (i64) +17: POP SS (i64) +18: SBB Eb,Gb +19: SBB Ev,Gv +1a: SBB Gb,Eb +1b: SBB Gv,Ev +1c: SBB AL,Ib +1d: SBB rAX,Iz +1e: PUSH DS (i64) +1f: POP DS (i64) +# 0x20 - 0x2f +20: AND Eb,Gb +21: AND Ev,Gv +22: AND Gb,Eb +23: AND Gv,Ev +24: AND AL,Ib +25: AND rAx,Iz +26: SEG=ES (Prefix) +27: DAA (i64) +28: SUB Eb,Gb +29: SUB Ev,Gv +2a: SUB Gb,Eb +2b: SUB Gv,Ev +2c: SUB AL,Ib +2d: SUB rAX,Iz +2e: SEG=CS (Prefix) +2f: DAS (i64) +# 0x30 - 0x3f +30: XOR Eb,Gb +31: XOR Ev,Gv +32: XOR Gb,Eb +33: XOR Gv,Ev +34: XOR AL,Ib +35: XOR rAX,Iz +36: SEG=SS (Prefix) +37: AAA (i64) +38: CMP Eb,Gb +39: CMP Ev,Gv +3a: CMP Gb,Eb +3b: CMP Gv,Ev +3c: CMP AL,Ib +3d: CMP rAX,Iz +3e: SEG=DS (Prefix) +3f: AAS (i64) +# 0x40 - 0x4f +40: INC eAX (i64) | REX (o64) +41: INC eCX (i64) | REX.B (o64) +42: INC eDX (i64) | REX.X (o64) +43: INC eBX (i64) | REX.XB (o64) +44: INC eSP (i64) | REX.R (o64) +45: INC eBP (i64) | REX.RB (o64) +46: INC eSI (i64) | REX.RX (o64) +47: INC eDI (i64) | REX.RXB (o64) +48: DEC eAX (i64) | REX.W (o64) +49: DEC eCX (i64) | REX.WB (o64) +4a: DEC eDX (i64) | REX.WX (o64) +4b: DEC eBX (i64) | REX.WXB (o64) +4c: DEC eSP (i64) | REX.WR (o64) +4d: DEC eBP (i64) | REX.WRB (o64) +4e: DEC eSI (i64) | REX.WRX (o64) +4f: DEC eDI (i64) | REX.WRXB (o64) +# 0x50 - 0x5f +50: PUSH rAX/r8 (d64) +51: PUSH rCX/r9 (d64) +52: PUSH rDX/r10 (d64) +53: PUSH rBX/r11 (d64) +54: PUSH rSP/r12 (d64) +55: PUSH rBP/r13 (d64) +56: PUSH rSI/r14 (d64) +57: PUSH rDI/r15 (d64) +58: POP rAX/r8 (d64) +59: POP rCX/r9 (d64) +5a: POP rDX/r10 (d64) +5b: POP rBX/r11 (d64) +5c: POP rSP/r12 (d64) +5d: POP rBP/r13 (d64) +5e: POP rSI/r14 (d64) +5f: POP rDI/r15 (d64) +# 0x60 - 0x6f +60: PUSHA/PUSHAD (i64) +61: POPA/POPAD (i64) +62: BOUND Gv,Ma (i64) | EVEX (Prefix) +63: ARPL Ew,Gw (i64) | MOVSXD Gv,Ev (o64) +64: SEG=FS (Prefix) +65: SEG=GS (Prefix) +66: Operand-Size (Prefix) +67: Address-Size (Prefix) +68: PUSH Iz (d64) +69: IMUL Gv,Ev,Iz +6a: PUSH Ib (d64) +6b: IMUL Gv,Ev,Ib +6c: INS/INSB Yb,DX +6d: INS/INSW/INSD Yz,DX +6e: OUTS/OUTSB DX,Xb +6f: OUTS/OUTSW/OUTSD DX,Xz +# 0x70 - 0x7f +70: JO Jb +71: JNO Jb +72: JB/JNAE/JC Jb +73: JNB/JAE/JNC Jb +74: JZ/JE Jb +75: JNZ/JNE Jb +76: JBE/JNA Jb +77: JNBE/JA Jb +78: JS Jb +79: JNS Jb +7a: JP/JPE Jb +7b: JNP/JPO Jb +7c: JL/JNGE Jb +7d: JNL/JGE Jb +7e: JLE/JNG Jb +7f: JNLE/JG Jb +# 0x80 - 0x8f +80: Grp1 Eb,Ib (1A) +81: Grp1 Ev,Iz (1A) +82: Grp1 Eb,Ib (1A),(i64) +83: Grp1 Ev,Ib (1A) +84: TEST Eb,Gb +85: TEST Ev,Gv +86: XCHG Eb,Gb +87: XCHG Ev,Gv +88: MOV Eb,Gb +89: MOV Ev,Gv +8a: MOV Gb,Eb +8b: MOV Gv,Ev +8c: MOV Ev,Sw +8d: LEA Gv,M +8e: MOV Sw,Ew +8f: Grp1A (1A) | POP Ev (d64) +# 0x90 - 0x9f +90: NOP | PAUSE (F3) | XCHG r8,rAX +91: XCHG rCX/r9,rAX +92: XCHG rDX/r10,rAX +93: XCHG rBX/r11,rAX +94: XCHG rSP/r12,rAX +95: XCHG rBP/r13,rAX +96: XCHG rSI/r14,rAX +97: XCHG rDI/r15,rAX +98: CBW/CWDE/CDQE +99: CWD/CDQ/CQO +9a: CALLF Ap (i64) +9b: FWAIT/WAIT +9c: PUSHF/D/Q Fv (d64) +9d: POPF/D/Q Fv (d64) +9e: SAHF +9f: LAHF +# 0xa0 - 0xaf +a0: MOV AL,Ob +a1: MOV rAX,Ov +a2: MOV Ob,AL +a3: MOV Ov,rAX +a4: MOVS/B Yb,Xb +a5: MOVS/W/D/Q Yv,Xv +a6: CMPS/B Xb,Yb +a7: CMPS/W/D Xv,Yv +a8: TEST 
AL,Ib +a9: TEST rAX,Iz +aa: STOS/B Yb,AL +ab: STOS/W/D/Q Yv,rAX +ac: LODS/B AL,Xb +ad: LODS/W/D/Q rAX,Xv +ae: SCAS/B AL,Yb +# Note: The May 2011 Intel manual shows Xv for the second parameter of the +# next instruction but Yv is correct +af: SCAS/W/D/Q rAX,Yv +# 0xb0 - 0xbf +b0: MOV AL/R8L,Ib +b1: MOV CL/R9L,Ib +b2: MOV DL/R10L,Ib +b3: MOV BL/R11L,Ib +b4: MOV AH/R12L,Ib +b5: MOV CH/R13L,Ib +b6: MOV DH/R14L,Ib +b7: MOV BH/R15L,Ib +b8: MOV rAX/r8,Iv +b9: MOV rCX/r9,Iv +ba: MOV rDX/r10,Iv +bb: MOV rBX/r11,Iv +bc: MOV rSP/r12,Iv +bd: MOV rBP/r13,Iv +be: MOV rSI/r14,Iv +bf: MOV rDI/r15,Iv +# 0xc0 - 0xcf +c0: Grp2 Eb,Ib (1A) +c1: Grp2 Ev,Ib (1A) +c2: RETN Iw (f64) +c3: RETN +c4: LES Gz,Mp (i64) | VEX+2byte (Prefix) +c5: LDS Gz,Mp (i64) | VEX+1byte (Prefix) +c6: Grp11A Eb,Ib (1A) +c7: Grp11B Ev,Iz (1A) +c8: ENTER Iw,Ib +c9: LEAVE (d64) +ca: RETF Iw +cb: RETF +cc: INT3 +cd: INT Ib +ce: INTO (i64) +cf: IRET/D/Q +# 0xd0 - 0xdf +d0: Grp2 Eb,1 (1A) +d1: Grp2 Ev,1 (1A) +d2: Grp2 Eb,CL (1A) +d3: Grp2 Ev,CL (1A) +d4: AAM Ib (i64) +d5: AAD Ib (i64) +d6: +d7: XLAT/XLATB +d8: ESC +d9: ESC +da: ESC +db: ESC +dc: ESC +dd: ESC +de: ESC +df: ESC +# 0xe0 - 0xef +# Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix +# in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation +# to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD. +e0: LOOPNE/LOOPNZ Jb (f64) +e1: LOOPE/LOOPZ Jb (f64) +e2: LOOP Jb (f64) +e3: JrCXZ Jb (f64) +e4: IN AL,Ib +e5: IN eAX,Ib +e6: OUT Ib,AL +e7: OUT Ib,eAX +# With 0x66 prefix in 64-bit mode, for AMD CPUs immediate offset +# in "near" jumps and calls is 16-bit. For CALL, +# push of return address is 16-bit wide, RSP is decremented by 2 +# but is not truncated to 16 bits, unlike RIP. +e8: CALL Jz (f64) +e9: JMP-near Jz (f64) +ea: JMP-far Ap (i64) +eb: JMP-short Jb (f64) +ec: IN AL,DX +ed: IN eAX,DX +ee: OUT DX,AL +ef: OUT DX,eAX +# 0xf0 - 0xff +f0: LOCK (Prefix) +f1: +f2: REPNE (Prefix) | XACQUIRE (Prefix) +f3: REP/REPE (Prefix) | XRELEASE (Prefix) +f4: HLT +f5: CMC +f6: Grp3_1 Eb (1A) +f7: Grp3_2 Ev (1A) +f8: CLC +f9: STC +fa: CLI +fb: STI +fc: CLD +fd: STD +fe: Grp4 (1A) +ff: Grp5 (1A) +EndTable + +Table: 2-byte opcode (0x0f) +Referrer: 2-byte escape +AVXcode: 1 +# 0x0f 0x00-0x0f +00: Grp6 (1A) +01: Grp7 (1A) +02: LAR Gv,Ew +03: LSL Gv,Ew +04: +05: SYSCALL (o64) +06: CLTS +07: SYSRET (o64) +08: INVD +09: WBINVD | WBNOINVD (F3) +0a: +0b: UD2 (1B) +0c: +# AMD's prefetch group. Intel supports prefetchw(/1) only. +0d: GrpP +0e: FEMMS +# 3DNow! uses the last imm byte as opcode extension. +0f: 3DNow! Pq,Qq,Ib +# 0x0f 0x10-0x1f +# NOTE: According to Intel SDM opcode map, vmovups and vmovupd has no operands +# but it actually has operands. And also, vmovss and vmovsd only accept 128bit. +# MOVSS/MOVSD has too many forms(3) on SDM. This map just shows a typical form. 
+# Many AVX instructions lack v1 superscript, according to Intel AVX-Prgramming +# Reference A.1 +10: vmovups Vps,Wps | vmovupd Vpd,Wpd (66) | vmovss Vx,Hx,Wss (F3),(v1) | vmovsd Vx,Hx,Wsd (F2),(v1) +11: vmovups Wps,Vps | vmovupd Wpd,Vpd (66) | vmovss Wss,Hx,Vss (F3),(v1) | vmovsd Wsd,Hx,Vsd (F2),(v1) +12: vmovlps Vq,Hq,Mq (v1) | vmovhlps Vq,Hq,Uq (v1) | vmovlpd Vq,Hq,Mq (66),(v1) | vmovsldup Vx,Wx (F3) | vmovddup Vx,Wx (F2) +13: vmovlps Mq,Vq (v1) | vmovlpd Mq,Vq (66),(v1) +14: vunpcklps Vx,Hx,Wx | vunpcklpd Vx,Hx,Wx (66) +15: vunpckhps Vx,Hx,Wx | vunpckhpd Vx,Hx,Wx (66) +16: vmovhps Vdq,Hq,Mq (v1) | vmovlhps Vdq,Hq,Uq (v1) | vmovhpd Vdq,Hq,Mq (66),(v1) | vmovshdup Vx,Wx (F3) +17: vmovhps Mq,Vq (v1) | vmovhpd Mq,Vq (66),(v1) +18: Grp16 (1A) +19: +# Intel SDM opcode map does not list MPX instructions. For now using Gv for +# bnd registers and Ev for everything else is OK because the instruction +# decoder does not use the information except as an indication that there is +# a ModR/M byte. +1a: BNDCL Gv,Ev (F3) | BNDCU Gv,Ev (F2) | BNDMOV Gv,Ev (66) | BNDLDX Gv,Ev +1b: BNDCN Gv,Ev (F2) | BNDMOV Ev,Gv (66) | BNDMK Gv,Ev (F3) | BNDSTX Ev,Gv +1c: Grp20 (1A),(1C) +1d: +1e: Grp21 (1A) +1f: NOP Ev +# 0x0f 0x20-0x2f +20: MOV Rd,Cd +21: MOV Rd,Dd +22: MOV Cd,Rd +23: MOV Dd,Rd +24: +25: +26: +27: +28: vmovaps Vps,Wps | vmovapd Vpd,Wpd (66) +29: vmovaps Wps,Vps | vmovapd Wpd,Vpd (66) +2a: cvtpi2ps Vps,Qpi | cvtpi2pd Vpd,Qpi (66) | vcvtsi2ss Vss,Hss,Ey (F3),(v1) | vcvtsi2sd Vsd,Hsd,Ey (F2),(v1) +2b: vmovntps Mps,Vps | vmovntpd Mpd,Vpd (66) +2c: cvttps2pi Ppi,Wps | cvttpd2pi Ppi,Wpd (66) | vcvttss2si Gy,Wss (F3),(v1) | vcvttsd2si Gy,Wsd (F2),(v1) +2d: cvtps2pi Ppi,Wps | cvtpd2pi Qpi,Wpd (66) | vcvtss2si Gy,Wss (F3),(v1) | vcvtsd2si Gy,Wsd (F2),(v1) +2e: vucomiss Vss,Wss (v1) | vucomisd Vsd,Wsd (66),(v1) +2f: vcomiss Vss,Wss (v1) | vcomisd Vsd,Wsd (66),(v1) +# 0x0f 0x30-0x3f +30: WRMSR +31: RDTSC +32: RDMSR +33: RDPMC +34: SYSENTER +35: SYSEXIT +36: +37: GETSEC +38: escape # 3-byte escape 1 +39: +3a: escape # 3-byte escape 2 +3b: +3c: +3d: +3e: +3f: +# 0x0f 0x40-0x4f +40: CMOVO Gv,Ev +41: CMOVNO Gv,Ev | kandw/q Vk,Hk,Uk | kandb/d Vk,Hk,Uk (66) +42: CMOVB/C/NAE Gv,Ev | kandnw/q Vk,Hk,Uk | kandnb/d Vk,Hk,Uk (66) +43: CMOVAE/NB/NC Gv,Ev +44: CMOVE/Z Gv,Ev | knotw/q Vk,Uk | knotb/d Vk,Uk (66) +45: CMOVNE/NZ Gv,Ev | korw/q Vk,Hk,Uk | korb/d Vk,Hk,Uk (66) +46: CMOVBE/NA Gv,Ev | kxnorw/q Vk,Hk,Uk | kxnorb/d Vk,Hk,Uk (66) +47: CMOVA/NBE Gv,Ev | kxorw/q Vk,Hk,Uk | kxorb/d Vk,Hk,Uk (66) +48: CMOVS Gv,Ev +49: CMOVNS Gv,Ev +4a: CMOVP/PE Gv,Ev | kaddw/q Vk,Hk,Uk | kaddb/d Vk,Hk,Uk (66) +4b: CMOVNP/PO Gv,Ev | kunpckbw Vk,Hk,Uk (66) | kunpckwd/dq Vk,Hk,Uk +4c: CMOVL/NGE Gv,Ev +4d: CMOVNL/GE Gv,Ev +4e: CMOVLE/NG Gv,Ev +4f: CMOVNLE/G Gv,Ev +# 0x0f 0x50-0x5f +50: vmovmskps Gy,Ups | vmovmskpd Gy,Upd (66) +51: vsqrtps Vps,Wps | vsqrtpd Vpd,Wpd (66) | vsqrtss Vss,Hss,Wss (F3),(v1) | vsqrtsd Vsd,Hsd,Wsd (F2),(v1) +52: vrsqrtps Vps,Wps | vrsqrtss Vss,Hss,Wss (F3),(v1) +53: vrcpps Vps,Wps | vrcpss Vss,Hss,Wss (F3),(v1) +54: vandps Vps,Hps,Wps | vandpd Vpd,Hpd,Wpd (66) +55: vandnps Vps,Hps,Wps | vandnpd Vpd,Hpd,Wpd (66) +56: vorps Vps,Hps,Wps | vorpd Vpd,Hpd,Wpd (66) +57: vxorps Vps,Hps,Wps | vxorpd Vpd,Hpd,Wpd (66) +58: vaddps Vps,Hps,Wps | vaddpd Vpd,Hpd,Wpd (66) | vaddss Vss,Hss,Wss (F3),(v1) | vaddsd Vsd,Hsd,Wsd (F2),(v1) +59: vmulps Vps,Hps,Wps | vmulpd Vpd,Hpd,Wpd (66) | vmulss Vss,Hss,Wss (F3),(v1) | vmulsd Vsd,Hsd,Wsd (F2),(v1) +5a: vcvtps2pd Vpd,Wps | vcvtpd2ps Vps,Wpd (66) | vcvtss2sd Vsd,Hx,Wss (F3),(v1) | vcvtsd2ss 
Vss,Hx,Wsd (F2),(v1) +5b: vcvtdq2ps Vps,Wdq | vcvtqq2ps Vps,Wqq (evo) | vcvtps2dq Vdq,Wps (66) | vcvttps2dq Vdq,Wps (F3) +5c: vsubps Vps,Hps,Wps | vsubpd Vpd,Hpd,Wpd (66) | vsubss Vss,Hss,Wss (F3),(v1) | vsubsd Vsd,Hsd,Wsd (F2),(v1) +5d: vminps Vps,Hps,Wps | vminpd Vpd,Hpd,Wpd (66) | vminss Vss,Hss,Wss (F3),(v1) | vminsd Vsd,Hsd,Wsd (F2),(v1) +5e: vdivps Vps,Hps,Wps | vdivpd Vpd,Hpd,Wpd (66) | vdivss Vss,Hss,Wss (F3),(v1) | vdivsd Vsd,Hsd,Wsd (F2),(v1) +5f: vmaxps Vps,Hps,Wps | vmaxpd Vpd,Hpd,Wpd (66) | vmaxss Vss,Hss,Wss (F3),(v1) | vmaxsd Vsd,Hsd,Wsd (F2),(v1) +# 0x0f 0x60-0x6f +60: punpcklbw Pq,Qd | vpunpcklbw Vx,Hx,Wx (66),(v1) +61: punpcklwd Pq,Qd | vpunpcklwd Vx,Hx,Wx (66),(v1) +62: punpckldq Pq,Qd | vpunpckldq Vx,Hx,Wx (66),(v1) +63: packsswb Pq,Qq | vpacksswb Vx,Hx,Wx (66),(v1) +64: pcmpgtb Pq,Qq | vpcmpgtb Vx,Hx,Wx (66),(v1) +65: pcmpgtw Pq,Qq | vpcmpgtw Vx,Hx,Wx (66),(v1) +66: pcmpgtd Pq,Qq | vpcmpgtd Vx,Hx,Wx (66),(v1) +67: packuswb Pq,Qq | vpackuswb Vx,Hx,Wx (66),(v1) +68: punpckhbw Pq,Qd | vpunpckhbw Vx,Hx,Wx (66),(v1) +69: punpckhwd Pq,Qd | vpunpckhwd Vx,Hx,Wx (66),(v1) +6a: punpckhdq Pq,Qd | vpunpckhdq Vx,Hx,Wx (66),(v1) +6b: packssdw Pq,Qd | vpackssdw Vx,Hx,Wx (66),(v1) +6c: vpunpcklqdq Vx,Hx,Wx (66),(v1) +6d: vpunpckhqdq Vx,Hx,Wx (66),(v1) +6e: movd/q Pd,Ey | vmovd/q Vy,Ey (66),(v1) +6f: movq Pq,Qq | vmovdqa Vx,Wx (66) | vmovdqa32/64 Vx,Wx (66),(evo) | vmovdqu Vx,Wx (F3) | vmovdqu32/64 Vx,Wx (F3),(evo) | vmovdqu8/16 Vx,Wx (F2),(ev) +# 0x0f 0x70-0x7f +70: pshufw Pq,Qq,Ib | vpshufd Vx,Wx,Ib (66),(v1) | vpshufhw Vx,Wx,Ib (F3),(v1) | vpshuflw Vx,Wx,Ib (F2),(v1) +71: Grp12 (1A) +72: Grp13 (1A) +73: Grp14 (1A) +74: pcmpeqb Pq,Qq | vpcmpeqb Vx,Hx,Wx (66),(v1) +75: pcmpeqw Pq,Qq | vpcmpeqw Vx,Hx,Wx (66),(v1) +76: pcmpeqd Pq,Qq | vpcmpeqd Vx,Hx,Wx (66),(v1) +# Note: Remove (v), because vzeroall and vzeroupper becomes emms without VEX. +77: emms | vzeroupper | vzeroall +78: VMREAD Ey,Gy | vcvttps2udq/pd2udq Vx,Wpd (evo) | vcvttsd2usi Gv,Wx (F2),(ev) | vcvttss2usi Gv,Wx (F3),(ev) | vcvttps2uqq/pd2uqq Vx,Wx (66),(ev) +79: VMWRITE Gy,Ey | vcvtps2udq/pd2udq Vx,Wpd (evo) | vcvtsd2usi Gv,Wx (F2),(ev) | vcvtss2usi Gv,Wx (F3),(ev) | vcvtps2uqq/pd2uqq Vx,Wx (66),(ev) +7a: vcvtudq2pd/uqq2pd Vpd,Wx (F3),(ev) | vcvtudq2ps/uqq2ps Vpd,Wx (F2),(ev) | vcvttps2qq/pd2qq Vx,Wx (66),(ev) +7b: vcvtusi2sd Vpd,Hpd,Ev (F2),(ev) | vcvtusi2ss Vps,Hps,Ev (F3),(ev) | vcvtps2qq/pd2qq Vx,Wx (66),(ev) +7c: vhaddpd Vpd,Hpd,Wpd (66) | vhaddps Vps,Hps,Wps (F2) +7d: vhsubpd Vpd,Hpd,Wpd (66) | vhsubps Vps,Hps,Wps (F2) +7e: movd/q Ey,Pd | vmovd/q Ey,Vy (66),(v1) | vmovq Vq,Wq (F3),(v1) +7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev) +# 0x0f 0x80-0x8f +# Note: "forced64" is Intel CPU behavior (see comment about CALL insn). 
+80: JO Jz (f64) +81: JNO Jz (f64) +82: JB/JC/JNAE Jz (f64) +83: JAE/JNB/JNC Jz (f64) +84: JE/JZ Jz (f64) +85: JNE/JNZ Jz (f64) +86: JBE/JNA Jz (f64) +87: JA/JNBE Jz (f64) +88: JS Jz (f64) +89: JNS Jz (f64) +8a: JP/JPE Jz (f64) +8b: JNP/JPO Jz (f64) +8c: JL/JNGE Jz (f64) +8d: JNL/JGE Jz (f64) +8e: JLE/JNG Jz (f64) +8f: JNLE/JG Jz (f64) +# 0x0f 0x90-0x9f +90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66) +91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66) +92: SETB/C/NAE Eb | kmovw Vk,Rv | kmovb Vk,Rv (66) | kmovq/d Vk,Rv (F2) +93: SETAE/NB/NC Eb | kmovw Gv,Uk | kmovb Gv,Uk (66) | kmovq/d Gv,Uk (F2) +94: SETE/Z Eb +95: SETNE/NZ Eb +96: SETBE/NA Eb +97: SETA/NBE Eb +98: SETS Eb | kortestw/q Vk,Uk | kortestb/d Vk,Uk (66) +99: SETNS Eb | ktestw/q Vk,Uk | ktestb/d Vk,Uk (66) +9a: SETP/PE Eb +9b: SETNP/PO Eb +9c: SETL/NGE Eb +9d: SETNL/GE Eb +9e: SETLE/NG Eb +9f: SETNLE/G Eb +# 0x0f 0xa0-0xaf +a0: PUSH FS (d64) +a1: POP FS (d64) +a2: CPUID +a3: BT Ev,Gv +a4: SHLD Ev,Gv,Ib +a5: SHLD Ev,Gv,CL +a6: GrpPDLK +a7: GrpRNG +a8: PUSH GS (d64) +a9: POP GS (d64) +aa: RSM +ab: BTS Ev,Gv +ac: SHRD Ev,Gv,Ib +ad: SHRD Ev,Gv,CL +ae: Grp15 (1A),(1C) +af: IMUL Gv,Ev +# 0x0f 0xb0-0xbf +b0: CMPXCHG Eb,Gb +b1: CMPXCHG Ev,Gv +b2: LSS Gv,Mp +b3: BTR Ev,Gv +b4: LFS Gv,Mp +b5: LGS Gv,Mp +b6: MOVZX Gv,Eb +b7: MOVZX Gv,Ew +b8: JMPE (!F3) | POPCNT Gv,Ev (F3) +b9: Grp10 (1A) +ba: Grp8 Ev,Ib (1A) +bb: BTC Ev,Gv +bc: BSF Gv,Ev (!F3) | TZCNT Gv,Ev (F3) +bd: BSR Gv,Ev (!F3) | LZCNT Gv,Ev (F3) +be: MOVSX Gv,Eb +bf: MOVSX Gv,Ew +# 0x0f 0xc0-0xcf +c0: XADD Eb,Gb +c1: XADD Ev,Gv +c2: vcmpps Vps,Hps,Wps,Ib | vcmppd Vpd,Hpd,Wpd,Ib (66) | vcmpss Vss,Hss,Wss,Ib (F3),(v1) | vcmpsd Vsd,Hsd,Wsd,Ib (F2),(v1) +c3: movnti My,Gy +c4: pinsrw Pq,Ry/Mw,Ib | vpinsrw Vdq,Hdq,Ry/Mw,Ib (66),(v1) +c5: pextrw Gd,Nq,Ib | vpextrw Gd,Udq,Ib (66),(v1) +c6: vshufps Vps,Hps,Wps,Ib | vshufpd Vpd,Hpd,Wpd,Ib (66) +c7: Grp9 (1A) +c8: BSWAP RAX/EAX/R8/R8D +c9: BSWAP RCX/ECX/R9/R9D +ca: BSWAP RDX/EDX/R10/R10D +cb: BSWAP RBX/EBX/R11/R11D +cc: BSWAP RSP/ESP/R12/R12D +cd: BSWAP RBP/EBP/R13/R13D +ce: BSWAP RSI/ESI/R14/R14D +cf: BSWAP RDI/EDI/R15/R15D +# 0x0f 0xd0-0xdf +d0: vaddsubpd Vpd,Hpd,Wpd (66) | vaddsubps Vps,Hps,Wps (F2) +d1: psrlw Pq,Qq | vpsrlw Vx,Hx,Wx (66),(v1) +d2: psrld Pq,Qq | vpsrld Vx,Hx,Wx (66),(v1) +d3: psrlq Pq,Qq | vpsrlq Vx,Hx,Wx (66),(v1) +d4: paddq Pq,Qq | vpaddq Vx,Hx,Wx (66),(v1) +d5: pmullw Pq,Qq | vpmullw Vx,Hx,Wx (66),(v1) +d6: vmovq Wq,Vq (66),(v1) | movq2dq Vdq,Nq (F3) | movdq2q Pq,Uq (F2) +d7: pmovmskb Gd,Nq | vpmovmskb Gd,Ux (66),(v1) +d8: psubusb Pq,Qq | vpsubusb Vx,Hx,Wx (66),(v1) +d9: psubusw Pq,Qq | vpsubusw Vx,Hx,Wx (66),(v1) +da: pminub Pq,Qq | vpminub Vx,Hx,Wx (66),(v1) +db: pand Pq,Qq | vpand Vx,Hx,Wx (66),(v1) | vpandd/q Vx,Hx,Wx (66),(evo) +dc: paddusb Pq,Qq | vpaddusb Vx,Hx,Wx (66),(v1) +dd: paddusw Pq,Qq | vpaddusw Vx,Hx,Wx (66),(v1) +de: pmaxub Pq,Qq | vpmaxub Vx,Hx,Wx (66),(v1) +df: pandn Pq,Qq | vpandn Vx,Hx,Wx (66),(v1) | vpandnd/q Vx,Hx,Wx (66),(evo) +# 0x0f 0xe0-0xef +e0: pavgb Pq,Qq | vpavgb Vx,Hx,Wx (66),(v1) +e1: psraw Pq,Qq | vpsraw Vx,Hx,Wx (66),(v1) +e2: psrad Pq,Qq | vpsrad Vx,Hx,Wx (66),(v1) +e3: pavgw Pq,Qq | vpavgw Vx,Hx,Wx (66),(v1) +e4: pmulhuw Pq,Qq | vpmulhuw Vx,Hx,Wx (66),(v1) +e5: pmulhw Pq,Qq | vpmulhw Vx,Hx,Wx (66),(v1) +e6: vcvttpd2dq Vx,Wpd (66) | vcvtdq2pd Vx,Wdq (F3) | vcvtdq2pd/qq2pd Vx,Wdq (F3),(evo) | vcvtpd2dq Vx,Wpd (F2) +e7: movntq Mq,Pq | vmovntdq Mx,Vx (66) +e8: psubsb Pq,Qq | vpsubsb Vx,Hx,Wx (66),(v1) +e9: psubsw Pq,Qq | vpsubsw Vx,Hx,Wx (66),(v1) +ea: pminsw Pq,Qq | vpminsw Vx,Hx,Wx 
(66),(v1) +eb: por Pq,Qq | vpor Vx,Hx,Wx (66),(v1) | vpord/q Vx,Hx,Wx (66),(evo) +ec: paddsb Pq,Qq | vpaddsb Vx,Hx,Wx (66),(v1) +ed: paddsw Pq,Qq | vpaddsw Vx,Hx,Wx (66),(v1) +ee: pmaxsw Pq,Qq | vpmaxsw Vx,Hx,Wx (66),(v1) +ef: pxor Pq,Qq | vpxor Vx,Hx,Wx (66),(v1) | vpxord/q Vx,Hx,Wx (66),(evo) +# 0x0f 0xf0-0xff +f0: vlddqu Vx,Mx (F2) +f1: psllw Pq,Qq | vpsllw Vx,Hx,Wx (66),(v1) +f2: pslld Pq,Qq | vpslld Vx,Hx,Wx (66),(v1) +f3: psllq Pq,Qq | vpsllq Vx,Hx,Wx (66),(v1) +f4: pmuludq Pq,Qq | vpmuludq Vx,Hx,Wx (66),(v1) +f5: pmaddwd Pq,Qq | vpmaddwd Vx,Hx,Wx (66),(v1) +f6: psadbw Pq,Qq | vpsadbw Vx,Hx,Wx (66),(v1) +f7: maskmovq Pq,Nq | vmaskmovdqu Vx,Ux (66),(v1) +f8: psubb Pq,Qq | vpsubb Vx,Hx,Wx (66),(v1) +f9: psubw Pq,Qq | vpsubw Vx,Hx,Wx (66),(v1) +fa: psubd Pq,Qq | vpsubd Vx,Hx,Wx (66),(v1) +fb: psubq Pq,Qq | vpsubq Vx,Hx,Wx (66),(v1) +fc: paddb Pq,Qq | vpaddb Vx,Hx,Wx (66),(v1) +fd: paddw Pq,Qq | vpaddw Vx,Hx,Wx (66),(v1) +fe: paddd Pq,Qq | vpaddd Vx,Hx,Wx (66),(v1) +ff: UD0 +EndTable + +Table: 3-byte opcode 1 (0x0f 0x38) +Referrer: 3-byte escape 1 +AVXcode: 2 +# 0x0f 0x38 0x00-0x0f +00: pshufb Pq,Qq | vpshufb Vx,Hx,Wx (66),(v1) +01: phaddw Pq,Qq | vphaddw Vx,Hx,Wx (66),(v1) +02: phaddd Pq,Qq | vphaddd Vx,Hx,Wx (66),(v1) +03: phaddsw Pq,Qq | vphaddsw Vx,Hx,Wx (66),(v1) +04: pmaddubsw Pq,Qq | vpmaddubsw Vx,Hx,Wx (66),(v1) +05: phsubw Pq,Qq | vphsubw Vx,Hx,Wx (66),(v1) +06: phsubd Pq,Qq | vphsubd Vx,Hx,Wx (66),(v1) +07: phsubsw Pq,Qq | vphsubsw Vx,Hx,Wx (66),(v1) +08: psignb Pq,Qq | vpsignb Vx,Hx,Wx (66),(v1) +09: psignw Pq,Qq | vpsignw Vx,Hx,Wx (66),(v1) +0a: psignd Pq,Qq | vpsignd Vx,Hx,Wx (66),(v1) +0b: pmulhrsw Pq,Qq | vpmulhrsw Vx,Hx,Wx (66),(v1) +0c: vpermilps Vx,Hx,Wx (66),(v) +0d: vpermilpd Vx,Hx,Wx (66),(v) +0e: vtestps Vx,Wx (66),(v) +0f: vtestpd Vx,Wx (66),(v) +# 0x0f 0x38 0x10-0x1f +10: pblendvb Vdq,Wdq (66) | vpsrlvw Vx,Hx,Wx (66),(evo) | vpmovuswb Wx,Vx (F3),(ev) +11: vpmovusdb Wx,Vd (F3),(ev) | vpsravw Vx,Hx,Wx (66),(ev) +12: vpmovusqb Wx,Vq (F3),(ev) | vpsllvw Vx,Hx,Wx (66),(ev) +13: vcvtph2ps Vx,Wx (66),(v) | vpmovusdw Wx,Vd (F3),(ev) +14: blendvps Vdq,Wdq (66) | vpmovusqw Wx,Vq (F3),(ev) | vprorvd/q Vx,Hx,Wx (66),(evo) +15: blendvpd Vdq,Wdq (66) | vpmovusqd Wx,Vq (F3),(ev) | vprolvd/q Vx,Hx,Wx (66),(evo) +16: vpermps Vqq,Hqq,Wqq (66),(v) | vpermps/d Vqq,Hqq,Wqq (66),(evo) +17: vptest Vx,Wx (66) +18: vbroadcastss Vx,Wd (66),(v) +19: vbroadcastsd Vqq,Wq (66),(v) | vbroadcastf32x2 Vqq,Wq (66),(evo) +1a: vbroadcastf128 Vqq,Mdq (66),(v) | vbroadcastf32x4/64x2 Vqq,Wq (66),(evo) +1b: vbroadcastf32x8/64x4 Vqq,Mdq (66),(ev) +1c: pabsb Pq,Qq | vpabsb Vx,Wx (66),(v1) +1d: pabsw Pq,Qq | vpabsw Vx,Wx (66),(v1) +1e: pabsd Pq,Qq | vpabsd Vx,Wx (66),(v1) +1f: vpabsq Vx,Wx (66),(ev) +# 0x0f 0x38 0x20-0x2f +20: vpmovsxbw Vx,Ux/Mq (66),(v1) | vpmovswb Wx,Vx (F3),(ev) +21: vpmovsxbd Vx,Ux/Md (66),(v1) | vpmovsdb Wx,Vd (F3),(ev) +22: vpmovsxbq Vx,Ux/Mw (66),(v1) | vpmovsqb Wx,Vq (F3),(ev) +23: vpmovsxwd Vx,Ux/Mq (66),(v1) | vpmovsdw Wx,Vd (F3),(ev) +24: vpmovsxwq Vx,Ux/Md (66),(v1) | vpmovsqw Wx,Vq (F3),(ev) +25: vpmovsxdq Vx,Ux/Mq (66),(v1) | vpmovsqd Wx,Vq (F3),(ev) +26: vptestmb/w Vk,Hx,Wx (66),(ev) | vptestnmb/w Vk,Hx,Wx (F3),(ev) +27: vptestmd/q Vk,Hx,Wx (66),(ev) | vptestnmd/q Vk,Hx,Wx (F3),(ev) +28: vpmuldq Vx,Hx,Wx (66),(v1) | vpmovm2b/w Vx,Uk (F3),(ev) +29: vpcmpeqq Vx,Hx,Wx (66),(v1) | vpmovb2m/w2m Vk,Ux (F3),(ev) +2a: vmovntdqa Vx,Mx (66),(v1) | vpbroadcastmb2q Vx,Uk (F3),(ev) +2b: vpackusdw Vx,Hx,Wx (66),(v1) +2c: vmaskmovps Vx,Hx,Mx (66),(v) | vscalefps/d Vx,Hx,Wx (66),(evo) +2d: 
vmaskmovpd Vx,Hx,Mx (66),(v) | vscalefss/d Vx,Hx,Wx (66),(evo) +2e: vmaskmovps Mx,Hx,Vx (66),(v) +2f: vmaskmovpd Mx,Hx,Vx (66),(v) +# 0x0f 0x38 0x30-0x3f +30: vpmovzxbw Vx,Ux/Mq (66),(v1) | vpmovwb Wx,Vx (F3),(ev) +31: vpmovzxbd Vx,Ux/Md (66),(v1) | vpmovdb Wx,Vd (F3),(ev) +32: vpmovzxbq Vx,Ux/Mw (66),(v1) | vpmovqb Wx,Vq (F3),(ev) +33: vpmovzxwd Vx,Ux/Mq (66),(v1) | vpmovdw Wx,Vd (F3),(ev) +34: vpmovzxwq Vx,Ux/Md (66),(v1) | vpmovqw Wx,Vq (F3),(ev) +35: vpmovzxdq Vx,Ux/Mq (66),(v1) | vpmovqd Wx,Vq (F3),(ev) +36: vpermd Vqq,Hqq,Wqq (66),(v) | vpermd/q Vqq,Hqq,Wqq (66),(evo) +37: vpcmpgtq Vx,Hx,Wx (66),(v1) +38: vpminsb Vx,Hx,Wx (66),(v1) | vpmovm2d/q Vx,Uk (F3),(ev) +39: vpminsd Vx,Hx,Wx (66),(v1) | vpminsd/q Vx,Hx,Wx (66),(evo) | vpmovd2m/q2m Vk,Ux (F3),(ev) +3a: vpminuw Vx,Hx,Wx (66),(v1) | vpbroadcastmw2d Vx,Uk (F3),(ev) +3b: vpminud Vx,Hx,Wx (66),(v1) | vpminud/q Vx,Hx,Wx (66),(evo) +3c: vpmaxsb Vx,Hx,Wx (66),(v1) +3d: vpmaxsd Vx,Hx,Wx (66),(v1) | vpmaxsd/q Vx,Hx,Wx (66),(evo) +3e: vpmaxuw Vx,Hx,Wx (66),(v1) +3f: vpmaxud Vx,Hx,Wx (66),(v1) | vpmaxud/q Vx,Hx,Wx (66),(evo) +# 0x0f 0x38 0x40-0x8f +40: vpmulld Vx,Hx,Wx (66),(v1) | vpmulld/q Vx,Hx,Wx (66),(evo) +41: vphminposuw Vdq,Wdq (66),(v1) +42: vgetexpps/d Vx,Wx (66),(ev) +43: vgetexpss/d Vx,Hx,Wx (66),(ev) +44: vplzcntd/q Vx,Wx (66),(ev) +45: vpsrlvd/q Vx,Hx,Wx (66),(v) +46: vpsravd Vx,Hx,Wx (66),(v) | vpsravd/q Vx,Hx,Wx (66),(evo) +47: vpsllvd/q Vx,Hx,Wx (66),(v) +# Skip 0x48 +49: TILERELEASE (v1),(000),(11B) | LDTILECFG Mtc (v1)(000) | STTILECFG Mtc (66),(v1),(000) | TILEZERO Vt (F2),(v1),(11B) +# Skip 0x4a +4b: TILELOADD Vt,Wsm (F2),(v1) | TILELOADDT1 Vt,Wsm (66),(v1) | TILESTORED Wsm,Vt (F3),(v) +4c: vrcp14ps/d Vpd,Wpd (66),(ev) +4d: vrcp14ss/d Vsd,Hpd,Wsd (66),(ev) +4e: vrsqrt14ps/d Vpd,Wpd (66),(ev) +4f: vrsqrt14ss/d Vsd,Hsd,Wsd (66),(ev) +50: vpdpbusd Vx,Hx,Wx (66),(ev) +51: vpdpbusds Vx,Hx,Wx (66),(ev) +52: vdpbf16ps Vx,Hx,Wx (F3),(ev) | vpdpwssd Vx,Hx,Wx (66),(ev) | vp4dpwssd Vdqq,Hdqq,Wdq (F2),(ev) +53: vpdpwssds Vx,Hx,Wx (66),(ev) | vp4dpwssds Vdqq,Hdqq,Wdq (F2),(ev) +54: vpopcntb/w Vx,Wx (66),(ev) +55: vpopcntd/q Vx,Wx (66),(ev) +58: vpbroadcastd Vx,Wx (66),(v) +59: vpbroadcastq Vx,Wx (66),(v) | vbroadcasti32x2 Vx,Wx (66),(evo) +5a: vbroadcasti128 Vqq,Mdq (66),(v) | vbroadcasti32x4/64x2 Vx,Wx (66),(evo) +5b: vbroadcasti32x8/64x4 Vqq,Mdq (66),(ev) +5c: TDPBF16PS Vt,Wt,Ht (F3),(v1) +# Skip 0x5d +5e: TDPBSSD Vt,Wt,Ht (F2),(v1) | TDPBSUD Vt,Wt,Ht (F3),(v1) | TDPBUSD Vt,Wt,Ht (66),(v1) | TDPBUUD Vt,Wt,Ht (v1) +# Skip 0x5f-0x61 +62: vpexpandb/w Vx,Wx (66),(ev) +63: vpcompressb/w Wx,Vx (66),(ev) +64: vpblendmd/q Vx,Hx,Wx (66),(ev) +65: vblendmps/d Vx,Hx,Wx (66),(ev) +66: vpblendmb/w Vx,Hx,Wx (66),(ev) +68: vp2intersectd/q Kx,Hx,Wx (F2),(ev) +# Skip 0x69-0x6f +70: vpshldvw Vx,Hx,Wx (66),(ev) +71: vpshldvd/q Vx,Hx,Wx (66),(ev) +72: vcvtne2ps2bf16 Vx,Hx,Wx (F2),(ev) | vcvtneps2bf16 Vx,Wx (F3),(ev) | vpshrdvw Vx,Hx,Wx (66),(ev) +73: vpshrdvd/q Vx,Hx,Wx (66),(ev) +75: vpermi2b/w Vx,Hx,Wx (66),(ev) +76: vpermi2d/q Vx,Hx,Wx (66),(ev) +77: vpermi2ps/d Vx,Hx,Wx (66),(ev) +78: vpbroadcastb Vx,Wx (66),(v) +79: vpbroadcastw Vx,Wx (66),(v) +7a: vpbroadcastb Vx,Rv (66),(ev) +7b: vpbroadcastw Vx,Rv (66),(ev) +7c: vpbroadcastd/q Vx,Rv (66),(ev) +7d: vpermt2b/w Vx,Hx,Wx (66),(ev) +7e: vpermt2d/q Vx,Hx,Wx (66),(ev) +7f: vpermt2ps/d Vx,Hx,Wx (66),(ev) +80: INVEPT Gy,Mdq (66) +81: INVVPID Gy,Mdq (66) +82: INVPCID Gy,Mdq (66) +83: vpmultishiftqb Vx,Hx,Wx (66),(ev) +88: vexpandps/d Vpd,Wpd (66),(ev) +89: vpexpandd/q Vx,Wx (66),(ev) +8a: vcompressps/d 
Wx,Vx (66),(ev) +8b: vpcompressd/q Wx,Vx (66),(ev) +8c: vpmaskmovd/q Vx,Hx,Mx (66),(v) +8d: vpermb/w Vx,Hx,Wx (66),(ev) +8e: vpmaskmovd/q Mx,Vx,Hx (66),(v) +8f: vpshufbitqmb Kx,Hx,Wx (66),(ev) +# 0x0f 0x38 0x90-0xbf (FMA) +90: vgatherdd/q Vx,Hx,Wx (66),(v) | vpgatherdd/q Vx,Wx (66),(evo) +91: vgatherqd/q Vx,Hx,Wx (66),(v) | vpgatherqd/q Vx,Wx (66),(evo) +92: vgatherdps/d Vx,Hx,Wx (66),(v) +93: vgatherqps/d Vx,Hx,Wx (66),(v) +94: +95: +96: vfmaddsub132ps/d Vx,Hx,Wx (66),(v) +97: vfmsubadd132ps/d Vx,Hx,Wx (66),(v) +98: vfmadd132ps/d Vx,Hx,Wx (66),(v) +99: vfmadd132ss/d Vx,Hx,Wx (66),(v),(v1) +9a: vfmsub132ps/d Vx,Hx,Wx (66),(v) | v4fmaddps Vdqq,Hdqq,Wdq (F2),(ev) +9b: vfmsub132ss/d Vx,Hx,Wx (66),(v),(v1) | v4fmaddss Vdq,Hdq,Wdq (F2),(ev) +9c: vfnmadd132ps/d Vx,Hx,Wx (66),(v) +9d: vfnmadd132ss/d Vx,Hx,Wx (66),(v),(v1) +9e: vfnmsub132ps/d Vx,Hx,Wx (66),(v) +9f: vfnmsub132ss/d Vx,Hx,Wx (66),(v),(v1) +a0: vpscatterdd/q Wx,Vx (66),(ev) +a1: vpscatterqd/q Wx,Vx (66),(ev) +a2: vscatterdps/d Wx,Vx (66),(ev) +a3: vscatterqps/d Wx,Vx (66),(ev) +a6: vfmaddsub213ps/d Vx,Hx,Wx (66),(v) +a7: vfmsubadd213ps/d Vx,Hx,Wx (66),(v) +a8: vfmadd213ps/d Vx,Hx,Wx (66),(v) +a9: vfmadd213ss/d Vx,Hx,Wx (66),(v),(v1) +aa: vfmsub213ps/d Vx,Hx,Wx (66),(v) | v4fnmaddps Vdqq,Hdqq,Wdq (F2),(ev) +ab: vfmsub213ss/d Vx,Hx,Wx (66),(v),(v1) | v4fnmaddss Vdq,Hdq,Wdq (F2),(ev) +ac: vfnmadd213ps/d Vx,Hx,Wx (66),(v) +ad: vfnmadd213ss/d Vx,Hx,Wx (66),(v),(v1) +ae: vfnmsub213ps/d Vx,Hx,Wx (66),(v) +af: vfnmsub213ss/d Vx,Hx,Wx (66),(v),(v1) +b4: vpmadd52luq Vx,Hx,Wx (66),(ev) +b5: vpmadd52huq Vx,Hx,Wx (66),(ev) +b6: vfmaddsub231ps/d Vx,Hx,Wx (66),(v) +b7: vfmsubadd231ps/d Vx,Hx,Wx (66),(v) +b8: vfmadd231ps/d Vx,Hx,Wx (66),(v) +b9: vfmadd231ss/d Vx,Hx,Wx (66),(v),(v1) +ba: vfmsub231ps/d Vx,Hx,Wx (66),(v) +bb: vfmsub231ss/d Vx,Hx,Wx (66),(v),(v1) +bc: vfnmadd231ps/d Vx,Hx,Wx (66),(v) +bd: vfnmadd231ss/d Vx,Hx,Wx (66),(v),(v1) +be: vfnmsub231ps/d Vx,Hx,Wx (66),(v) +bf: vfnmsub231ss/d Vx,Hx,Wx (66),(v),(v1) +# 0x0f 0x38 0xc0-0xff +c4: vpconflictd/q Vx,Wx (66),(ev) +c6: Grp18 (1A) +c7: Grp19 (1A) +c8: sha1nexte Vdq,Wdq | vexp2ps/d Vx,Wx (66),(ev) +c9: sha1msg1 Vdq,Wdq +ca: sha1msg2 Vdq,Wdq | vrcp28ps/d Vx,Wx (66),(ev) +cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev) +cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev) +cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev) +cf: vgf2p8mulb Vx,Wx (66) +db: VAESIMC Vdq,Wdq (66),(v1) +dc: vaesenc Vx,Hx,Wx (66) +dd: vaesenclast Vx,Hx,Wx (66) +de: vaesdec Vx,Hx,Wx (66) +df: vaesdeclast Vx,Hx,Wx (66) +f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2) +f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2) +f2: ANDN Gy,By,Ey (v) +f3: Grp17 (1A) +f5: BZHI Gy,Ey,By (v) | PEXT Gy,By,Ey (F3),(v) | PDEP Gy,By,Ey (F2),(v) | WRUSSD/Q My,Gy (66) +f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My,Gy +f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v) +f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3) +f9: MOVDIRI My,Gy +EndTable + +Table: 3-byte opcode 2 (0x0f 0x3a) +Referrer: 3-byte escape 2 +AVXcode: 3 +# 0x0f 0x3a 0x00-0xff +00: vpermq Vqq,Wqq,Ib (66),(v) +01: vpermpd Vqq,Wqq,Ib (66),(v) +02: vpblendd Vx,Hx,Wx,Ib (66),(v) +03: valignd/q Vx,Hx,Wx,Ib (66),(ev) +04: vpermilps Vx,Wx,Ib (66),(v) +05: vpermilpd Vx,Wx,Ib (66),(v) +06: vperm2f128 Vqq,Hqq,Wqq,Ib (66),(v) +07: +08: vroundps Vx,Wx,Ib (66) | vrndscaleps Vx,Wx,Ib (66),(evo) | vrndscaleph 
Vx,Wx,Ib (evo) +09: vroundpd Vx,Wx,Ib (66) | vrndscalepd Vx,Wx,Ib (66),(evo) +0a: vroundss Vss,Wss,Ib (66),(v1) | vrndscaless Vx,Hx,Wx,Ib (66),(evo) | vrndscalesh Vx,Hx,Wx,Ib (evo) +0b: vroundsd Vsd,Wsd,Ib (66),(v1) | vrndscalesd Vx,Hx,Wx,Ib (66),(evo) +0c: vblendps Vx,Hx,Wx,Ib (66) +0d: vblendpd Vx,Hx,Wx,Ib (66) +0e: vpblendw Vx,Hx,Wx,Ib (66),(v1) +0f: palignr Pq,Qq,Ib | vpalignr Vx,Hx,Wx,Ib (66),(v1) +14: vpextrb Rd/Mb,Vdq,Ib (66),(v1) +15: vpextrw Rd/Mw,Vdq,Ib (66),(v1) +16: vpextrd/q Ey,Vdq,Ib (66),(v1) +17: vextractps Ed,Vdq,Ib (66),(v1) +18: vinsertf128 Vqq,Hqq,Wqq,Ib (66),(v) | vinsertf32x4/64x2 Vqq,Hqq,Wqq,Ib (66),(evo) +19: vextractf128 Wdq,Vqq,Ib (66),(v) | vextractf32x4/64x2 Wdq,Vqq,Ib (66),(evo) +1a: vinsertf32x8/64x4 Vqq,Hqq,Wqq,Ib (66),(ev) +1b: vextractf32x8/64x4 Wdq,Vqq,Ib (66),(ev) +1d: vcvtps2ph Wx,Vx,Ib (66),(v) +1e: vpcmpud/q Vk,Hd,Wd,Ib (66),(ev) +1f: vpcmpd/q Vk,Hd,Wd,Ib (66),(ev) +20: vpinsrb Vdq,Hdq,Ry/Mb,Ib (66),(v1) +21: vinsertps Vdq,Hdq,Udq/Md,Ib (66),(v1) +22: vpinsrd/q Vdq,Hdq,Ey,Ib (66),(v1) +23: vshuff32x4/64x2 Vx,Hx,Wx,Ib (66),(ev) +25: vpternlogd/q Vx,Hx,Wx,Ib (66),(ev) +26: vgetmantps/d Vx,Wx,Ib (66),(ev) | vgetmantph Vx,Wx,Ib (ev) +27: vgetmantss/d Vx,Hx,Wx,Ib (66),(ev) | vgetmantsh Vx,Hx,Wx,Ib (ev) +30: kshiftrb/w Vk,Uk,Ib (66),(v) +31: kshiftrd/q Vk,Uk,Ib (66),(v) +32: kshiftlb/w Vk,Uk,Ib (66),(v) +33: kshiftld/q Vk,Uk,Ib (66),(v) +38: vinserti128 Vqq,Hqq,Wqq,Ib (66),(v) | vinserti32x4/64x2 Vqq,Hqq,Wqq,Ib (66),(evo) +39: vextracti128 Wdq,Vqq,Ib (66),(v) | vextracti32x4/64x2 Wdq,Vqq,Ib (66),(evo) +3a: vinserti32x8/64x4 Vqq,Hqq,Wqq,Ib (66),(ev) +3b: vextracti32x8/64x4 Wdq,Vqq,Ib (66),(ev) +3e: vpcmpub/w Vk,Hk,Wx,Ib (66),(ev) +3f: vpcmpb/w Vk,Hk,Wx,Ib (66),(ev) +40: vdpps Vx,Hx,Wx,Ib (66) +41: vdppd Vdq,Hdq,Wdq,Ib (66),(v1) +42: vmpsadbw Vx,Hx,Wx,Ib (66),(v1) | vdbpsadbw Vx,Hx,Wx,Ib (66),(evo) +43: vshufi32x4/64x2 Vx,Hx,Wx,Ib (66),(ev) +44: vpclmulqdq Vx,Hx,Wx,Ib (66) +46: vperm2i128 Vqq,Hqq,Wqq,Ib (66),(v) +4a: vblendvps Vx,Hx,Wx,Lx (66),(v) +4b: vblendvpd Vx,Hx,Wx,Lx (66),(v) +4c: vpblendvb Vx,Hx,Wx,Lx (66),(v1) +50: vrangeps/d Vx,Hx,Wx,Ib (66),(ev) +51: vrangess/d Vx,Hx,Wx,Ib (66),(ev) +54: vfixupimmps/d Vx,Hx,Wx,Ib (66),(ev) +55: vfixupimmss/d Vx,Hx,Wx,Ib (66),(ev) +56: vreduceps/d Vx,Wx,Ib (66),(ev) | vreduceph Vx,Wx,Ib (ev) +57: vreducess/d Vx,Hx,Wx,Ib (66),(ev) | vreducesh Vx,Hx,Wx,Ib (ev) +60: vpcmpestrm Vdq,Wdq,Ib (66),(v1) +61: vpcmpestri Vdq,Wdq,Ib (66),(v1) +62: vpcmpistrm Vdq,Wdq,Ib (66),(v1) +63: vpcmpistri Vdq,Wdq,Ib (66),(v1) +66: vfpclassps/d Vk,Wx,Ib (66),(ev) | vfpclassph Vx,Wx,Ib (ev) +67: vfpclassss/d Vk,Wx,Ib (66),(ev) | vfpclasssh Vx,Wx,Ib (ev) +70: vpshldw Vx,Hx,Wx,Ib (66),(ev) +71: vpshldd/q Vx,Hx,Wx,Ib (66),(ev) +72: vpshrdw Vx,Hx,Wx,Ib (66),(ev) +73: vpshrdd/q Vx,Hx,Wx,Ib (66),(ev) +c2: vcmpph Vx,Hx,Wx,Ib (ev) | vcmpsh Vx,Hx,Wx,Ib (F3),(ev) +cc: sha1rnds4 Vdq,Wdq,Ib +ce: vgf2p8affineqb Vx,Wx,Ib (66) +cf: vgf2p8affineinvqb Vx,Wx,Ib (66) +df: VAESKEYGEN Vdq,Wdq,Ib (66),(v1) +f0: RORX Gy,Ey,Ib (F2),(v) | HRESET Gv,Ib (F3),(000),(11B) +EndTable + +Table: EVEX map 5 +Referrer: +AVXcode: 5 +10: vmovsh Vx,Hx,Wx (F3),(ev) | vmovsh Vx,Wx (F3),(ev) +11: vmovsh Wx,Hx,Vx (F3),(ev) | vmovsh Wx,Vx (F3),(ev) +1d: vcvtps2phx Vx,Wx (66),(ev) | vcvtss2sh Vx,Hx,Wx (ev) +2a: vcvtsi2sh Vx,Hx,Wx (F3),(ev) +2c: vcvttsh2si Vx,Wx (F3),(ev) +2d: vcvtsh2si Vx,Wx (F3),(ev) +2e: vucomish Vx,Wx (ev) +2f: vcomish Vx,Wx (ev) +51: vsqrtph Vx,Wx (ev) | vsqrtsh Vx,Hx,Wx (F3),(ev) +58: vaddph Vx,Hx,Wx (ev) | vaddsh Vx,Hx,Wx (F3),(ev) +59: vmulph Vx,Hx,Wx (ev) | 
vmulsh Vx,Hx,Wx (F3),(ev) +5a: vcvtpd2ph Vx,Wx (66),(ev) | vcvtph2pd Vx,Wx (ev) | vcvtsd2sh Vx,Hx,Wx (F2),(ev) | vcvtsh2sd Vx,Hx,Wx (F3),(ev) +5b: vcvtdq2ph Vx,Wx (ev) | vcvtph2dq Vx,Wx (66),(ev) | vcvtqq2ph Vx,Wx (ev) | vcvttph2dq Vx,Wx (F3),(ev) +5c: vsubph Vx,Hx,Wx (ev) | vsubsh Vx,Hx,Wx (F3),(ev) +5d: vminph Vx,Hx,Wx (ev) | vminsh Vx,Hx,Wx (F3),(ev) +5e: vdivph Vx,Hx,Wx (ev) | vdivsh Vx,Hx,Wx (F3),(ev) +5f: vmaxph Vx,Hx,Wx (ev) | vmaxsh Vx,Hx,Wx (F3),(ev) +6e: vmovw Vx,Wx (66),(ev) +78: vcvttph2udq Vx,Wx (ev) | vcvttph2uqq Vx,Wx (66),(ev) | vcvttsh2usi Vx,Wx (F3),(ev) +79: vcvtph2udq Vx,Wx (ev) | vcvtph2uqq Vx,Wx (66),(ev) | vcvtsh2usi Vx,Wx (F3),(ev) +7a: vcvttph2qq Vx,Wx (66),(ev) | vcvtudq2ph Vx,Wx (F2),(ev) | vcvtuqq2ph Vx,Wx (F2),(ev) +7b: vcvtph2qq Vx,Wx (66),(ev) | vcvtusi2sh Vx,Hx,Wx (F3),(ev) +7c: vcvttph2uw Vx,Wx (ev) | vcvttph2w Vx,Wx (66),(ev) +7d: vcvtph2uw Vx,Wx (ev) | vcvtph2w Vx,Wx (66),(ev) | vcvtuw2ph Vx,Wx (F2),(ev) | vcvtw2ph Vx,Wx (F3),(ev) +7e: vmovw Wx,Vx (66),(ev) +EndTable + +Table: EVEX map 6 +Referrer: +AVXcode: 6 +13: vcvtph2psx Vx,Wx (66),(ev) | vcvtsh2ss Vx,Hx,Wx (ev) +2c: vscalefph Vx,Hx,Wx (66),(ev) +2d: vscalefsh Vx,Hx,Wx (66),(ev) +42: vgetexpph Vx,Wx (66),(ev) +43: vgetexpsh Vx,Hx,Wx (66),(ev) +4c: vrcpph Vx,Wx (66),(ev) +4d: vrcpsh Vx,Hx,Wx (66),(ev) +4e: vrsqrtph Vx,Wx (66),(ev) +4f: vrsqrtsh Vx,Hx,Wx (66),(ev) +56: vfcmaddcph Vx,Hx,Wx (F2),(ev) | vfmaddcph Vx,Hx,Wx (F3),(ev) +57: vfcmaddcsh Vx,Hx,Wx (F2),(ev) | vfmaddcsh Vx,Hx,Wx (F3),(ev) +96: vfmaddsub132ph Vx,Hx,Wx (66),(ev) +97: vfmsubadd132ph Vx,Hx,Wx (66),(ev) +98: vfmadd132ph Vx,Hx,Wx (66),(ev) +99: vfmadd132sh Vx,Hx,Wx (66),(ev) +9a: vfmsub132ph Vx,Hx,Wx (66),(ev) +9b: vfmsub132sh Vx,Hx,Wx (66),(ev) +9c: vfnmadd132ph Vx,Hx,Wx (66),(ev) +9d: vfnmadd132sh Vx,Hx,Wx (66),(ev) +9e: vfnmsub132ph Vx,Hx,Wx (66),(ev) +9f: vfnmsub132sh Vx,Hx,Wx (66),(ev) +a6: vfmaddsub213ph Vx,Hx,Wx (66),(ev) +a7: vfmsubadd213ph Vx,Hx,Wx (66),(ev) +a8: vfmadd213ph Vx,Hx,Wx (66),(ev) +a9: vfmadd213sh Vx,Hx,Wx (66),(ev) +aa: vfmsub213ph Vx,Hx,Wx (66),(ev) +ab: vfmsub213sh Vx,Hx,Wx (66),(ev) +ac: vfnmadd213ph Vx,Hx,Wx (66),(ev) +ad: vfnmadd213sh Vx,Hx,Wx (66),(ev) +ae: vfnmsub213ph Vx,Hx,Wx (66),(ev) +af: vfnmsub213sh Vx,Hx,Wx (66),(ev) +b6: vfmaddsub231ph Vx,Hx,Wx (66),(ev) +b7: vfmsubadd231ph Vx,Hx,Wx (66),(ev) +b8: vfmadd231ph Vx,Hx,Wx (66),(ev) +b9: vfmadd231sh Vx,Hx,Wx (66),(ev) +ba: vfmsub231ph Vx,Hx,Wx (66),(ev) +bb: vfmsub231sh Vx,Hx,Wx (66),(ev) +bc: vfnmadd231ph Vx,Hx,Wx (66),(ev) +bd: vfnmadd231sh Vx,Hx,Wx (66),(ev) +be: vfnmsub231ph Vx,Hx,Wx (66),(ev) +bf: vfnmsub231sh Vx,Hx,Wx (66),(ev) +d6: vfcmulcph Vx,Hx,Wx (F2),(ev) | vfmulcph Vx,Hx,Wx (F3),(ev) +d7: vfcmulcsh Vx,Hx,Wx (F2),(ev) | vfmulcsh Vx,Hx,Wx (F3),(ev) +EndTable + +GrpTable: Grp1 +0: ADD +1: OR +2: ADC +3: SBB +4: AND +5: SUB +6: XOR +7: CMP +EndTable + +GrpTable: Grp1A +0: POP +EndTable + +GrpTable: Grp2 +0: ROL +1: ROR +2: RCL +3: RCR +4: SHL/SAL +5: SHR +6: +7: SAR +EndTable + +GrpTable: Grp3_1 +0: TEST Eb,Ib +1: TEST Eb,Ib +2: NOT Eb +3: NEG Eb +4: MUL AL,Eb +5: IMUL AL,Eb +6: DIV AL,Eb +7: IDIV AL,Eb +EndTable + +GrpTable: Grp3_2 +0: TEST Ev,Iz +1: TEST Ev,Iz +2: NOT Ev +3: NEG Ev +4: MUL rAX,Ev +5: IMUL rAX,Ev +6: DIV rAX,Ev +7: IDIV rAX,Ev +EndTable + +GrpTable: Grp4 +0: INC Eb +1: DEC Eb +EndTable + +GrpTable: Grp5 +0: INC Ev +1: DEC Ev +# Note: "forced64" is Intel CPU behavior (see comment about CALL insn). 
+2: CALLN Ev (f64) +3: CALLF Ep +4: JMPN Ev (f64) +5: JMPF Mp +6: PUSH Ev (d64) +7: +EndTable + +GrpTable: Grp6 +0: SLDT Rv/Mw +1: STR Rv/Mw +2: LLDT Ew +3: LTR Ew +4: VERR Ew +5: VERW Ew +6: LKGS Ew (F2) +EndTable + +GrpTable: Grp7 +0: SGDT Ms | VMCALL (001),(11B) | VMLAUNCH (010),(11B) | VMRESUME (011),(11B) | VMXOFF (100),(11B) | PCONFIG (101),(11B) | ENCLV (000),(11B) | WRMSRNS (110),(11B) +1: SIDT Ms | MONITOR (000),(11B) | MWAIT (001),(11B) | CLAC (010),(11B) | STAC (011),(11B) | ENCLS (111),(11B) | ERETU (F3),(010),(11B) | ERETS (F2),(010),(11B) +2: LGDT Ms | XGETBV (000),(11B) | XSETBV (001),(11B) | VMFUNC (100),(11B) | XEND (101)(11B) | XTEST (110)(11B) | ENCLU (111),(11B) +3: LIDT Ms +4: SMSW Mw/Rv +5: rdpkru (110),(11B) | wrpkru (111),(11B) | SAVEPREVSSP (F3),(010),(11B) | RSTORSSP Mq (F3) | SETSSBSY (F3),(000),(11B) | CLUI (F3),(110),(11B) | SERIALIZE (000),(11B) | STUI (F3),(111),(11B) | TESTUI (F3)(101)(11B) | UIRET (F3),(100),(11B) | XRESLDTRK (F2),(000),(11B) | XSUSLDTRK (F2),(001),(11B) +6: LMSW Ew +7: INVLPG Mb | SWAPGS (o64),(000),(11B) | RDTSCP (001),(11B) +EndTable + +GrpTable: Grp8 +4: BT +5: BTS +6: BTR +7: BTC +EndTable + +GrpTable: Grp9 +1: CMPXCHG8B/16B Mq/Mdq +3: xrstors +4: xsavec +5: xsaves +6: VMPTRLD Mq | VMCLEAR Mq (66) | VMXON Mq (F3) | RDRAND Rv (11B) | SENDUIPI Gq (F3) +7: VMPTRST Mq | VMPTRST Mq (F3) | RDSEED Rv (11B) +EndTable + +GrpTable: Grp10 +# all are UD1 +0: UD1 +1: UD1 +2: UD1 +3: UD1 +4: UD1 +5: UD1 +6: UD1 +7: UD1 +EndTable + +# Grp11A and Grp11B are expressed as Grp11 in Intel SDM +GrpTable: Grp11A +0: MOV Eb,Ib +7: XABORT Ib (000),(11B) +EndTable + +GrpTable: Grp11B +0: MOV Eb,Iz +7: XBEGIN Jz (000),(11B) +EndTable + +GrpTable: Grp12 +2: psrlw Nq,Ib (11B) | vpsrlw Hx,Ux,Ib (66),(11B),(v1) +4: psraw Nq,Ib (11B) | vpsraw Hx,Ux,Ib (66),(11B),(v1) +6: psllw Nq,Ib (11B) | vpsllw Hx,Ux,Ib (66),(11B),(v1) +EndTable + +GrpTable: Grp13 +0: vprord/q Hx,Wx,Ib (66),(ev) +1: vprold/q Hx,Wx,Ib (66),(ev) +2: psrld Nq,Ib (11B) | vpsrld Hx,Ux,Ib (66),(11B),(v1) +4: psrad Nq,Ib (11B) | vpsrad Hx,Ux,Ib (66),(11B),(v1) | vpsrad/q Hx,Ux,Ib (66),(evo) +6: pslld Nq,Ib (11B) | vpslld Hx,Ux,Ib (66),(11B),(v1) +EndTable + +GrpTable: Grp14 +2: psrlq Nq,Ib (11B) | vpsrlq Hx,Ux,Ib (66),(11B),(v1) +3: vpsrldq Hx,Ux,Ib (66),(11B),(v1) +6: psllq Nq,Ib (11B) | vpsllq Hx,Ux,Ib (66),(11B),(v1) +7: vpslldq Hx,Ux,Ib (66),(11B),(v1) +EndTable + +GrpTable: Grp15 +0: fxsave | RDFSBASE Ry (F3),(11B) +1: fxstor | RDGSBASE Ry (F3),(11B) +2: vldmxcsr Md (v1) | WRFSBASE Ry (F3),(11B) +3: vstmxcsr Md (v1) | WRGSBASE Ry (F3),(11B) +4: XSAVE | ptwrite Ey (F3),(11B) +5: XRSTOR | lfence (11B) | INCSSPD/Q Ry (F3),(11B) +6: XSAVEOPT | clwb (66) | mfence (11B) | TPAUSE Rd (66),(11B) | UMONITOR Rv (F3),(11B) | UMWAIT Rd (F2),(11B) | CLRSSBSY Mq (F3) +7: clflush | clflushopt (66) | sfence (11B) +EndTable + +GrpTable: Grp16 +0: prefetch NTA +1: prefetch T0 +2: prefetch T1 +3: prefetch T2 +EndTable + +GrpTable: Grp17 +1: BLSR By,Ey (v) +2: BLSMSK By,Ey (v) +3: BLSI By,Ey (v) +EndTable + +GrpTable: Grp18 +1: vgatherpf0dps/d Wx (66),(ev) +2: vgatherpf1dps/d Wx (66),(ev) +5: vscatterpf0dps/d Wx (66),(ev) +6: vscatterpf1dps/d Wx (66),(ev) +EndTable + +GrpTable: Grp19 +1: vgatherpf0qps/d Wx (66),(ev) +2: vgatherpf1qps/d Wx (66),(ev) +5: vscatterpf0qps/d Wx (66),(ev) +6: vscatterpf1qps/d Wx (66),(ev) +EndTable + +GrpTable: Grp20 +0: cldemote Mb +EndTable + +GrpTable: Grp21 +1: RDSSPD/Q Ry (F3),(11B) +7: ENDBR64 (F3),(010),(11B) | ENDBR32 (F3),(011),(11B) +EndTable + +# AMD's Prefetch Group +GrpTable: 
GrpP +0: PREFETCH +1: PREFETCHW +EndTable + +GrpTable: GrpPDLK +0: MONTMUL +1: XSHA1 +2: XSHA2 +EndTable + +GrpTable: GrpRNG +0: xstore-rng +1: xcrypt-ecb +2: xcrypt-cbc +4: xcrypt-cfb +5: xcrypt-ofb +EndTable diff --git a/x86/Makefile.common b/x86/Makefile.common index bc0ed2c8..25ae6f78 100644 --- a/x86/Makefile.common +++ b/x86/Makefile.common @@ -26,6 +26,8 @@ cflatobjs += lib/x86/pmu.o ifeq ($(CONFIG_EFI),y) cflatobjs += lib/x86/amd_sev.o cflatobjs += lib/x86/amd_sev_vc.o +cflatobjs += lib/x86/insn/insn.o +cflatobjs += lib/x86/insn/inat.o cflatobjs += lib/efi.o cflatobjs += x86/efi/reloc_x86_64.o endif @@ -49,7 +51,14 @@ FLATLIBS = lib/libcflat.a ifeq ($(CONFIG_EFI),y) .PRECIOUS: %.efi %.so -%.so: %.o $(FLATLIBS) $(SRCDIR)/x86/efi/elf_x86_64_efi.lds $(cstart.o) +inat_tables_script = $(SRCDIR)/lib/x86/insn/gen-insn-attr-x86.awk +inat_tables_maps = $(SRCDIR)/lib/x86/insn/x86-opcode-map.txt +inat_tables_c = $(SRCDIR)/lib/x86/insn/inat-tables.c + +$(inat_tables_c): $(inat_tables_script) $(inat_tables_maps) + awk -f $(inat_tables_script) $(inat_tables_maps) > $@ + +%.so: $(inat_tables_c) %.o $(FLATLIBS) $(SRCDIR)/x86/efi/elf_x86_64_efi.lds $(cstart.o) $(LD) -T $(SRCDIR)/x86/efi/elf_x86_64_efi.lds $(EFI_LDFLAGS) -o $@ \ $(filter %.o, $^) $(FLATLIBS) @chmod a-x $@ @@ -128,4 +137,5 @@ arch_clean: $(RM) $(TEST_DIR)/*.o $(TEST_DIR)/*.flat $(TEST_DIR)/*.elf \ $(TEST_DIR)/.*.d lib/x86/.*.d \ $(TEST_DIR)/efi/*.o $(TEST_DIR)/efi/.*.d \ - $(TEST_DIR)/*.so $(TEST_DIR)/*.efi $(TEST_DIR)/*.debug + $(TEST_DIR)/*.so $(TEST_DIR)/*.efi $(TEST_DIR)/*.debug \ + $(inat_tables_c) lib/x86/insn/.*.d From patchwork Wed Jun 12 14:45:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695143 Received: from mail-lj1-f171.google.com (mail-lj1-f171.google.com [209.85.208.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE31E17F4ED for ; Wed, 12 Jun 2024 14:45:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203556; cv=none; b=Ij26LxwVPstshEtS43XC2GsHIcAjBlXLMdxz0AgknRK3rWF8+f1B1f1JjtM7cInh9iTek6fkfxbkCpOKR/z7+wSJ94hPJhaYqkfyEjCt6mA8etovfF52JzIv+bRDl4o/GGdtByobPnj5DkvKoADDFpeH+ONWD7C4aso+/Xd0bjw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203556; c=relaxed/simple; bh=n+HiePUSK4nbjnB37DbjThD7qF0Lvxx8ojbyqjSwJrU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=u00ByoxIsSbHHdJVBwCoPJIgLt+LBUfho/7GKO275q2DNngAv7OkbWlfbIOZPu/wxNMMqeEP/zksJmAXPXZl1+igTbLvCjMpc5SX0dU2Y8k9tblq0LrpLHvUuuUDB+MRQjVIN0eUqldqwaZSSZoHW4EQoxYEyY6gv6yAJya3eQM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=JwQw7o+R; arc=none smtp.client-ip=209.85.208.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="JwQw7o+R" Received: by mail-lj1-f171.google.com with SMTP id 38308e7fff4ca-2eabd22d441so110865431fa.2 for ; Wed, 
12 Jun 2024 07:45:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203552; x=1718808352; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=IB3CcC7U5Vnz5NTmOXn/40tXEY5EWl8d3bB7zSXBveU=; b=JwQw7o+RK27Y4IVw7WXVyhnGgkJf3X68jT3Jv/fZKbu+BVPNLZiarqeUp+IQsnRdAI /BGlM2xWVgntz+feq0Z7SKlbRuI3WXuW7dHRQt5IUIdZpVeNAF1KP+uu7WZ7Az/asSVc 1+1dsgFxE4jmTJWGZdY8vSik02BxPdJFjkOh6+26YuQjPVfY3cfV3HFAEhFajutWj6wj sqq7DlqxQRQ9LaBDWuqanExcwDYIYkhs6ec9/SLJozTsQfCU3QVwxvMhhXrVTzEaiPmT cCaCalH2nmporVZQmySVET36mfmePoxSrE5XHGibcWb+EnrFSKNyUiBEvOTfF0GKxlYR eRTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203552; x=1718808352; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=IB3CcC7U5Vnz5NTmOXn/40tXEY5EWl8d3bB7zSXBveU=; b=Njw0Edh+NorybLcCeom2ebdJb6t8bshMfgCkem0/QUDyvOXB35Cki6sHgt3qDSqWFK 9ZQvSx16GCEXEHX81aN8ty6ZlzNOc5GXvObfQekY7w4d8kSk/he2DpFemt2cioYY1KAT 2nrAx2g9PfoljKM46N5YEFBPsrqYEamASeRhQmXvyx+yhFcJrAuSM7/ZHm9ndRpDmTsO g78nDklcjx/bkOewzMS7jCW+XWdsV3skf7PyBlef/cCQtISXudIj3mbQ1UXzyJdruh7H rBU93TxaX2zGjweX9D4HvwODy6UbYQ8D/ySbPShWqP5RI+H2HPMe4UccPNX9o0OI7xvf OwOA== X-Gm-Message-State: AOJu0YyZzo1L3KD/KDEP29hBTtZN1aCU7aCePdAKKrJVgOsA1ACkbO/k y60PP7DIuSEJd0asge/Vt/hn0cCw+srxOBZ/hB8Wi+a/upY+FKwQksCnFyaF X-Google-Smtp-Source: AGHT+IEve3akEIV3pxXjCzqnpeHgDsnSh6d3B9a8MIXELrPvrynzl2KuzyWBAHCqbbTvwkXy9j1vMg== X-Received: by 2002:a2e:9d0e:0:b0:2ea:edac:4886 with SMTP id 38308e7fff4ca-2ebfca475dbmr14203981fa.45.1718203552258; Wed, 12 Jun 2024 07:45:52 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:51 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 05/12] x86: AMD SEV-ES: Pull related GHCB definitions and helpers from Linux Date: Wed, 12 Jun 2024 16:45:32 +0200 Message-Id: <20240612144539.16147-6-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant Karasulli Origin: Linux e8c39d0f57f358950356a8e44ee5159f57f86ec5 Suppress -Waddress-of-packed-member to allow taking addresses on struct ghcb / struct vmcb_save_area fields. 
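For reference, this is what DEFINE_GHCB_ACCESSORS(rax) introduced below expands to. This is a hand-expanded illustration only, not part of the diff, and assumes the struct ghcb layout added in this patch:

static __always_inline bool ghcb_rax_is_valid(const struct ghcb *ghcb)
{
	/* Each u64 slot in ghcb_save_area owns one bit in valid_bitmap */
	return test_bit(GHCB_BITMAP_IDX(rax),
			(unsigned long *)&ghcb->save.valid_bitmap);
}

static __always_inline void ghcb_set_rax(struct ghcb *ghcb, u64 value)
{
	/* Mark the field as valid for the hypervisor, then store the value */
	set_bit(GHCB_BITMAP_IDX(rax), (u8 *)&ghcb->save.valid_bitmap);
	ghcb->save.rax = value;
}

The set/is_valid pairing is what tells the hypervisor which save-area fields the guest actually filled in for a given VMGEXIT.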
Signed-off-by: Vasant Karasulli --- lib/x86/amd_sev.h | 136 ++++++++++++++++++++++++++++++++++++++++++++ lib/x86/msr.h | 1 + lib/x86/processor.h | 8 +++ lib/x86/svm.h | 19 +++++-- x86/Makefile.x86_64 | 1 + 5 files changed, 159 insertions(+), 6 deletions(-) -- 2.34.1 diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h index 713019c2..b6b7a13f 100644 --- a/lib/x86/amd_sev.h +++ b/lib/x86/amd_sev.h @@ -18,6 +18,90 @@ #include "desc.h" #include "asm/page.h" #include "efi.h" +#include "processor.h" +#include "insn/insn.h" +#include "svm.h" + +#define GHCB_SHARED_BUF_SIZE 2032 + +struct ghcb_save_area { + u8 reserved_0x0[203]; + u8 cpl; + u8 reserved_0xcc[116]; + u64 xss; + u8 reserved_0x148[24]; + u64 dr7; + u8 reserved_0x168[16]; + u64 rip; + u8 reserved_0x180[88]; + u64 rsp; + u8 reserved_0x1e0[24]; + u64 rax; + u8 reserved_0x200[264]; + u64 rcx; + u64 rdx; + u64 rbx; + u8 reserved_0x320[8]; + u64 rbp; + u64 rsi; + u64 rdi; + u64 r8; + u64 r9; + u64 r10; + u64 r11; + u64 r12; + u64 r13; + u64 r14; + u64 r15; + u8 reserved_0x380[16]; + u64 sw_exit_code; + u64 sw_exit_info_1; + u64 sw_exit_info_2; + u64 sw_scratch; + u8 reserved_0x3b0[56]; + u64 xcr0; + u8 valid_bitmap[16]; + u64 x87_state_gpa; +} __packed; + +struct ghcb { + struct ghcb_save_area save; + u8 reserved_save[2048 - sizeof(struct ghcb_save_area)]; + + u8 shared_buffer[GHCB_SHARED_BUF_SIZE]; + + u8 reserved_0xff0[10]; + u16 protocol_version; /* negotiated SEV-ES/GHCB protocol version */ + u32 ghcb_usage; +} __packed; + +#define GHCB_PROTO_OUR 0x0001UL +#define GHCB_PROTOCOL_MAX 1ULL +#define GHCB_DEFAULT_USAGE 0ULL + +#define VMGEXIT() { asm volatile("rep; vmmcall\n\r"); } + +enum es_result { + ES_OK, /* All good */ + ES_UNSUPPORTED, /* Requested operation not supported */ + ES_VMM_ERROR, /* Unexpected state from the VMM */ + ES_DECODE_FAILED, /* Instruction decoding failed */ + ES_EXCEPTION, /* Instruction caused exception */ + ES_RETRY, /* Retry instruction emulation */ +}; + +struct es_fault_info { + unsigned long vector; + unsigned long error_code; + unsigned long cr2; +}; + +/* ES instruction emulation context */ +struct es_em_ctxt { + struct ex_regs *regs; + struct insn insn; + struct es_fault_info fi; +}; /* * AMD Programmer's Manual Volume 3 @@ -59,6 +143,58 @@ void handle_sev_es_vc(struct ex_regs *regs); unsigned long long get_amd_sev_c_bit_mask(void); unsigned long long get_amd_sev_addr_upperbound(void); +/* GHCB Accessor functions from Linux's include/asm/svm.h */ +#define GHCB_BITMAP_IDX(field) \ + (offsetof(struct ghcb_save_area, field) / sizeof(u64)) + +#define DEFINE_GHCB_ACCESSORS(field) \ + static __always_inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb) \ + { \ + return test_bit(GHCB_BITMAP_IDX(field), \ + (unsigned long *)&ghcb->save.valid_bitmap); \ + } \ + \ + static __always_inline u64 ghcb_get_##field(struct ghcb *ghcb) \ + { \ + return ghcb->save.field; \ + } \ + \ + static __always_inline u64 ghcb_get_##field##_if_valid(struct ghcb *ghcb) \ + { \ + return ghcb_##field##_is_valid(ghcb) ? 
ghcb->save.field : 0; \ + } \ + \ + static __always_inline void ghcb_set_##field(struct ghcb *ghcb, u64 value) \ + { \ + set_bit(GHCB_BITMAP_IDX(field), \ + (u8 *)&ghcb->save.valid_bitmap); \ + ghcb->save.field = value; \ + } + +DEFINE_GHCB_ACCESSORS(cpl) +DEFINE_GHCB_ACCESSORS(rip) +DEFINE_GHCB_ACCESSORS(rsp) +DEFINE_GHCB_ACCESSORS(rax) +DEFINE_GHCB_ACCESSORS(rcx) +DEFINE_GHCB_ACCESSORS(rdx) +DEFINE_GHCB_ACCESSORS(rbx) +DEFINE_GHCB_ACCESSORS(rbp) +DEFINE_GHCB_ACCESSORS(rsi) +DEFINE_GHCB_ACCESSORS(rdi) +DEFINE_GHCB_ACCESSORS(r8) +DEFINE_GHCB_ACCESSORS(r9) +DEFINE_GHCB_ACCESSORS(r10) +DEFINE_GHCB_ACCESSORS(r11) +DEFINE_GHCB_ACCESSORS(r12) +DEFINE_GHCB_ACCESSORS(r13) +DEFINE_GHCB_ACCESSORS(r14) +DEFINE_GHCB_ACCESSORS(r15) +DEFINE_GHCB_ACCESSORS(sw_exit_code) +DEFINE_GHCB_ACCESSORS(sw_exit_info_1) +DEFINE_GHCB_ACCESSORS(sw_exit_info_2) +DEFINE_GHCB_ACCESSORS(sw_scratch) +DEFINE_GHCB_ACCESSORS(xcr0) + #endif /* CONFIG_EFI */ #endif /* _X86_AMD_SEV_H_ */ diff --git a/lib/x86/msr.h b/lib/x86/msr.h index 8abccf86..5661b221 100644 --- a/lib/x86/msr.h +++ b/lib/x86/msr.h @@ -151,6 +151,7 @@ #define MSR_AMD64_IBSDCLINAD 0xc0011038 #define MSR_AMD64_IBSDCPHYSAD 0xc0011039 #define MSR_AMD64_IBSCTL 0xc001103a +#define MSR_AMD64_SEV_ES_GHCB 0xc0010130 /* Fam 10h MSRs */ #define MSR_FAM10H_MMIO_CONF_BASE 0xc0010058 diff --git a/lib/x86/processor.h b/lib/x86/processor.h index 44f4fd1e..b324cbf0 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -837,6 +837,14 @@ static inline void set_bit(int bit, u8 *addr) : "+m" (*addr) : "Ir" (bit) : "cc", "memory"); } +static inline int test_bit(int nr, const volatile unsigned long *addr) +{ + const volatile unsigned long *word = addr + BIT_WORD(nr); + unsigned long mask = BIT_MASK(nr); + + return (*word & mask) != 0; +} + static inline void flush_tlb(void) { ulong cr4; diff --git a/lib/x86/svm.h b/lib/x86/svm.h index 0fc64be7..01404010 100644 --- a/lib/x86/svm.h +++ b/lib/x86/svm.h @@ -175,11 +175,13 @@ struct __attribute__ ((__packed__)) vmcb_save_area { struct vmcb_seg ldtr; struct vmcb_seg idtr; struct vmcb_seg tr; - u8 reserved_1[43]; + /* Reserved fields are named following their struct offset */ + u8 reserved_0xa0[42]; + u8 vmpl; u8 cpl; - u8 reserved_2[4]; + u8 reserved_0xcc[4]; u64 efer; - u8 reserved_3[112]; + u8 reserved_0xd8[112]; u64 cr4; u64 cr3; u64 cr0; @@ -187,9 +189,11 @@ struct __attribute__ ((__packed__)) vmcb_save_area { u64 dr6; u64 rflags; u64 rip; - u8 reserved_4[88]; + u8 reserved_0x180[88]; u64 rsp; - u8 reserved_5[24]; + u64 s_cet; + u64 ssp; + u64 isst_addr; u64 rax; u64 star; u64 lstar; @@ -200,13 +204,15 @@ struct __attribute__ ((__packed__)) vmcb_save_area { u64 sysenter_esp; u64 sysenter_eip; u64 cr2; - u8 reserved_6[32]; + u8 reserved_0x248[32]; u64 g_pat; u64 dbgctl; u64 br_from; u64 br_to; u64 last_excp_from; u64 last_excp_to; + u8 reserved_0x298[72]; + u32 spec_ctrl; /* Guest version of SPEC_CTRL at 0x2E0 */ }; struct __attribute__ ((__packed__)) vmcb { @@ -307,6 +313,7 @@ struct __attribute__ ((__packed__)) vmcb { #define SVM_EXIT_WRITE_DR6 0x036 #define SVM_EXIT_WRITE_DR7 0x037 #define SVM_EXIT_EXCP_BASE 0x040 +#define SVM_EXIT_LAST_EXCP 0x05f #define SVM_EXIT_INTR 0x060 #define SVM_EXIT_NMI 0x061 #define SVM_EXIT_SMI 0x062 diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64 index 2771a6fa..c59b3b61 100644 --- a/x86/Makefile.x86_64 +++ b/x86/Makefile.x86_64 @@ -19,6 +19,7 @@ endif fcf_protection_full := $(call cc-option, -fcf-protection=full,) COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full) 
+COMMON_CFLAGS += -Wno-address-of-packed-member cflatobjs += lib/x86/setjmp64.o cflatobjs += lib/x86/intel-iommu.o From patchwork Wed Jun 12 14:45:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695144 Received: from mail-ej1-f42.google.com (mail-ej1-f42.google.com [209.85.218.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0732217E8EB for ; Wed, 12 Jun 2024 14:45:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.42 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203557; cv=none; b=YGALJa5FGcpLUPk7J2NgnuBLvlfudZKOdSm7My7hIgKj68QB9VHi9LF7szBEQOJT1KdbaavJLGjtFMl8qBayOrnuKcsLZtY77ootBgsWyF20whwqQChefgiUD5OUyvA9JCigosJRHS6wIjz0LNmv0AP3NcMO+HMXbeZvrlPfti0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203557; c=relaxed/simple; bh=/k9TRSW7avGLBg0Xf94hpP3GMPSzLyyZ4nKJ+0FZy5o=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Pb8m/1VlTobTxwaTTFXZUY/0U0YXTJspz4gOikK9qK2MwAbLBklxnUq7mJ+QYx74yZ3VC3K48+37CBIDZXpig6amu56L0ib9EPlUHvpZBE3NT5WZr1CnAfb++bMmKl00SGutCtuuHyvlgp/bHjhJ+0rtaeyIhXD/V+aiZjwEQDk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=hM0s90M3; arc=none smtp.client-ip=209.85.218.42 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="hM0s90M3" Received: by mail-ej1-f42.google.com with SMTP id a640c23a62f3a-a6f0c3d0792so287249766b.3 for ; Wed, 12 Jun 2024 07:45:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203554; x=1718808354; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=shE8lsjbK4iNnGN91vjLHfsdXySp8yV1+reUcv8/w/0=; b=hM0s90M3215cVnLPQm6mGHUAQ4t84QDZorWvfwGdbFHg/tXm1z/uD2o0ba/FV/bqCv wXjhTe2iokdngC/Ww4K4O+oKIBFbYvJj5ehEB58ws841w/Eqw91J/j+Glx+zWraRua6x KRbcBS7HsngEEEHOQrSjHbHoKbrM1erjMrF5fK9sx9mgboBhzXiPs01P3Y1NTlhg6fIA hUudpiQZRb+Fm7EOAAYFBxjCVVc9NB1N1XC83MNJ7GocQ2rWK9uwxJ5gIVV+oaM4Uz1T XUhEE8ztY2mK1iiji2L2w6eY7+83sP2dO9w2CNaqze8to2HUewGnxzogIWO8P3tVF4Hx xlXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203554; x=1718808354; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=shE8lsjbK4iNnGN91vjLHfsdXySp8yV1+reUcv8/w/0=; b=d3e2RvrpV8XT+DRmjqD5rMqdTLqW0lEVD6ty4pW/BEfsohvlVhvcSy8GQInFQ4l3YB gitvbGynfnxZQStZ/9HWnrjkBJRYQmSRLDL6d3Xnj/ffaaKfv7zXb+u7/M9wEyv6bv/K hH7sPCe13iKOiX+wqLmJRIkiVCbrFg4ddEp6HGxr/3DuEyMGQjEVIN173u/dQX1ruhI6 IMPzdXx805Dn6nqrlXip/Gt9oGlCGTPJYj9TgwmDiL5zyTCzMAqszsmTdLV+U8/MAhtn is9f5jEhyVzPpDzP+KkKk7t73Ut1XPjGyIBf5we1U0Fg7iQ9chpxsfwLo31ZCzt5mBrb T8cA== X-Gm-Message-State: AOJu0YwklfmyA3Th5jKZxXhQc22j6dJBqvtS9Kl9nT3MMi29odCgKHDA 
8ao/JRrpfTtGxH7zzQ47CU6W+uBSqKDBAUd3Or0phVy5RhmMwPbz+EXlKooG X-Google-Smtp-Source: AGHT+IExI8ssESZtiR0F3hiSwB7uZxMUvEXnITqLnnkNQEU+aoTvkJ65XDavWIdOioIK+R+MHL4JbQ== X-Received: by 2002:a17:906:d9c7:b0:a6f:4e58:ffda with SMTP id a640c23a62f3a-a6f4e590048mr50247866b.50.1718203553694; Wed, 12 Jun 2024 07:45:53 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:52 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 06/12] x86: AMD SEV-ES: Prepare for #VC processing Date: Wed, 12 Jun 2024 16:45:33 +0200 Message-Id: <20240612144539.16147-7-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant Karasulli Lay the groundwork for processing #VC exceptions in the handler. This includes clearing the GHCB, decoding the insn that triggered this #VC, and continuing execution after the exception has been processed. Based on Linux e8c39d0f57f358950356a8e44ee5159f57f86ec5 Signed-off-by: Vasant Karasulli --- lib/x86/amd_sev_vc.c | 87 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 87 insertions(+) -- 2.34.1 diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c index f6227030..30b892f9 100644 --- a/lib/x86/amd_sev_vc.c +++ b/lib/x86/amd_sev_vc.c @@ -1,15 +1,102 @@ // SPDX-License-Identifier: GPL-2.0 +/* + * AMD SEV-ES #VC exception handling. 
+ * Adapted from Linux@b320441c04: + * - arch/x86/kernel/sev.c + * - arch/x86/kernel/sev-shared.c + */ #include "amd_sev.h" +#include "svm.h" extern phys_addr_t ghcb_addr; +static void vc_ghcb_invalidate(struct ghcb *ghcb) +{ + ghcb->save.sw_exit_code = 0; + memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap)); +} + +static bool vc_decoding_needed(unsigned long exit_code) +{ + /* Exceptions don't require to decode the instruction */ + return !(exit_code >= SVM_EXIT_EXCP_BASE && + exit_code <= SVM_EXIT_LAST_EXCP); +} + +static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt) +{ + unsigned char buffer[MAX_INSN_SIZE]; + int ret; + + memcpy(buffer, (unsigned char *)ctxt->regs->rip, MAX_INSN_SIZE); + + ret = insn_decode(&ctxt->insn, buffer, MAX_INSN_SIZE, INSN_MODE_64); + if (ret < 0) + return ES_DECODE_FAILED; + else + return ES_OK; +} + +static enum es_result vc_init_em_ctxt(struct es_em_ctxt *ctxt, + struct ex_regs *regs, + unsigned long exit_code) +{ + enum es_result ret = ES_OK; + + memset(ctxt, 0, sizeof(*ctxt)); + ctxt->regs = regs; + + if (vc_decoding_needed(exit_code)) + ret = vc_decode_insn(ctxt); + + return ret; +} + +static void vc_finish_insn(struct es_em_ctxt *ctxt) +{ + ctxt->regs->rip += ctxt->insn.length; +} + +static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, + struct ghcb *ghcb, + unsigned long exit_code) +{ + enum es_result result; + + switch (exit_code) { + default: + /* + * Unexpected #VC exception + */ + result = ES_UNSUPPORTED; + } + + return result; +} + void handle_sev_es_vc(struct ex_regs *regs) { struct ghcb *ghcb = (struct ghcb *) ghcb_addr; + unsigned long exit_code = regs->error_code; + struct es_em_ctxt ctxt; + enum es_result result; if (!ghcb) { /* TODO: kill guest */ return; } + + vc_ghcb_invalidate(ghcb); + result = vc_init_em_ctxt(&ctxt, regs, exit_code); + if (result == ES_OK) + result = vc_handle_exitcode(&ctxt, ghcb, exit_code); + if (result == ES_OK) { + vc_finish_insn(&ctxt); + } else { + printf("Unable to handle #VC exitcode, exit_code=%lx result=%x\n", + exit_code, result); + } + + return; } From patchwork Wed Jun 12 14:45:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695145 Received: from mail-ej1-f46.google.com (mail-ej1-f46.google.com [209.85.218.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0EB0617E8F6 for ; Wed, 12 Jun 2024 14:45:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203558; cv=none; b=R9RNpJVRFdi8Y3WluA9ox8j+41GQL1nfOUNu1Dt5oy6oyKKxvTXwWJE+UniwjvDB7APaZ9UK0vRy7siXuy2hrOYoJxpry1HC1GGWofCKH2ZvJuBBVx7LX5Idpx9FGDSWmZCU6KC03775p+cVQ+wspLVm3lQi2K81KJkusQIq8Po= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203558; c=relaxed/simple; bh=yqVIQaYfrIrHTNK3OkFmaV+HIjIKWjhtnBb+BAqytCA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=H7p9p+y3HEJ7O7NCTcwbRBeF/rYGhpsnNwkJE3KnsEASMD3WRJPC9jdnoiSTa7enVbWqb7TfbC1WEbR2ts9Fc+9iQqgNm19vdQH08g4N9YfbB8XnlAtzrHVZKZj+96fkQUMZ+rrReyTN0SruTfj/E7NgHnJMy6EBkSCL+/y84lQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com 
header.i=@gmail.com header.b=k9B3Av6H; arc=none smtp.client-ip=209.85.218.46 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="k9B3Av6H" Received: by mail-ej1-f46.google.com with SMTP id a640c23a62f3a-a6269885572so177179966b.1 for ; Wed, 12 Jun 2024 07:45:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203555; x=1718808355; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=txPdngU7ZvAHKIsUQF4Or85ok4oAkjD81XgTYTjxzhA=; b=k9B3Av6Hu6DttWSjFkqx8eQo2orNsEfp6LcLgvUoi38kR77BcgrG3KBt7BuvyO86Yw 4BQgo0BPqilZa7GmumE51pEZNoJ/ZV/Mpz8iVQbzIwpRw/pxdDybyilzgzY4C+tzRv6G WaGwUBT1ZW0Rc6Y5xNeONtOxkBCPxNv29lY1hRoW6qxOs9Uv2fMMx2kS2bbSzTScGqQV I6mcuHFVNAEHyrcgr/eST+DLkXDuWTxgAPZeHghU4CBzpa+VrK5K94AWACZAZxMnvJE0 UA4ulSc+T6+z9IMe246F9Wv/50DFWopMA5cLRkV2m25BpAKywhNtD2EJijX26K00SFeB it7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203555; x=1718808355; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=txPdngU7ZvAHKIsUQF4Or85ok4oAkjD81XgTYTjxzhA=; b=H3sImt2pWbjFrw3hXOOYLi8dcdIgH2VAzHF8B+P4adfn9yNOn2MoefMcT6adv8J7Hi gSnrtUMMqxDgTp4eExcGH9FAVEH1/jAQbxN+ihggYFPQuH0DaF9M7yFBkDT9G2uWgKkz YUuA+os76gDkQB7O9+MY2/2cVwZDbpIdLJ2mycNwDQiJ91c3NyGMG5OwiYEmVmi3zf07 mBkcPksjEinrFTG7HqoASu4xEJG0cMnFYH+A9eak1HqciUFVVBiY7l3F3YK3hdQhzbXC aeaKlvpZZE2qjVwAVfr5nbfK8ouMftVQNxcJljj6y+nwxi8I6oq1nslAH0T6Ueoj/8ai jzNA== X-Gm-Message-State: AOJu0YxGCkH9/b2GQR9Ynd+HFEHTRs6fHLeCzinALiSQ/3a3Qfr4tuWi G65mk8ADahp36Z1xr69AUrCtmLO3860yumrQ56oqEZCV2eG6v5wcFPAwTKao X-Google-Smtp-Source: AGHT+IGtfRMppDEgNRdntb1Mk4m3/ezyjhd5RVMXoPm+oZs2xzoLVjtEDJM6D+wsWb9qoo/5wsklTQ== X-Received: by 2002:a17:907:1186:b0:a6f:2000:9811 with SMTP id a640c23a62f3a-a6f46800881mr143188266b.13.1718203554848; Wed, 12 Jun 2024 07:45:54 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:54 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 07/12] lib/x86: Move xsave helpers to lib/ Date: Wed, 12 Jun 2024 16:45:34 +0200 Message-Id: <20240612144539.16147-8-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant Karasulli Processing CPUID #VC for AMD SEV-ES requires copying xcr0 into GHCB. Move the xsave read/write helpers used by xsave testcase to lib/x86 to share as common code. 
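For context, the CPUID #VC handling added in a later patch relies on roughly the following pattern. This is a sketch only, wrapped in a hypothetical helper current_xcr0(); it uses xgetbv_safe() and the XCR_/XSTATE_ definitions this patch moves into lib/x86/processor.h:

static u64 current_xcr0(void)
{
	u64 xcr0 = XSTATE_FP;	/* architectural reset value of XCR0 */

	/* XGETBV raises #GP unless CR4.OSXSAVE is set */
	if (read_cr4() & X86_CR4_OSXSAVE)
		xgetbv_safe(XCR_XFEATURE_ENABLED_MASK, &xcr0);

	return xcr0;
}

The returned mask is what gets copied into the GHCB for the hypervisor when a CPUID #VC is forwarded.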
Signed-off-by: Vasant Karasulli --- lib/x86/processor.h | 8 ++++++++ x86/xsave.c | 7 ------- 2 files changed, 8 insertions(+), 7 deletions(-) -- 2.34.1 diff --git a/lib/x86/processor.h b/lib/x86/processor.h index b324cbf0..e85f9e0e 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -477,6 +477,14 @@ static inline uint64_t rdpmc(uint32_t index) return val; } +/* XCR0 related definitions */ +#define XCR_XFEATURE_ENABLED_MASK 0x00000000 +#define XCR_XFEATURE_ILLEGAL_MASK 0x00000010 + +#define XSTATE_FP 0x1 +#define XSTATE_SSE 0x2 +#define XSTATE_YMM 0x4 + static inline int xgetbv_safe(u32 index, u64 *result) { return rdreg64_safe(".byte 0x0f,0x01,0xd0", index, result); diff --git a/x86/xsave.c b/x86/xsave.c index 5d80f245..feb8db28 100644 --- a/x86/xsave.c +++ b/x86/xsave.c @@ -17,13 +17,6 @@ static uint64_t get_supported_xcr0(void) return r.a + ((u64)r.d << 32); } -#define XCR_XFEATURE_ENABLED_MASK 0x00000000 -#define XCR_XFEATURE_ILLEGAL_MASK 0x00000010 - -#define XSTATE_FP 0x1 -#define XSTATE_SSE 0x2 -#define XSTATE_YMM 0x4 - static void test_xsave(void) { unsigned long cr4; From patchwork Wed Jun 12 14:45:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695147 Received: from mail-ej1-f48.google.com (mail-ej1-f48.google.com [209.85.218.48]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3143417F514 for ; Wed, 12 Jun 2024 14:45:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.48 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203559; cv=none; b=axr/WB3yQXGl5exgQr8RsJVGsSeb9OmdQOSAr4vrx8K5FH5BRma8aLEUO/Y2UmdP9LETBoHQEsds5jiEaoWDKZ+9imVCUtvT7e0PLrqkZrpWMqyJZJnFr2oRm0Mz1YJb5wTisQOX+kqu1EkHj8bzi51pxkGh8HZkGsKkz30ZeCE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203559; c=relaxed/simple; bh=pWEo4rehcTpG5+D1JR3lXzd0VUms7x1yeKkMzdClPaE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=t3hiJAam9kupMZHU9mZ/A1so5DXUZ+u2kw9eE8n6NhT1NMcShL9mY3/TEqa96QNZXM3KhMJjf+EAh67BpxeHb0q+NTzneYCpqCDVcFjDmXskvDHufrs/tthjfon82ZLa2W2PUtVxrtZqN+QAYWOycypeY0cBlXeo9vG9ZQJbjmI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=FlEassWS; arc=none smtp.client-ip=209.85.218.48 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="FlEassWS" Received: by mail-ej1-f48.google.com with SMTP id a640c23a62f3a-a6f3efa1cc7so179301466b.0 for ; Wed, 12 Jun 2024 07:45:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203556; x=1718808356; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=VPmf+ULw9UxGqX77U4KfH0lMoJbFqr1mImBQcazrTZk=; b=FlEassWS/fRwqgvqRLGm03fNyPAbT0JSlVDXtMT+gz4DHPPHmMvspWvg9Tzh5CwN/C P3x+bvS+98SKjJAOCmiNfae3n6mCfQeovtFp8EvztqGCisrbq52nqfEm2C07/7Qf+uNc 
yo5qiWpLcMd9R3uduwf1Yh6APpv6/EcVrwYxMqG1n3EZzNizY75eso+imU6upCK756+s D1W3DebYjX85vF3wLkewC7WE3JazzjcC8IQXk1XgUfx4TBA2sMqWrPCF4NnEaolRCSDM xh/ATu95JJKzId2/xdibrT2+2Y6UgaIIlCa27UZq9ND/WfRUWGYu0LqLzAO5WI5EB4mG fUSg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203556; x=1718808356; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VPmf+ULw9UxGqX77U4KfH0lMoJbFqr1mImBQcazrTZk=; b=ZUl8E6ELQXpurXRT1K8HEMz7ynuRmSE4buv9ch8PzAb+0Zh513sYc4B45f9SdAkizp BFcOE0uowQMntP/d/BUqGh7q2+Hw7vBV1l3sAXrTQTqQyxe3X7TlbOnaHef3x924JHUX RBH0eruTEvUFtMizbtQlQm0FHqi8ePc/Nf2bdIwDpjjpYe15YBr1p5X4nSCXf++49u9x X/maysIyg2E2/lxErSRLTHrxMc/vmA/AjmMLkjutDzFQWBuYRfbvWQWV2IlBYefIfWmn Rugy8YFFN7NwtotNGkk/Ahg56Wxy0+KNWnVyu1Ybx5Sf0nTNkbCsoehP8TtINL2gYvCr ap4A== X-Gm-Message-State: AOJu0YzFUsvu90qijz0Q6P3qtFnLhILKC/mDJSUf15fbGX4W77QqHcWT bZVdPS09DkwMV5nThDFuSXpuhrC16zSo+AR36snp13E2cjMZVE8yaYVr8IPs X-Google-Smtp-Source: AGHT+IHgyh+3DozoxpWdiGogxIKvfwBWYPSgnUbnEigcpdB/MdyPoWhFal9G9AzFsz1t4A2XkLIyaQ== X-Received: by 2002:a17:906:328f:b0:a6f:4ebd:1463 with SMTP id a640c23a62f3a-a6f4ebd1d37mr46815266b.20.1718203555952; Wed, 12 Jun 2024 07:45:55 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:55 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 08/12] x86: AMD SEV-ES: Handle CPUID #VC Date: Wed, 12 Jun 2024 16:45:35 +0200 Message-Id: <20240612144539.16147-9-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant Karasulli Using Linux's CPUID #VC processing logic. 
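To illustrate what this covers (sketch only, not part of the patch): once the handler below is in place, any CPUID executed by the SEV-ES guest, for example through the existing cpuid() helper, traps as a #VC with exit code SVM_EXIT_CPUID and is completed over the GHCB instead of by the UEFI #VC handler:

	/*
	 * Hypothetical guest-side snippet. The CPUID instruction raises #VC;
	 * the handler forwards RAX/RCX (and XCR0) to the hypervisor via the
	 * GHCB and copies RAX..RDX back before resuming at the next insn.
	 */
	struct cpuid r = cpuid(0x8000001f);	/* SEV feature leaf */
	printf("SEV feature leaf: eax=0x%x\n", r.a);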
Signed-off-by: Vasant Karasulli --- lib/x86/amd_sev.h | 5 ++- lib/x86/amd_sev_vc.c | 85 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 89 insertions(+), 1 deletion(-) -- 2.34.1 diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h index b6b7a13f..efd439fb 100644 --- a/lib/x86/amd_sev.h +++ b/lib/x86/amd_sev.h @@ -71,7 +71,7 @@ struct ghcb { u8 shared_buffer[GHCB_SHARED_BUF_SIZE]; u8 reserved_0xff0[10]; - u16 protocol_version; /* negotiated SEV-ES/GHCB protocol version */ + u16 version; /* version of the GHCB data structure */ u32 ghcb_usage; } __packed; @@ -79,6 +79,9 @@ struct ghcb { #define GHCB_PROTOCOL_MAX 1ULL #define GHCB_DEFAULT_USAGE 0ULL +/* Version of the GHCB data structure */ +#define GHCB_VERSION 1 + #define VMGEXIT() { asm volatile("rep; vmmcall\n\r"); } enum es_result { diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c index 30b892f9..39bca09e 100644 --- a/lib/x86/amd_sev_vc.c +++ b/lib/x86/amd_sev_vc.c @@ -58,6 +58,88 @@ static void vc_finish_insn(struct es_em_ctxt *ctxt) ctxt->regs->rip += ctxt->insn.length; } +static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb, + struct es_em_ctxt *ctxt, + u64 exit_code, u64 exit_info_1, + u64 exit_info_2) +{ + enum es_result ret; + + /* Fill in protocol and format specifiers */ + ghcb->version = GHCB_VERSION; + ghcb->ghcb_usage = GHCB_DEFAULT_USAGE; + + ghcb_set_sw_exit_code(ghcb, exit_code); + ghcb_set_sw_exit_info_1(ghcb, exit_info_1); + ghcb_set_sw_exit_info_2(ghcb, exit_info_2); + + wrmsr(MSR_AMD64_SEV_ES_GHCB, __pa(ghcb)); + VMGEXIT(); + + if ((ghcb->save.sw_exit_info_1 & 0xffffffff) == 1) { + u64 info = ghcb->save.sw_exit_info_2; + unsigned long v; + + v = info & SVM_EVTINJ_VEC_MASK; + + /* Check if exception information from hypervisor is sane. */ + if ((info & SVM_EVTINJ_VALID) && + ((v == GP_VECTOR) || (v == UD_VECTOR)) && + ((info & SVM_EVTINJ_TYPE_MASK) == SVM_EVTINJ_TYPE_EXEPT)) { + ctxt->fi.vector = v; + if (info & SVM_EVTINJ_VALID_ERR) + ctxt->fi.error_code = info >> 32; + ret = ES_EXCEPTION; + } else { + ret = ES_VMM_ERROR; + } + } else if (ghcb->save.sw_exit_info_1 & 0xffffffff) { + ret = ES_VMM_ERROR; + } else { + ret = ES_OK; + } + + return ret; +} + +static enum es_result vc_handle_cpuid(struct ghcb *ghcb, + struct es_em_ctxt *ctxt) +{ + struct ex_regs *regs = ctxt->regs; + u32 cr4 = read_cr4(); + enum es_result ret; + + ghcb_set_rax(ghcb, regs->rax); + ghcb_set_rcx(ghcb, regs->rcx); + + if (cr4 & X86_CR4_OSXSAVE) { + /* Safe to read xcr0 */ + u64 xcr0; + xgetbv_safe(XCR_XFEATURE_ENABLED_MASK, &xcr0); + ghcb_set_xcr0(ghcb, xcr0); + } else { + /* xgetbv will cause #GP - use reset value for xcr0 */ + ghcb_set_xcr0(ghcb, 1); + } + + ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0); + if (ret != ES_OK) + return ret; + + if (!(ghcb_rax_is_valid(ghcb) && + ghcb_rbx_is_valid(ghcb) && + ghcb_rcx_is_valid(ghcb) && + ghcb_rdx_is_valid(ghcb))) + return ES_VMM_ERROR; + + regs->rax = ghcb->save.rax; + regs->rbx = ghcb->save.rbx; + regs->rcx = ghcb->save.rcx; + regs->rdx = ghcb->save.rdx; + + return ES_OK; +} + static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, struct ghcb *ghcb, unsigned long exit_code) @@ -65,6 +147,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, enum es_result result; switch (exit_code) { + case SVM_EXIT_CPUID: + result = vc_handle_cpuid(ghcb, ctxt); + break; default: /* * Unexpected #VC exception From patchwork Wed Jun 12 14:45:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695148 Received: from mail-ej1-f41.google.com (mail-ej1-f41.google.com [209.85.218.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 26BC917F373 for ; Wed, 12 Jun 2024 14:45:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.41 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203560; cv=none; b=ZTZMmV6IxGwbIwi9KgkkkeSSBMQbVkna8S74WY0NqrnZMYhwxJjP+Ba07OAVCEF43RYSTBrMzRhQbhOY6XTBLNu7W+zEofVgO4KiIrkr6EFljsssU56QZIolMA24tn1Lg5vcDTTD6ZSUQq2GRaxpjoHD4UeF+QMWEfx6Gxg1Y3Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203560; c=relaxed/simple; bh=vYy996Id7SG5T+Tpak3oFN1dudkcN2xdXRo6iswSjZE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ll9JXZN6x5/H1eKYxGS3gcKszLtIFLiHHu/MosJ33XWyxztcfvCejsD/AtrNevW921ev6JqlPwxjni91MXQ2V2tYq+ux5/T6KfStc32qadi/r4YzsUPaKMTg4jRGWNXitHpDl4Zw2HQuiB0bOpmC7fFRfdGsWD4msOv59Q/sCss= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=HoKKT/Y7; arc=none smtp.client-ip=209.85.218.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="HoKKT/Y7" Received: by mail-ej1-f41.google.com with SMTP id a640c23a62f3a-a6f13dddf7eso495234666b.0 for ; Wed, 12 Jun 2024 07:45:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203557; x=1718808357; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=aM2Ct0cQV1dfF+w9k5l+MMyl+fTONNWbkjb0FsBk8bY=; b=HoKKT/Y7wVo/0OuScRoStWIB8LnrRGTA+K/3M5huDNAX3omTre6lUch996ZODUbaT3 OB7Wo6DE7tNe60f9g9BwxEV+Rr/6ILUoF6MLg4NuQsk1rojovdHV0LOzPO2LzTmnj8nq MYMqDTy33FM27lt0rCgO6f/IX8C7zjr92CYR8tIt/DOz+rV1pfZc1hyV5xRkpws/+Z9g attXmjxjFOkMz61dsys1jDLcSQ82sZSGm3f4W/Iym/BMGmJl8XIgKKbHqBD/y9QCaXIK wt4BKHJF5vJ2M3ChFWiFu5kLSHVC4Sjidw5sOH2nTTdefDpSnJ2OBN6JbRd8KSFjpN0S nRPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203557; x=1718808357; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=aM2Ct0cQV1dfF+w9k5l+MMyl+fTONNWbkjb0FsBk8bY=; b=YSlNn2gW6StI7v5/q/tXxJkPpdF9TZ1apq88ogZ9gWjjj8o9DV3jlrQhFSfkEZxyE2 j7kSTIlTlX18rfnS1KrwFe9sKchlbcsPkh4PTXzri3wTXGpTX8Jg6ODqu2o0eWCS6Lkq FTIxBZus9n3D/EB80kl9LliWeHamwpbm5i1WyKv4Vi7fOQX48TOqzqHggQZNvMF4UOLe zKVukFB+E1oO+4OqGbpxz7wDKFSq/KcR4oM/wEz7oiOcdUKtcnrax3KBFiKERpzD4m5a Z1GhTYxdjrIi18T/vgHsjxExh3/czCGZHwTe0ImNnVDpWEwwI55ZEpaXR9Hqt/GDIhuM 9UPQ== X-Gm-Message-State: AOJu0Yz0CMYhcAY5x+s6wLJKWRBcz+1oDz/IGMYjon4orgQfBKJoc+AR CAD5Xyab4RhtS0t9rXgp5Q9tvBaYrF3p9l2Ro14trasQ/EgO8bIbsuLD8hKH X-Google-Smtp-Source: AGHT+IERv9S86t7ARxGw6B6q1B8cyEvIp/c+sJ30vC1u4q9Tf5KsAU185AyArehzT9PEJdCxvwXu+w== X-Received: by 2002:a17:906:aac6:b0:a6f:153b:6613 with SMTP id 
a640c23a62f3a-a6f47f52e23mr131378366b.16.1718203557077; Wed, 12 Jun 2024 07:45:57 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:56 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 09/12] x86: AMD SEV-ES: Handle MSR #VC Date: Wed, 12 Jun 2024 16:45:36 +0200 Message-Id: <20240612144539.16147-10-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant Karasulli Using Linux's MSR #VC processing logic. Signed-off-by: Vasant Karasulli --- lib/x86/amd_sev_vc.c | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) -- 2.34.1 diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c index 39bca09e..72253817 100644 --- a/lib/x86/amd_sev_vc.c +++ b/lib/x86/amd_sev_vc.c @@ -140,6 +140,31 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb, return ES_OK; } +static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) +{ + struct ex_regs *regs = ctxt->regs; + enum es_result ret; + u64 exit_info_1; + + /* Is it a WRMSR? */ + exit_info_1 = (ctxt->insn.opcode.bytes[1] == 0x30) ? 1 : 0; + + ghcb_set_rcx(ghcb, regs->rcx); + if (exit_info_1) { + ghcb_set_rax(ghcb, regs->rax); + ghcb_set_rdx(ghcb, regs->rdx); + } + + ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0); + + if ((ret == ES_OK) && (!exit_info_1)) { + regs->rax = ghcb->save.rax; + regs->rdx = ghcb->save.rdx; + } + + return ret; +} + static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, struct ghcb *ghcb, unsigned long exit_code) @@ -150,6 +175,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, case SVM_EXIT_CPUID: result = vc_handle_cpuid(ghcb, ctxt); break; + case SVM_EXIT_MSR: + result = vc_handle_msr(ghcb, ctxt); + break; default: /* * Unexpected #VC exception From patchwork Wed Jun 12 14:45:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695149 Received: from mail-ej1-f54.google.com (mail-ej1-f54.google.com [209.85.218.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 54D6E17FABC for ; Wed, 12 Jun 2024 14:46:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.54 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203562; cv=none; b=mP8gGHYiHosl78Mxe1rN2wfGIU/MTSXfe0CaEE92zipvHzYjrdWRY8Omch1O0dJZE8fM5DESAxcArZIUM34Bcdle80QohVnoq28ebIwl/fjX3bvaw0Y6hzt9oxZtILpDgbnbXsSK8PnXoRaJQ1qq/Gnen04CfEiUvkwzpNPTjuw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203562; c=relaxed/simple; bh=ADSy9qiOVZPb69AWewC4lMF1QZ1Yc+pUnevzFE6jw2s=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
b=mDsJNY6w6bzo0sp4qhHIU7OI0h27eaEd8LnpGeyk3gqNmNbL7f1Q2PsKmNRx+/9URdFdvqGCorxGbawOfbTTb80vvveUK0WEtAQObkm+itSI6lCxZhvY/l/Qas8eXeOxGBMWLRPOCjHtAk5BPXTbf9kB4VFl3i/CrM1Mg8M9R0o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=Z09m3mTM; arc=none smtp.client-ip=209.85.218.54 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Z09m3mTM" Received: by mail-ej1-f54.google.com with SMTP id a640c23a62f3a-a6f09eaf420so464482266b.3 for ; Wed, 12 Jun 2024 07:46:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718203558; x=1718808358; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LmMu36s/tTTZOWVF+OkMRemw2+tCfV7gSSWNOOYUC5s=; b=Z09m3mTMFl+GXNwBPb2BRoOD7z952Ok2vTc1m4NUBsPLKpb/ugtRmoOeXdJkw2Getw 88e1pyq72MGW/RE/BxdCqdF5+3NEUc6WlMIbIzFqxL+r1y13Q9WgQZX9XQGBZKGq04z1 gTpF9cWaswvYfwNuvH08fjkntfSZBcrQ+ptAliF26120UX8w4iqjChgQoUrtaAHJ6Em1 doni+bdbj64rIsehU2dCC/Bzl7MMbslFhh5jXlAaWz/+45sEorz/jghiwaOZoik1LdKi in5ZyGG6iFr9TM7iCsT6c4kDr/RkJNcWSeMeoUtIMqOqWK/u0i5sUJsNVc75FjnpamaV tuJw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718203558; x=1718808358; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LmMu36s/tTTZOWVF+OkMRemw2+tCfV7gSSWNOOYUC5s=; b=DNqrmo5fV5+XKgJ8TBvWtkW5LBmMn3kW7lCXIDXhuWNZNUCqbqSBfOz6iufs+iddYJ mPOeJATfgXBVOO242f4SBnnRdfmvS488KshdSUV792cULObGHeR2+uk+SUTzHKTn5pEL qnaQhU0YFwIxkL3q/+PQ+UK5FhIfHiiuH0WzrMCBL0oggn805sUVBEspwvym+9HqkI3Z VZJMXoqFGOh4kmYSRnFmkdodZ9CT+4oB39lfbyciv7d/MFBYU9QRGbQJZTobdmuQPWKs paWjWgqlh9NAgynTwoT4f42a0g2z6uvAqfVwXCuvKEKBZGh5dkZIf/O9qghldVAe1RJ2 urjg== X-Gm-Message-State: AOJu0YzV5Qm/0wPS3DdsjR/X5y4F/fzX8i+sHEZXYXuSurkGuqtypHGl xsWmDjstJFfFotUelkXEn7hjsHPoKnY/fr3xJX4fdOenuYaHExCoihrYfcuC X-Google-Smtp-Source: AGHT+IEEszAid0vXSJ1DflYJuzp3wbER50YuS8o5noYd7Z/h1t/YaVjxkz74c8Q5zn24M1e/i1K1HQ== X-Received: by 2002:a17:907:9485:b0:a6f:201a:299a with SMTP id a640c23a62f3a-a6f48013ba4mr151708066b.60.1718203558068; Wed, 12 Jun 2024 07:45:58 -0700 (PDT) Received: from vasant-suse.suse.cz ([2001:9e8:ab7c:f800:473b:7cbe:2ac7:effa]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f18bbf3cbsm456440366b.1.2024.06.12.07.45.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 12 Jun 2024 07:45:57 -0700 (PDT) From: vsntk18@gmail.com To: kvm@vger.kernel.org Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com, pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de Subject: [kvm-unit-tests PATCH v8 10/12] x86: AMD SEV-ES: Handle IOIO #VC Date: Wed, 12 Jun 2024 16:45:37 +0200 Message-Id: <20240612144539.16147-11-vsntk18@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com> References: <20240612144539.16147-1-vsntk18@gmail.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Vasant 
Karasulli Using Linux's IOIO #VC processing logic. Signed-off-by: Vasant Karasulli --- lib/x86/amd_sev_vc.c | 150 +++++++++++++++++++++++++++++++++++++++++++ lib/x86/processor.h | 7 ++ lib/x86/svm.h | 19 ++++++ 3 files changed, 176 insertions(+) -- 2.34.1 diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c index 72253817..0cdb9c06 100644 --- a/lib/x86/amd_sev_vc.c +++ b/lib/x86/amd_sev_vc.c @@ -165,6 +165,153 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) return ret; } +/** + * insn_has_rep_prefix() - Determine if instruction has a REP prefix + * @insn: Instruction containing the prefix to inspect + * + * Returns: + * + * 1 if the instruction has a REP prefix, 0 if not. + */ +static int insn_has_rep_prefix(struct insn *insn) +{ + insn_byte_t p; + int i; + + insn_get_prefixes(insn); + + for_each_insn_prefix(insn, i, p) { + if (p == 0xf2 || p == 0xf3) + return 1; + } + + return 0; +} + +static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo) +{ + struct insn *insn = &ctxt->insn; + *exitinfo = 0; + + switch (insn->opcode.bytes[0]) { + /* INS opcodes */ + case 0x6c: + case 0x6d: + *exitinfo |= SVM_IOIO_TYPE_INS; + *exitinfo |= SVM_IOIO_SEG_ES; + *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16; + break; + + /* OUTS opcodes */ + case 0x6e: + case 0x6f: + *exitinfo |= SVM_IOIO_TYPE_OUTS; + *exitinfo |= SVM_IOIO_SEG_DS; + *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16; + break; + + /* IN immediate opcodes */ + case 0xe4: + case 0xe5: + *exitinfo |= SVM_IOIO_TYPE_IN; + *exitinfo |= (u8)insn->immediate.value << 16; + break; + + /* OUT immediate opcodes */ + case 0xe6: + case 0xe7: + *exitinfo |= SVM_IOIO_TYPE_OUT; + *exitinfo |= (u8)insn->immediate.value << 16; + break; + + /* IN register opcodes */ + case 0xec: + case 0xed: + *exitinfo |= SVM_IOIO_TYPE_IN; + *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16; + break; + + /* OUT register opcodes */ + case 0xee: + case 0xef: + *exitinfo |= SVM_IOIO_TYPE_OUT; + *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16; + break; + + default: + return ES_DECODE_FAILED; + } + + switch (insn->opcode.bytes[0]) { + case 0x6c: + case 0x6e: + case 0xe4: + case 0xe6: + case 0xec: + case 0xee: + /* Single byte opcodes */ + *exitinfo |= SVM_IOIO_DATA_8; + break; + default: + /* Length determined by instruction parsing */ + *exitinfo |= (insn->opnd_bytes == 2) ? 
SVM_IOIO_DATA_16 + : SVM_IOIO_DATA_32; + } + switch (insn->addr_bytes) { + case 2: + *exitinfo |= SVM_IOIO_ADDR_16; + break; + case 4: + *exitinfo |= SVM_IOIO_ADDR_32; + break; + case 8: + *exitinfo |= SVM_IOIO_ADDR_64; + break; + } + + if (insn_has_rep_prefix(insn)) + *exitinfo |= SVM_IOIO_REP; + + return ES_OK; +} + +static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) +{ + struct ex_regs *regs = ctxt->regs; + u64 exit_info_1; + enum es_result ret; + + ret = vc_ioio_exitinfo(ctxt, &exit_info_1); + if (ret != ES_OK) + return ret; + + if (exit_info_1 & SVM_IOIO_TYPE_STR) { + ret = ES_VMM_ERROR; + } else { + /* IN/OUT into/from rAX */ + + int bits = (exit_info_1 & 0x70) >> 1; + u64 rax = 0; + + if (!(exit_info_1 & SVM_IOIO_TYPE_IN)) + rax = lower_bits(regs->rax, bits); + + ghcb_set_rax(ghcb, rax); + + ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, exit_info_1, 0); + if (ret != ES_OK) + return ret; + + if (exit_info_1 & SVM_IOIO_TYPE_IN) { + if (!ghcb_rax_is_valid(ghcb)) + return ES_VMM_ERROR; + regs->rax = lower_bits(ghcb->save.rax, bits); + } + } + + return ret; +} + static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, struct ghcb *ghcb, unsigned long exit_code) @@ -178,6 +325,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt, case SVM_EXIT_MSR: result = vc_handle_msr(ghcb, ctxt); break; + case SVM_EXIT_IOIO: + result = vc_handle_ioio(ghcb, ctxt); + break; default: /* * Unexpected #VC exception diff --git a/lib/x86/processor.h b/lib/x86/processor.h index e85f9e0e..a7dd3de3 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -853,6 +853,13 @@ static inline int test_bit(int nr, const volatile unsigned long *addr) return (*word & mask) != 0; } +static inline u64 lower_bits(u64 val, unsigned int bits) +{ + u64 mask = (1ULL << bits) - 1; + + return (val & mask); +} + static inline void flush_tlb(void) { ulong cr4; diff --git a/lib/x86/svm.h b/lib/x86/svm.h index 01404010..96e17dc3 100644 --- a/lib/x86/svm.h +++ b/lib/x86/svm.h @@ -151,6 +151,25 @@ struct __attribute__ ((__packed__)) vmcb_control_area { #define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT) #define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT) +#define SVM_IOIO_TYPE_STR BIT(2) +#define SVM_IOIO_TYPE_IN 1 +#define SVM_IOIO_TYPE_INS (SVM_IOIO_TYPE_IN | SVM_IOIO_TYPE_STR) +#define SVM_IOIO_TYPE_OUT 0 +#define SVM_IOIO_TYPE_OUTS (SVM_IOIO_TYPE_OUT | SVM_IOIO_TYPE_STR) + +#define SVM_IOIO_REP BIT(3) + +#define SVM_IOIO_ADDR_64 BIT(9) +#define SVM_IOIO_ADDR_32 BIT(8) +#define SVM_IOIO_ADDR_16 BIT(7) + +#define SVM_IOIO_DATA_32 BIT(6) +#define SVM_IOIO_DATA_16 BIT(5) +#define SVM_IOIO_DATA_8 BIT(4) + +#define SVM_IOIO_SEG_ES (0 << 10) +#define SVM_IOIO_SEG_DS (3 << 10) + #define SVM_VM_CR_VALID_MASK 0x001fULL #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL #define SVM_VM_CR_SVM_DIS_MASK 0x0010ULL From patchwork Wed Jun 12 14:45:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vasant Karasulli X-Patchwork-Id: 13695150 Received: from mail-ej1-f46.google.com (mail-ej1-f46.google.com [209.85.218.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CC59617FAD2 for ; Wed, 12 Jun 2024 14:46:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718203563; cv=none; 
From patchwork Wed Jun 12 14:45:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vasant Karasulli
X-Patchwork-Id: 13695150
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com,
 pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 11/12] x86: AMD SEV-ES: Handle string IO for
 IOIO #VC
Date: Wed, 12 Jun 2024 16:45:38 +0200
Message-Id: <20240612144539.16147-12-vsntk18@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0

From: Vasant Karasulli

Using Linux's IOIO #VC processing logic.

Signed-off-by: Vasant Karasulli
---
 lib/x86/amd_sev_vc.c | 108 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 106 insertions(+), 2 deletions(-)

--
2.34.1

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 0cdb9c06..77892edd 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -275,10 +275,46 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	return ES_OK;
 }
 
+static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
+					  void *src, unsigned char *buf,
+					  unsigned int data_size,
+					  unsigned int count,
+					  bool backwards)
+{
+	int i, b = backwards ? -1 : 1;
+
+	for (i = 0; i < count; i++) {
+		void *s = src + (i * data_size * b);
+		unsigned char *d = buf + (i * data_size);
+
+		memcpy(d, s, data_size);
+	}
+
+	return ES_OK;
+}
+
+static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
+					   void *dst, unsigned char *buf,
+					   unsigned int data_size,
+					   unsigned int count,
+					   bool backwards)
+{
+	int i, s = backwards ? -1 : 1;
+
+	for (i = 0; i < count; i++) {
+		void *d = dst + (i * data_size * s);
+		unsigned char *b = buf + (i * data_size);
+
+		memcpy(d, b, data_size);
+	}
+
+	return ES_OK;
+}
+
 static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 {
 	struct ex_regs *regs = ctxt->regs;
-	u64 exit_info_1;
+	u64 exit_info_1, exit_info_2;
 	enum es_result ret;
 
 	ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
@@ -286,7 +322,75 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 		return ret;
 
 	if (exit_info_1 & SVM_IOIO_TYPE_STR) {
-		ret = ES_VMM_ERROR;
+		/* (REP) INS/OUTS */
+
+		bool df = ((regs->rflags & X86_EFLAGS_DF) == X86_EFLAGS_DF);
+		unsigned int io_bytes, exit_bytes;
+		unsigned int ghcb_count, op_count;
+		unsigned long es_base;
+		u64 sw_scratch;
+
+		/*
+		 * For the string variants with rep prefix the amount of in/out
+		 * operations per #VC exception is limited so that the kernel
+		 * has a chance to take interrupts and re-schedule while the
+		 * instruction is emulated.
+		 */
+		io_bytes = (exit_info_1 >> 4) & 0x7;
+		ghcb_count = sizeof(ghcb->shared_buffer) / io_bytes;
+
+		op_count = (exit_info_1 & SVM_IOIO_REP) ? regs->rcx : 1;
+		exit_info_2 = op_count < ghcb_count ? op_count : ghcb_count;
+		exit_bytes = exit_info_2 * io_bytes;
+
+		es_base = 0;
+
+		/* Read bytes of OUTS into the shared buffer */
+		if (!(exit_info_1 & SVM_IOIO_TYPE_IN)) {
+			ret = vc_insn_string_read(ctxt,
+						  (void *)(es_base + regs->rsi),
+						  ghcb->shared_buffer, io_bytes,
+						  exit_info_2, df);
+			if (ret)
+				return ret;
+		}
+
+		/*
+		 * Issue an VMGEXIT to the HV to consume the bytes from the
+		 * shared buffer or to have it write them into the shared buffer
+		 * depending on the instruction: OUTS or INS.
+		 */
+		sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer);
+		ghcb_set_sw_scratch(ghcb, sw_scratch);
+		ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO,
+					  exit_info_1, exit_info_2);
+		if (ret != ES_OK)
+			return ret;
+
+		/* Read bytes from shared buffer into the guest's destination. */
+		if (exit_info_1 & SVM_IOIO_TYPE_IN) {
+			ret = vc_insn_string_write(ctxt,
+						   (void *)(es_base + regs->rdi),
+						   ghcb->shared_buffer, io_bytes,
+						   exit_info_2, df);
+			if (ret)
+				return ret;
+
+			if (df)
+				regs->rdi -= exit_bytes;
+			else
+				regs->rdi += exit_bytes;
+		} else {
+			if (df)
+				regs->rsi -= exit_bytes;
+			else
+				regs->rsi += exit_bytes;
+		}
+
+		if (exit_info_1 & SVM_IOIO_REP)
+			regs->rcx -= exit_info_2;
+
+		ret = regs->rcx ? ES_RETRY : ES_OK;
 	} else {
 		/* IN/OUT into/from rAX */
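The string path above never transfers more per VMGEXIT than fits into the
GHCB shared buffer: exit_info_2 is clamped to ghcb_count and the handler
returns ES_RETRY until rCX drains, so a long rep OUTSW/INSW is emulated
across several #VC exceptions. The standalone sketch below is not part of
the patch and only mirrors that arithmetic; the 2048-byte buffer size is an
assumed placeholder for sizeof(ghcb->shared_buffer), and the iteration count
is an arbitrary example:

/*
 * Standalone sketch of the rep-string chunking used above for "rep outsw".
 * GHCB_SHARED_BUF_SIZE is an assumed placeholder; the real value comes from
 * sizeof(ghcb->shared_buffer).
 */
#include <stdint.h>
#include <stdio.h>

#define GHCB_SHARED_BUF_SIZE 2048u	/* placeholder, see struct ghcb */

int main(void)
{
	uint64_t rcx = 3000;		/* outstanding REP iterations   */
	unsigned int io_bytes = 2;	/* OUTSW moves 16-bit words     */
	unsigned int ghcb_count = GHCB_SHARED_BUF_SIZE / io_bytes;
	unsigned int vc_exits = 0;

	while (rcx) {
		/* mirrors: exit_info_2 = op_count < ghcb_count ? ... */
		uint64_t exit_info_2 = rcx < ghcb_count ? rcx : ghcb_count;

		rcx -= exit_info_2;	/* mirrors: regs->rcx -= exit_info_2 */
		vc_exits++;
	}

	printf("emulated in %u #VC exits\n", vc_exits);
	return 0;
}

Under those assumptions a rep outsw of 3000 words is split into chunks of
1024, 1024 and 952 elements, i.e. three #VC round trips.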
From patchwork Wed Jun 12 14:45:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vasant Karasulli
X-Patchwork-Id: 13695151
From: vsntk18@gmail.com
To: kvm@vger.kernel.org
Cc: vsntk18@gmail.com, andrew.jones@linux.dev, jroedel@suse.de, papaluri@amd.com,
 pbonzini@redhat.com, seanjc@google.com, vkarasulli@suse.de
Subject: [kvm-unit-tests PATCH v8 12/12] lib/x86: remove unused SVM_IOIO_* macros
Date: Wed, 12 Jun 2024 16:45:39 +0200
Message-Id: <20240612144539.16147-13-vsntk18@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240612144539.16147-1-vsntk18@gmail.com>
References: <20240612144539.16147-1-vsntk18@gmail.com>
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0

From: Vasant Karasulli

svm.h contains many such macros that are not used anywhere.

Signed-off-by: Vasant Karasulli
---
 lib/x86/svm.h | 9 ---------
 1 file changed, 9 deletions(-)

--
2.34.1

diff --git a/lib/x86/svm.h b/lib/x86/svm.h
index 96e17dc3..53bf115a 100644
--- a/lib/x86/svm.h
+++ b/lib/x86/svm.h
@@ -140,16 +140,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 
 #define SVM_INTERRUPT_SHADOW_MASK 1
 
-#define SVM_IOIO_STR_SHIFT 2
-#define SVM_IOIO_REP_SHIFT 3
 #define SVM_IOIO_SIZE_SHIFT 4
-#define SVM_IOIO_ASIZE_SHIFT 7
-
-#define SVM_IOIO_TYPE_MASK 1
-#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
-#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
-#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
-#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
 
 #define SVM_IOIO_TYPE_STR BIT(2)
 #define SVM_IOIO_TYPE_IN 1
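The shift/mask constants dropped here had been superseded by the BIT()-style
definitions introduced earlier in the series; both spell the same bit
positions in the IOIO exit information word. The standalone compile-time
check below is not part of the patch; it simply repeats both sets of values
locally to make the equivalence explicit:

/*
 * Standalone check: the removed shift/mask forms and the kept BIT()-style
 * forms describe the same bits.  All values are copied locally from svm.h
 * before and after this change.
 */
#include <assert.h>

#define BIT(n)			(1u << (n))

/* removed shift/mask forms */
#define SVM_IOIO_STR_SHIFT	2
#define SVM_IOIO_REP_SHIFT	3
#define SVM_IOIO_SIZE_SHIFT	4
#define SVM_IOIO_STR_MASK	(1 << SVM_IOIO_STR_SHIFT)
#define SVM_IOIO_REP_MASK	(1 << SVM_IOIO_REP_SHIFT)
#define SVM_IOIO_SIZE_MASK	(7 << SVM_IOIO_SIZE_SHIFT)

/* BIT()-style forms kept in svm.h */
#define SVM_IOIO_TYPE_STR	BIT(2)
#define SVM_IOIO_REP		BIT(3)
#define SVM_IOIO_DATA_8		BIT(4)
#define SVM_IOIO_DATA_16	BIT(5)
#define SVM_IOIO_DATA_32	BIT(6)

int main(void)
{
	static_assert(SVM_IOIO_STR_MASK == SVM_IOIO_TYPE_STR, "STR bit");
	static_assert(SVM_IOIO_REP_MASK == SVM_IOIO_REP, "REP bit");
	static_assert(SVM_IOIO_SIZE_MASK ==
		      (SVM_IOIO_DATA_8 | SVM_IOIO_DATA_16 | SVM_IOIO_DATA_32),
		      "size field");
	return 0;
}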