From patchwork Mon Jul 5 10:46:25 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 1/8] target/i386: Declare constants for XSAVE offsets
Date: Mon, 5 Jul 2021 11:46:25 +0100
Message-Id: <20210705104632.2902400-2-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Declare and use manifest constants for the XSAVE state component
offsets.

Signed-off-by: David Edmondson
---
 target/i386/cpu.h | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index f7fa5870b1..aedb8f2e01 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1305,6 +1305,22 @@ typedef struct XSavePKRU {
     uint32_t padding;
 } XSavePKRU;
 
+#define XSAVE_FCW_FSW_OFFSET 0x000
+#define XSAVE_FTW_FOP_OFFSET 0x004
+#define XSAVE_CWD_RIP_OFFSET 0x008
+#define XSAVE_CWD_RDP_OFFSET 0x010
+#define XSAVE_MXCSR_OFFSET 0x018
+#define XSAVE_ST_SPACE_OFFSET 0x020
+#define XSAVE_XMM_SPACE_OFFSET 0x0a0
+#define XSAVE_XSTATE_BV_OFFSET 0x200
+#define XSAVE_AVX_OFFSET 0x240
+#define XSAVE_BNDREG_OFFSET 0x3c0
+#define XSAVE_BNDCSR_OFFSET 0x400
+#define XSAVE_OPMASK_OFFSET 0x440
+#define XSAVE_ZMM_HI256_OFFSET 0x480
+#define XSAVE_HI16_ZMM_OFFSET 0x680
+#define XSAVE_PKRU_OFFSET 0xa80
+
 typedef struct X86XSaveArea {
     X86LegacyXSaveArea legacy;
     X86XSaveHeader header;
@@ -1325,19 +1341,19 @@ typedef struct X86XSaveArea {
     XSavePKRU pkru_state;
 } X86XSaveArea;
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != 0x240);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != 0x3c0);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != 0x400);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != 0x440);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveOpmask) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != 0x480);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != 0x680);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != 0xA80);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
 typedef enum TPRAccess {
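To see concretely what these build-time checks enforce, here is a small
stand-alone sketch, not part of the patch: it pins a structure field to a
fixed offset at compile time in the same spirit as QEMU_BUILD_BUG_ON() and
offsetof() above. The struct demo_xsave type and its field sizes are
invented for illustration only; 0x240 = 512 (legacy region) + 64 (header)
is the standard offset of the AVX component.

    /* Stand-alone sketch (C11); demo_xsave is a made-up, simplified type. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct demo_xsave {
        uint8_t legacy[512];      /* x87/SSE legacy region */
        uint8_t header[64];       /* XSAVE header          */
        uint8_t ymmh[256];        /* AVX component         */
    };

    /* 0x240 == 512 + 64: the AVX component follows the header directly. */
    static_assert(offsetof(struct demo_xsave, ymmh) == 0x240,
                  "AVX state must sit at the standard offset");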
From patchwork Mon Jul 5 10:46:26 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 2/8] target/i386: Consolidate the X86XSaveArea offset checks
Date: Mon, 5 Jul 2021 11:46:26 +0100
Message-Id: <20210705104632.2902400-3-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Rather than having similar but different checks in cpu.h and kvm.c,
move them all to cpu.h.
---
 target/i386/cpu.h     | 22 +++++++++++++++-------
 target/i386/kvm/kvm.c | 39 ---------------------------------------
 2 files changed, 15 insertions(+), 46 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index aedb8f2e01..6590ad6391 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1341,21 +1341,29 @@ typedef struct X86XSaveArea {
     XSavePKRU pkru_state;
 } X86XSaveArea;
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveOpmask) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
+
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
     TPR_ACCESS_WRITE,
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 04e4ec063f..3ab1d71775 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -2466,45 +2466,6 @@ static int kvm_put_fpu(X86CPU *cpu)
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_FPU, &fpu);
 }
 
-#define XSAVE_FCW_FSW 0
-#define XSAVE_FTW_FOP 1
-#define XSAVE_CWD_RIP 2
-#define XSAVE_CWD_RDP 4
-#define XSAVE_MXCSR 6
-#define XSAVE_ST_SPACE 8
-#define XSAVE_XMM_SPACE 40
-#define XSAVE_XSTATE_BV 128
-#define XSAVE_YMMH_SPACE 144
-#define XSAVE_BNDREGS 240
-#define XSAVE_BNDCSR 256
-#define XSAVE_OPMASK 272
-#define XSAVE_ZMM_Hi256 288
-#define XSAVE_Hi16_ZMM 416
-#define XSAVE_PKRU 672
-
-#define XSAVE_BYTE_OFFSET(word_offset) \
-    ((word_offset) * sizeof_field(struct kvm_xsave, region[0]))
-
-#define ASSERT_OFFSET(word_offset, field) \
-    QEMU_BUILD_BUG_ON(XSAVE_BYTE_OFFSET(word_offset) != \
-                      offsetof(X86XSaveArea, field))
-
-ASSERT_OFFSET(XSAVE_FCW_FSW, legacy.fcw);
-ASSERT_OFFSET(XSAVE_FTW_FOP, legacy.ftw);
-ASSERT_OFFSET(XSAVE_CWD_RIP, legacy.fpip);
-ASSERT_OFFSET(XSAVE_CWD_RDP, legacy.fpdp);
-ASSERT_OFFSET(XSAVE_MXCSR, legacy.mxcsr);
-ASSERT_OFFSET(XSAVE_ST_SPACE, legacy.fpregs);
-ASSERT_OFFSET(XSAVE_XMM_SPACE, legacy.xmm_regs);
-ASSERT_OFFSET(XSAVE_XSTATE_BV, header.xstate_bv);
-ASSERT_OFFSET(XSAVE_YMMH_SPACE, avx_state);
-ASSERT_OFFSET(XSAVE_BNDREGS, bndreg_state);
-ASSERT_OFFSET(XSAVE_BNDCSR, bndcsr_state);
-ASSERT_OFFSET(XSAVE_OPMASK, opmask_state);
-ASSERT_OFFSET(XSAVE_ZMM_Hi256, zmm_hi256_state);
-ASSERT_OFFSET(XSAVE_Hi16_ZMM, hi16_zmm_state);
-ASSERT_OFFSET(XSAVE_PKRU, pkru_state);
-
 static int kvm_put_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
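The word offsets removed from kvm.c and the byte offsets now asserted in
cpu.h describe the same layout: each element of struct kvm_xsave's
region[] is a 4-byte word, so multiplying by 4 recovers the byte
constants from patch 1. A stand-alone sketch, not part of the patch
(WORD_TO_BYTE is a made-up name standing in for XSAVE_BYTE_OFFSET):

    /* Stand-alone check of the word-to-byte correspondence (C11). */
    #include <assert.h>

    #define WORD_TO_BYTE(w) ((w) * 4)   /* sizeof(__u32) region word */

    static_assert(WORD_TO_BYTE(128) == 0x200, "XSTATE_BV");
    static_assert(WORD_TO_BYTE(144) == 0x240, "AVX (YMMH)");
    static_assert(WORD_TO_BYTE(240) == 0x3c0, "BNDREGS");
    static_assert(WORD_TO_BYTE(256) == 0x400, "BNDCSR");
    static_assert(WORD_TO_BYTE(272) == 0x440, "OPMASK");
    static_assert(WORD_TO_BYTE(288) == 0x480, "ZMM_Hi256");
    static_assert(WORD_TO_BYTE(416) == 0x680, "Hi16_ZMM");
    static_assert(WORD_TO_BYTE(672) == 0xa80, "PKRU");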
From patchwork Mon Jul 5 10:46:27 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 3/8] target/i386: Clarify the padding requirements of X86XSaveArea
Date: Mon, 5 Jul 2021 11:46:27 +0100
Message-Id: <20210705104632.2902400-4-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Replace the hard-coded size of offsets or structure elements with
defined constants or sizeof().

Signed-off-by: David Edmondson
---
 target/i386/cpu.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 6590ad6391..92f9ca264c 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1329,7 +1329,13 @@ typedef struct X86XSaveArea {
 
     /* AVX State: */
     XSaveAVX avx_state;
-    uint8_t padding[960 - 576 - sizeof(XSaveAVX)];
+
+    /* Ensure that XSaveBNDREG is properly aligned. */
+    uint8_t padding[XSAVE_BNDREG_OFFSET
+                    - sizeof(X86LegacyXSaveArea)
+                    - sizeof(X86XSaveHeader)
+                    - sizeof(XSaveAVX)];
+
     /* MPX State: */
     XSaveBNDREG bndreg_state;
     XSaveBNDCSR bndcsr_state;
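Both padding expressions evaluate to the same 128 bytes once the standard
component sizes are filled in (legacy region 512, header 64, XSaveAVX
0x100, as asserted in cpu.h): 960 - 576 - 256 = 128, and
0x3c0 - 0x200 - 0x40 - 0x100 = 0x80 = 128. A stand-alone check, not part
of the patch, with the constant value repeated locally so it compiles on
its own:

    /* Stand-alone arithmetic check of the old and new padding sizes (C11). */
    #include <assert.h>

    #define XSAVE_BNDREG_OFFSET 0x3c0   /* value declared in patch 1 */

    static_assert(960 - 576 - 0x100 == 128, "old expression");
    static_assert(XSAVE_BNDREG_OFFSET - 512 - 64 - 0x100 == 128,
                  "new expression");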
From patchwork Mon Jul 5 10:46:28 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 4/8] target/i386: Pass buffer and length to XSAVE helper
Date: Mon, 5 Jul 2021 11:46:28 +0100
Message-Id: <20210705104632.2902400-5-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

In preparation for removing assumptions about XSAVE area offsets, pass
a buffer pointer and buffer length to the XSAVE helper functions.

Signed-off-by: David Edmondson
---
 target/i386/cpu.h          |  5 +++--
 target/i386/hvf/hvf.c      |  3 ++-
 target/i386/hvf/x86hvf.c   | 19 ++++++++-----------
 target/i386/kvm/kvm.c      | 13 +++++++------
 target/i386/xsave_helper.c | 17 +++++++++--------
 5 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 92f9ca264c..ada2941c6e 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1667,6 +1667,7 @@ typedef struct CPUX86State {
     uint64_t apic_bus_freq;
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     void *xsave_buf;
+    uint32_t xsave_buf_len;
 #endif
 #if defined(CONFIG_KVM)
     struct kvm_nested_state *nested_state;
@@ -2227,8 +2228,8 @@ void x86_cpu_dump_local_apic_state(CPUState *cs, int flags);
 /* cpu.c */
 bool cpu_is_bsp(X86CPU *cpu);
 
-void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf);
-void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf);
+void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen);
+void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen);
 void x86_update_hflags(CPUX86State* env);
 
 static inline bool hyperv_feat_enabled(X86CPU *cpu, int feat)
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 346dbcc26f..e62e8df028 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -267,7 +267,8 @@ int hvf_arch_init_vcpu(CPUState *cpu)
     wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
 
     x86cpu = X86_CPU(cpu);
-    x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
+    x86cpu->env.xsave_buf_len = 4096;
+    x86cpu->env.xsave_buf = qemu_memalign(4096, x86cpu->env.xsave_buf_len);
 
     hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
     hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 2ced2c2478..05ec1bddc4 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -73,14 +73,12 @@ void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg)
 
 void hvf_put_xsave(CPUState *cpu_state)
 {
+    void *xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    uint32_t xsave_len = X86_CPU(cpu_state)->env.xsave_buf_len;
 
-    struct X86XSaveArea *xsave;
+    x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave, xsave_len);
 
-    xsave = X86_CPU(cpu_state)->env.xsave_buf;
-
-    x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
-
-    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, xsave, xsave_len)) {
         abort();
     }
 }
@@ -158,15 +156,14 @@ void hvf_put_msrs(CPUState *cpu_state)
 
 void hvf_get_xsave(CPUState *cpu_state)
 {
-    struct X86XSaveArea *xsave;
-
-    xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    void *xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    uint32_t xsave_len = X86_CPU(cpu_state)->env.xsave_buf_len;
 
-    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, xsave, xsave_len)) {
         abort();
     }
 
-    x86_cpu_xrstor_all_areas(X86_CPU(cpu_state), xsave);
+    x86_cpu_xrstor_all_areas(X86_CPU(cpu_state), xsave, xsave_len);
 }
 
 void hvf_get_segments(CPUState *cpu_state)
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 3ab1d71775..41b0764ab7 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -1888,8 +1888,9 @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
 
     if (has_xsave) {
-        env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
-        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
+        env->xsave_buf_len = sizeof(struct kvm_xsave);
+        env->xsave_buf = qemu_memalign(4096, env->xsave_buf_len);
+        memset(env->xsave_buf, 0, env->xsave_buf_len);
     }
 
     max_nested_state_len = kvm_max_nested_state_length();
@@ -2469,12 +2470,12 @@ static int kvm_put_fpu(X86CPU *cpu)
 static int kvm_put_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = env->xsave_buf;
+    void *xsave = env->xsave_buf;
 
     if (!has_xsave) {
         return kvm_put_fpu(cpu);
     }
-    x86_cpu_xsave_all_areas(cpu, xsave);
+    x86_cpu_xsave_all_areas(cpu, xsave, env->xsave_buf_len);
 
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_XSAVE, xsave);
 }
@@ -3119,7 +3120,7 @@ static int kvm_get_fpu(X86CPU *cpu)
 static int kvm_get_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = env->xsave_buf;
+    void *xsave = env->xsave_buf;
     int ret;
 
     if (!has_xsave) {
@@ -3130,7 +3131,7 @@ static int kvm_get_xsave(X86CPU *cpu)
     if (ret < 0) {
         return ret;
     }
-    x86_cpu_xrstor_all_areas(cpu, xsave);
+    x86_cpu_xrstor_all_areas(cpu, xsave, env->xsave_buf_len);
 
     return 0;
 }
diff --git a/target/i386/xsave_helper.c b/target/i386/xsave_helper.c
index 818115e7d2..b16c6ac0fe 100644
--- a/target/i386/xsave_helper.c
+++ b/target/i386/xsave_helper.c
@@ -6,14 +6,16 @@
 
 #include "cpu.h"
 
-void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
+void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
     X86XSaveArea *xsave = buf;
-
     uint16_t cwd, swd, twd;
     int i;
-    memset(xsave, 0, sizeof(X86XSaveArea));
+
+    assert(buflen >= sizeof(*xsave));
+
+    memset(xsave, 0, buflen);
     twd = 0;
     swd = env->fpus & ~(7 << 11);
     swd |= (env->fpstt & 7) << 11;
@@ -56,17 +58,17 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
             16 * sizeof env->xmm_regs[16]);
     memcpy(&xsave->pkru_state, &env->pkru, sizeof env->pkru);
 #endif
-
 }
 
-void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
+void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen)
 {
-
     CPUX86State *env = &cpu->env;
     const X86XSaveArea *xsave = buf;
-
     int i;
     uint16_t cwd, swd, twd;
+
+    assert(buflen >= sizeof(*xsave));
+
     cwd = xsave->legacy.fcw;
     swd = xsave->legacy.fsw;
     twd = xsave->legacy.ftw;
@@ -108,5 +110,4 @@ void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
             16 * sizeof env->xmm_regs[16]);
     memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
 #endif
-
 }
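With this change a caller hands the helper both the buffer and its
length, and the helper no longer assumes an X86XSaveArea-sized region.
A minimal sketch of the resulting call pattern; sync_xsave_to_buffer is
a made-up name, and the types come from the patched cpu.h:

    /* Hypothetical caller, illustrating only the new interface. */
    static void sync_xsave_to_buffer(X86CPU *cpu)
    {
        CPUX86State *env = &cpu->env;

        /* The buffer and its length travel together. */
        x86_cpu_xsave_all_areas(cpu, env->xsave_buf, env->xsave_buf_len);
    }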
From patchwork Mon Jul 5 10:46:29 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 5/8] target/i386: Make x86_ext_save_areas visible outside cpu.c
Date: Mon, 5 Jul 2021 11:46:29 +0100
Message-Id: <20210705104632.2902400-6-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Provide visibility of the x86_ext_save_areas array and associated type
outside of cpu.c.

Signed-off-by: David Edmondson
---
 target/i386/cpu.c | 7 +------
 target/i386/cpu.h | 9 +++++++++
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d8f3ab3192..13caa0de50 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1304,12 +1304,7 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = {
 };
 #undef REGISTER
 
-typedef struct ExtSaveArea {
-    uint32_t feature, bits;
-    uint32_t offset, size;
-} ExtSaveArea;
-
-static const ExtSaveArea x86_ext_save_areas[] = {
+const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
     [XSTATE_FP_BIT] = {
         /* x87 FP state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index ada2941c6e..c9c0a34330 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1370,6 +1370,15 @@ QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFF
 QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 
+typedef struct ExtSaveArea {
+    uint32_t feature, bits;
+    uint32_t offset, size;
+} ExtSaveArea;
+
+#define XSAVE_STATE_AREA_COUNT (XSTATE_PKRU_BIT + 1)
+
+extern const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
+
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
     TPR_ACCESS_WRITE,
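With the array and its type exported, accelerator-specific code can walk
the XSAVE components generically rather than naming X86XSaveArea fields.
A sketch only, using the declarations from the patch above; the helper
name xsave_area_end is made up for illustration:

    /* Sketch: compute the end of the populated XSAVE area by walking
     * x86_ext_save_areas[] (assumes QEMU's cpu.h declarations). */
    static uint32_t xsave_area_end(void)
    {
        uint32_t end = 0;
        int i;

        for (i = 0; i < XSAVE_STATE_AREA_COUNT; i++) {
            const ExtSaveArea *esa = &x86_ext_save_areas[i];

            if (esa->size && esa->offset + esa->size > end) {
                end = esa->offset + esa->size;
            }
        }
        return end;
    }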
From patchwork Mon Jul 5 10:46:30 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 6/8] target/i386: Observe XSAVE state area offsets
Date: Mon, 5 Jul 2021 11:46:30 +0100
Message-Id: <20210705104632.2902400-7-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Rather than relying on the X86XSaveArea structure definition directly,
the routines that manipulate the XSAVE state area should observe the
offsets declared in the x86_ext_save_areas array.

Currently the offsets declared in the array are derived from the
structure definition, resulting in no functional change.

Signed-off-by: David Edmondson
---
 target/i386/xsave_helper.c | 262 ++++++++++++++++++++++++++++---------
 1 file changed, 200 insertions(+), 62 deletions(-)

diff --git a/target/i386/xsave_helper.c b/target/i386/xsave_helper.c
index b16c6ac0fe..ac61a96344 100644
--- a/target/i386/xsave_helper.c
+++ b/target/i386/xsave_helper.c
@@ -9,13 +9,20 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = buf;
-    uint16_t cwd, swd, twd;
+    const ExtSaveArea *e, *f;
     int i;
 
-    assert(buflen >= sizeof(*xsave));
+    X86LegacyXSaveArea *legacy;
+    X86XSaveHeader *header;
+    uint16_t cwd, swd, twd;
+
+    memset(buf, 0, buflen);
+
+    e = &x86_ext_save_areas[XSTATE_FP_BIT];
+
+    legacy = buf + e->offset;
+    header = buf + e->offset + sizeof(*legacy);
 
-    memset(xsave, 0, buflen);
     twd = 0;
     swd = env->fpus & ~(7 << 11);
     swd |= (env->fpstt & 7) << 11;
@@ -23,91 +30,222 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
     for (i = 0; i < 8; ++i) {
         twd |= (!env->fptags[i]) << i;
     }
-    xsave->legacy.fcw = cwd;
-    xsave->legacy.fsw = swd;
-    xsave->legacy.ftw = twd;
-    xsave->legacy.fpop = env->fpop;
-    xsave->legacy.fpip = env->fpip;
-    xsave->legacy.fpdp = env->fpdp;
-    memcpy(&xsave->legacy.fpregs, env->fpregs,
-            sizeof env->fpregs);
-    xsave->legacy.mxcsr = env->mxcsr;
-    xsave->header.xstate_bv = env->xstate_bv;
-    memcpy(&xsave->bndreg_state.bnd_regs, env->bnd_regs,
-            sizeof env->bnd_regs);
-    xsave->bndcsr_state.bndcsr = env->bndcs_regs;
-    memcpy(&xsave->opmask_state.opmask_regs, env->opmask_regs,
-            sizeof env->opmask_regs);
+    legacy->fcw = cwd;
+    legacy->fsw = swd;
+    legacy->ftw = twd;
+    legacy->fpop = env->fpop;
+    legacy->fpip = env->fpip;
+    legacy->fpdp = env->fpdp;
+    memcpy(&legacy->fpregs, env->fpregs,
+           sizeof(env->fpregs));
+    legacy->mxcsr = env->mxcsr;
 
     for (i = 0; i < CPU_NB_REGS; i++) {
-        uint8_t *xmm = xsave->legacy.xmm_regs[i];
-        uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        uint8_t *xmm = legacy->xmm_regs[i];
+
         stq_p(xmm, env->xmm_regs[i].ZMM_Q(0));
-        stq_p(xmm+8, env->xmm_regs[i].ZMM_Q(1));
-        stq_p(ymmh, env->xmm_regs[i].ZMM_Q(2));
-        stq_p(ymmh+8, env->xmm_regs[i].ZMM_Q(3));
-        stq_p(zmmh, env->xmm_regs[i].ZMM_Q(4));
-        stq_p(zmmh+8, env->xmm_regs[i].ZMM_Q(5));
-        stq_p(zmmh+16, env->xmm_regs[i].ZMM_Q(6));
-        stq_p(zmmh+24, env->xmm_regs[i].ZMM_Q(7));
+        stq_p(xmm + 8, env->xmm_regs[i].ZMM_Q(1));
+    }
+
+    header->xstate_bv = env->xstate_bv;
+
+    e = &x86_ext_save_areas[XSTATE_YMM_BIT];
+    if (e->size && e->offset) {
+        XSaveAVX *avx;
+
+        avx = buf + e->offset;
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            uint8_t *ymmh = avx->ymmh[i];
+
+            stq_p(ymmh, env->xmm_regs[i].ZMM_Q(2));
+            stq_p(ymmh + 8, env->xmm_regs[i].ZMM_Q(3));
+        }
+    }
+
+    e = &x86_ext_save_areas[XSTATE_BNDREGS_BIT];
+    if (e->size && e->offset) {
+        XSaveBNDREG *bndreg;
+        XSaveBNDCSR *bndcsr;
+
+        f = &x86_ext_save_areas[XSTATE_BNDCSR_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        bndreg = buf + e->offset;
+        bndcsr = buf + f->offset;
+
+        memcpy(&bndreg->bnd_regs, env->bnd_regs,
+               sizeof(env->bnd_regs));
+        bndcsr->bndcsr = env->bndcs_regs;
     }
 
+    e = &x86_ext_save_areas[XSTATE_OPMASK_BIT];
+    if (e->size && e->offset) {
+        XSaveOpmask *opmask;
+        XSaveZMM_Hi256 *zmm_hi256;
+#ifdef TARGET_X86_64
+        XSaveHi16_ZMM *hi16_zmm;
+#endif
+
+        f = &x86_ext_save_areas[XSTATE_ZMM_Hi256_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        opmask = buf + e->offset;
+        zmm_hi256 = buf + f->offset;
+
+        memcpy(&opmask->opmask_regs, env->opmask_regs,
+               sizeof(env->opmask_regs));
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            uint8_t *zmmh = zmm_hi256->zmm_hi256[i];
+
+            stq_p(zmmh, env->xmm_regs[i].ZMM_Q(4));
+            stq_p(zmmh + 8, env->xmm_regs[i].ZMM_Q(5));
+            stq_p(zmmh + 16, env->xmm_regs[i].ZMM_Q(6));
+            stq_p(zmmh + 24, env->xmm_regs[i].ZMM_Q(7));
+        }
+
 #ifdef TARGET_X86_64
-    memcpy(&xsave->hi16_zmm_state.hi16_zmm, &env->xmm_regs[16],
-            16 * sizeof env->xmm_regs[16]);
-    memcpy(&xsave->pkru_state, &env->pkru, sizeof env->pkru);
+        f = &x86_ext_save_areas[XSTATE_Hi16_ZMM_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        hi16_zmm = buf + f->offset;
+
+        memcpy(&hi16_zmm->hi16_zmm, &env->xmm_regs[16],
+               16 * sizeof(env->xmm_regs[16]));
+#endif
+    }
+
+#ifdef TARGET_X86_64
+    e = &x86_ext_save_areas[XSTATE_PKRU_BIT];
+    if (e->size && e->offset) {
+        XSavePKRU *pkru = buf + e->offset;
+
+        memcpy(pkru, &env->pkru, sizeof(env->pkru));
+    }
 #endif
 }
 
 void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
-    const X86XSaveArea *xsave = buf;
+    const ExtSaveArea *e, *f, *g;
     int i;
+
+    const X86LegacyXSaveArea *legacy;
+    const X86XSaveHeader *header;
     uint16_t cwd, swd, twd;
 
-    assert(buflen >= sizeof(*xsave));
+    e = &x86_ext_save_areas[XSTATE_FP_BIT];
 
-    cwd = xsave->legacy.fcw;
-    swd = xsave->legacy.fsw;
-    twd = xsave->legacy.ftw;
-    env->fpop = xsave->legacy.fpop;
+    legacy = buf + e->offset;
+    header = buf + e->offset + sizeof(*legacy);
+
+    cwd = legacy->fcw;
+    swd = legacy->fsw;
+    twd = legacy->ftw;
+    env->fpop = legacy->fpop;
     env->fpstt = (swd >> 11) & 7;
     env->fpus = swd;
     env->fpuc = cwd;
     for (i = 0; i < 8; ++i) {
         env->fptags[i] = !((twd >> i) & 1);
     }
-    env->fpip = xsave->legacy.fpip;
-    env->fpdp = xsave->legacy.fpdp;
-    env->mxcsr = xsave->legacy.mxcsr;
-    memcpy(env->fpregs, &xsave->legacy.fpregs,
-            sizeof env->fpregs);
-    env->xstate_bv = xsave->header.xstate_bv;
-    memcpy(env->bnd_regs, &xsave->bndreg_state.bnd_regs,
-            sizeof env->bnd_regs);
-    env->bndcs_regs = xsave->bndcsr_state.bndcsr;
-    memcpy(env->opmask_regs, &xsave->opmask_state.opmask_regs,
-            sizeof env->opmask_regs);
+    env->fpip = legacy->fpip;
+    env->fpdp = legacy->fpdp;
+    env->mxcsr = legacy->mxcsr;
+    memcpy(env->fpregs, &legacy->fpregs,
+           sizeof(env->fpregs));
 
     for (i = 0; i < CPU_NB_REGS; i++) {
-        const uint8_t *xmm = xsave->legacy.xmm_regs[i];
-        const uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        const uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        const uint8_t *xmm = legacy->xmm_regs[i];
+
         env->xmm_regs[i].ZMM_Q(0) = ldq_p(xmm);
-        env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm+8);
-        env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
-        env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh+8);
-        env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
-        env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh+8);
-        env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh+16);
-        env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh+24);
+        env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm + 8);
+    }
+
+    env->xstate_bv = header->xstate_bv;
+
+    e = &x86_ext_save_areas[XSTATE_YMM_BIT];
+    if (e->size && e->offset) {
+        const XSaveAVX *avx;
+
+        avx = buf + e->offset;
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *ymmh = avx->ymmh[i];
+
+            env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
+            env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh + 8);
+        }
+    }
+
+    e = &x86_ext_save_areas[XSTATE_BNDREGS_BIT];
+    if (e->size && e->offset) {
+        const XSaveBNDREG *bndreg;
+        const XSaveBNDCSR *bndcsr;
+
+        f = &x86_ext_save_areas[XSTATE_BNDCSR_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        bndreg = buf + e->offset;
+        bndcsr = buf + f->offset;
+
+        memcpy(env->bnd_regs, &bndreg->bnd_regs,
+               sizeof(env->bnd_regs));
+        env->bndcs_regs = bndcsr->bndcsr;
     }
 
+    e = &x86_ext_save_areas[XSTATE_OPMASK_BIT];
+    if (e->size && e->offset) {
+        const XSaveOpmask *opmask;
+        const XSaveZMM_Hi256 *zmm_hi256;
 #ifdef TARGET_X86_64
-    memcpy(&env->xmm_regs[16], &xsave->hi16_zmm_state.hi16_zmm,
-            16 * sizeof env->xmm_regs[16]);
-    memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
+        const XSaveHi16_ZMM *hi16_zmm;
+#endif
+
+        f = &x86_ext_save_areas[XSTATE_ZMM_Hi256_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        g = &x86_ext_save_areas[XSTATE_Hi16_ZMM_BIT];
+        assert(g->size);
+        assert(g->offset);
+
+        opmask = buf + e->offset;
+        zmm_hi256 = buf + f->offset;
+#ifdef TARGET_X86_64
+        hi16_zmm = buf + g->offset;
+#endif
+
+        memcpy(env->opmask_regs, &opmask->opmask_regs,
+               sizeof(env->opmask_regs));
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *zmmh = zmm_hi256->zmm_hi256[i];
+
+            env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
+            env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh + 8);
+            env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh + 16);
+            env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh + 24);
+        }
+
+#ifdef TARGET_X86_64
+        memcpy(&env->xmm_regs[16], &hi16_zmm->hi16_zmm,
+               16 * sizeof(env->xmm_regs[16]));
+#endif
+    }
+
+#ifdef TARGET_X86_64
+    e = &x86_ext_save_areas[XSTATE_PKRU_BIT];
+    if (e->size && e->offset) {
+        const XSavePKRU *pkru;
+
+        pkru = buf + e->offset;
+        memcpy(&env->pkru, pkru, sizeof(env->pkru));
+    }
#endif
 }
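The access pattern the patch adopts locates each component through
x86_ext_save_areas[] rather than through X86XSaveArea fields, and treats
a component whose size or offset is zero as absent. A sketch of that
pattern only; xsave_component is a made-up helper name, and the
declarations are assumed to come from QEMU's cpu.h:

    /* Sketch: locate an XSAVE component in a state buffer by its
     * x86_ext_save_areas[] entry, or return NULL if it is absent. */
    static void *xsave_component(void *buf, int bit)
    {
        const ExtSaveArea *esa = &x86_ext_save_areas[bit];

        return (esa->size && esa->offset) ? (char *)buf + esa->offset : NULL;
    }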
From patchwork Mon Jul 5 10:46:31 2021
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 7/8] target/i386: Populate x86_ext_save_areas offsets using cpuid where possible
Date: Mon, 5 Jul 2021 11:46:31 +0100
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Rather than relying on the X86XSaveArea structure definition, determine
the offset of XSAVE state areas using CPUID leaf 0xd where possible
(KVM and HVF).
Signed-off-by: David Edmondson
---
 target/i386/cpu.c         | 13 +------------
 target/i386/cpu.h         |  2 +-
 target/i386/hvf/hvf-cpu.c | 34 ++++++++++++++++++++++++++++++++++
 target/i386/kvm/kvm-cpu.c | 36 ++++++++++++++++++++++++++++++++++++
 target/i386/tcg/tcg-cpu.c | 20 ++++++++++++++++++++
 5 files changed, 92 insertions(+), 13 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 13caa0de50..5f595a0d7e 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1304,48 +1304,37 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = {
 };
 #undef REGISTER
 
-const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
+ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
     [XSTATE_FP_BIT] = {
         /* x87 FP state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
-        /* x87 state is in the legacy region of the XSAVE area */
-        .offset = 0,
         .size = sizeof(X86LegacyXSaveArea) + sizeof(X86XSaveHeader),
     },
     [XSTATE_SSE_BIT] = {
         /* SSE state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
-        /* SSE state is in the legacy region of the XSAVE area */
-        .offset = 0,
         .size = sizeof(X86LegacyXSaveArea) + sizeof(X86XSaveHeader),
     },
     [XSTATE_YMM_BIT] =
           { .feature = FEAT_1_ECX, .bits = CPUID_EXT_AVX,
-            .offset = offsetof(X86XSaveArea, avx_state),
             .size = sizeof(XSaveAVX) },
     [XSTATE_BNDREGS_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndreg_state),
             .size = sizeof(XSaveBNDREG) },
     [XSTATE_BNDCSR_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndcsr_state),
             .size = sizeof(XSaveBNDCSR) },
     [XSTATE_OPMASK_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, opmask_state),
             .size = sizeof(XSaveOpmask) },
     [XSTATE_ZMM_Hi256_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, zmm_hi256_state),
             .size = sizeof(XSaveZMM_Hi256) },
     [XSTATE_Hi16_ZMM_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, hi16_zmm_state),
             .size = sizeof(XSaveHi16_ZMM) },
     [XSTATE_PKRU_BIT] =
           { .feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_PKU,
-            .offset = offsetof(X86XSaveArea, pkru_state),
             .size = sizeof(XSavePKRU) },
 };
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c9c0a34330..96b672f8bd 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1377,7 +1377,7 @@ typedef struct ExtSaveArea {
 
 #define XSAVE_STATE_AREA_COUNT (XSTATE_PKRU_BIT + 1)
 
-extern const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
+extern ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
 
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
diff --git a/target/i386/hvf/hvf-cpu.c b/target/i386/hvf/hvf-cpu.c
index 8fbc423888..3c7c44698f 100644
--- a/target/i386/hvf/hvf-cpu.c
+++ b/target/i386/hvf/hvf-cpu.c
@@ -30,6 +30,38 @@ static void hvf_cpu_max_instance_init(X86CPU *cpu)
             hvf_get_supported_cpuid(0xC0000000, 0, R_EAX);
 }
 
+static void hvf_cpu_xsave_init(void)
+{
+    int i;
+
+    /*
+     * The allocated storage must be large enough for all of the
+     * possible XSAVE state components.
+     */
+    assert(hvf_get_supported_cpuid(0xd, 0, R_ECX) <= 4096);
+
+    /* x87 state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_FP_BIT].offset = 0;
+    /* SSE state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_SSE_BIT].offset = 0;
+
+    for (i = XSTATE_SSE_BIT + 1; i < XSAVE_STATE_AREA_COUNT; i++) {
+        ExtSaveArea *esa = &x86_ext_save_areas[i];
+
+        if (esa->size) {
+            int sz = hvf_get_supported_cpuid(0xd, i, R_EAX);
+
+            if (sz != 0) {
+                assert(esa->size == sz);
+
+                esa->offset = hvf_get_supported_cpuid(0xd, i, R_EBX);
+                fprintf(stderr, "%s: state area %d: offset 0x%x, size 0x%x\n",
+                        __func__, i, esa->offset, esa->size);
+            }
+        }
+    }
+}
+
 static void hvf_cpu_instance_init(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
@@ -42,6 +74,8 @@ static void hvf_cpu_instance_init(CPUState *cs)
     if (cpu->max_features) {
         hvf_cpu_max_instance_init(cpu);
     }
+
+    hvf_cpu_xsave_init();
 }
 
 static void hvf_cpu_accel_class_init(ObjectClass *oc, void *data)
diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
index 00369c2000..f474cc5b83 100644
--- a/target/i386/kvm/kvm-cpu.c
+++ b/target/i386/kvm/kvm-cpu.c
@@ -122,6 +122,40 @@ static void kvm_cpu_max_instance_init(X86CPU *cpu)
             kvm_arch_get_supported_cpuid(s, 0xC0000000, 0, R_EAX);
 }
 
+static void kvm_cpu_xsave_init(void)
+{
+    KVMState *s = kvm_state;
+    int i;
+
+    /*
+     * The allocated storage must be large enough for all of the
+     * possible XSAVE state components.
+     */
+    assert(sizeof(struct kvm_xsave) >=
+           kvm_arch_get_supported_cpuid(s, 0xd, 0, R_ECX));
+
+    /* x87 state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_FP_BIT].offset = 0;
+    /* SSE state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_SSE_BIT].offset = 0;
+
+    for (i = XSTATE_SSE_BIT + 1; i < XSAVE_STATE_AREA_COUNT; i++) {
+        ExtSaveArea *esa = &x86_ext_save_areas[i];
+
+        if (esa->size) {
+            int sz = kvm_arch_get_supported_cpuid(s, 0xd, i, R_EAX);
+
+            if (sz != 0) {
+                assert(esa->size == sz);
+
+                esa->offset = kvm_arch_get_supported_cpuid(s, 0xd, i, R_EBX);
+                fprintf(stderr, "%s: state area %d: offset 0x%x, size 0x%x\n",
+                        __func__, i, esa->offset, esa->size);
+            }
+        }
+    }
+}
+
 static void kvm_cpu_instance_init(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
@@ -141,6 +175,8 @@ static void kvm_cpu_instance_init(CPUState *cs)
     if (cpu->max_features) {
         kvm_cpu_max_instance_init(cpu);
     }
+
+    kvm_cpu_xsave_init();
 }
 
 static void kvm_cpu_accel_class_init(ObjectClass *oc, void *data)
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index 014ebea2f6..e96ec9bbcc 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -80,6 +80,24 @@ static void tcg_cpu_class_init(CPUClass *cc)
     cc->init_accel_cpu = tcg_cpu_init_ops;
 }
 
+static void tcg_cpu_xsave_init(void)
+{
+#define XO(bit, field) \
+    x86_ext_save_areas[bit].offset = offsetof(X86XSaveArea, field);
+
+    XO(XSTATE_FP_BIT, legacy);
+    XO(XSTATE_SSE_BIT, legacy);
+    XO(XSTATE_YMM_BIT, avx_state);
+    XO(XSTATE_BNDREGS_BIT, bndreg_state);
+    XO(XSTATE_BNDCSR_BIT, bndcsr_state);
+    XO(XSTATE_OPMASK_BIT, opmask_state);
+    XO(XSTATE_ZMM_Hi256_BIT, zmm_hi256_state);
+    XO(XSTATE_Hi16_ZMM_BIT, hi16_zmm_state);
+    XO(XSTATE_PKRU_BIT, pkru_state);
+
+#undef XO
+}
+
 /*
  * TCG-specific defaults that override all CPU models when using TCG
  */
@@ -93,6 +111,8 @@ static void tcg_cpu_instance_init(CPUState *cs)
     X86CPU *cpu = X86_CPU(cs);
     /* Special cases not set in the X86CPUDefinition structs: */
     x86_cpu_apply_props(cpu, tcg_default_props);
+
+    tcg_cpu_xsave_init();
 }
 
 static void tcg_cpu_accel_class_init(ObjectClass *oc, void *data)
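With the offsets now filled in at accelerator init time, a consumer locates
a state component in an XSAVE image through its ExtSaveArea entry rather
than through the fixed X86XSaveArea layout, mirroring the size/offset checks
in the earlier accessor patches. A minimal sketch, with a hypothetical
helper name and assuming the declarations from target/i386/cpu.h are in
scope:

/* Sketch only, not from the series: find a component inside an XSAVE buffer. */
static const void *xsave_component(const void *xsave_buf, int bit)
{
    const ExtSaveArea *esa = &x86_ext_save_areas[bit];

    if (!esa->size || !esa->offset) {
        return NULL;    /* component absent, or offset not yet populated */
    }
    return (const uint8_t *)xsave_buf + esa->offset;
}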

From patchwork Mon Jul 5 10:46:32 2021
X-Patchwork-Submitter: David Edmondson
X-Patchwork-Id: 12358821
From: David Edmondson
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Michael Roth, Marcelo Tosatti,
    Richard Henderson, Cameron Esfahani, David Edmondson, babu.moger@amd.com,
    Roman Bolshakov, Paolo Bonzini
Subject: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
Date: Mon, 5 Jul 2021 11:46:32 +0100
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
In-Reply-To: <20210705104632.2902400-1-david.edmondson@oracle.com>
References: <20210705104632.2902400-1-david.edmondson@oracle.com>

Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to a
TCG specific header.

Signed-off-by: David Edmondson
---
 target/i386/cpu.h            | 57 ------------------------------------
 target/i386/tcg/fpu_helper.c |  1 +
 target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 57 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 96b672f8bd..0f7ddbfeae 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
     uint32_t padding;
 } XSavePKRU;
 
-#define XSAVE_FCW_FSW_OFFSET    0x000
-#define XSAVE_FTW_FOP_OFFSET    0x004
-#define XSAVE_CWD_RIP_OFFSET    0x008
-#define XSAVE_CWD_RDP_OFFSET    0x010
-#define XSAVE_MXCSR_OFFSET      0x018
-#define XSAVE_ST_SPACE_OFFSET   0x020
-#define XSAVE_XMM_SPACE_OFFSET  0x0a0
-#define XSAVE_XSTATE_BV_OFFSET  0x200
-#define XSAVE_AVX_OFFSET        0x240
-#define XSAVE_BNDREG_OFFSET     0x3c0
-#define XSAVE_BNDCSR_OFFSET     0x400
-#define XSAVE_OPMASK_OFFSET     0x440
-#define XSAVE_ZMM_HI256_OFFSET  0x480
-#define XSAVE_HI16_ZMM_OFFSET   0x680
-#define XSAVE_PKRU_OFFSET       0xa80
-
-typedef struct X86XSaveArea {
-    X86LegacyXSaveArea legacy;
-    X86XSaveHeader header;
-
-    /* Extended save areas: */
-
-    /* AVX State: */
-    XSaveAVX avx_state;
-
-    /* Ensure that XSaveBNDREG is properly aligned. */
-    uint8_t padding[XSAVE_BNDREG_OFFSET
-                    - sizeof(X86LegacyXSaveArea)
-                    - sizeof(X86XSaveHeader)
-                    - sizeof(XSaveAVX)];
-
-    /* MPX State: */
-    XSaveBNDREG bndreg_state;
-    XSaveBNDCSR bndcsr_state;
-    /* AVX-512 State: */
-    XSaveOpmask opmask_state;
-    XSaveZMM_Hi256 zmm_hi256_state;
-    XSaveHi16_ZMM hi16_zmm_state;
-    /* PKRU State: */
-    XSavePKRU pkru_state;
-} X86XSaveArea;
-
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
@@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
-
 typedef struct ExtSaveArea {
     uint32_t feature, bits;
     uint32_t offset, size;
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index 4e11965067..74bbe94b80 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -20,6 +20,7 @@
 #include "qemu/osdep.h"
 #include
 #include "cpu.h"
+#include "tcg-cpu.h"
 #include "exec/helper-proto.h"
 #include "fpu/softfloat.h"
 #include "fpu/softfloat-macros.h"
diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
index 36bd300af0..53a8494455 100644
--- a/target/i386/tcg/tcg-cpu.h
+++ b/target/i386/tcg/tcg-cpu.h
@@ -19,6 +19,63 @@
 #ifndef TCG_CPU_H
 #define TCG_CPU_H
 
+#define XSAVE_FCW_FSW_OFFSET    0x000
+#define XSAVE_FTW_FOP_OFFSET    0x004
+#define XSAVE_CWD_RIP_OFFSET    0x008
+#define XSAVE_CWD_RDP_OFFSET    0x010
+#define XSAVE_MXCSR_OFFSET      0x018
+#define XSAVE_ST_SPACE_OFFSET   0x020
+#define XSAVE_XMM_SPACE_OFFSET  0x0a0
+#define XSAVE_XSTATE_BV_OFFSET  0x200
+#define XSAVE_AVX_OFFSET        0x240
+#define XSAVE_BNDREG_OFFSET     0x3c0
+#define XSAVE_BNDCSR_OFFSET     0x400
+#define XSAVE_OPMASK_OFFSET     0x440
+#define XSAVE_ZMM_HI256_OFFSET  0x480
+#define XSAVE_HI16_ZMM_OFFSET   0x680
+#define XSAVE_PKRU_OFFSET       0xa80
+
+typedef struct X86XSaveArea {
+    X86LegacyXSaveArea legacy;
+    X86XSaveHeader header;
+
+    /* Extended save areas: */
+
+    /* AVX State: */
+    XSaveAVX avx_state;
+
+    /* Ensure that XSaveBNDREG is properly aligned. */
+    uint8_t padding[XSAVE_BNDREG_OFFSET
+                    - sizeof(X86LegacyXSaveArea)
+                    - sizeof(X86XSaveHeader)
+                    - sizeof(XSaveAVX)];
+
+    /* MPX State: */
+    XSaveBNDREG bndreg_state;
+    XSaveBNDCSR bndcsr_state;
+    /* AVX-512 State: */
+    XSaveOpmask opmask_state;
+    XSaveZMM_Hi256 zmm_hi256_state;
+    XSaveHi16_ZMM hi16_zmm_state;
+    /* PKRU State: */
+    XSavePKRU pkru_state;
+} X86XSaveArea;
+
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
+
 bool tcg_cpu_realizefn(CPUState *cs, Error **errp);
 
 #endif /* TCG_CPU_H */
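The QEMU_BUILD_BUG_ON() lines that travel with the structure are
compile-time assertions: they keep the TCG-only X86XSaveArea honest against
the architectural offsets even though nothing else checks it at runtime any
more. For reference, a roughly equivalent check in plain C11 looks like the
fragment below; it reuses the names from the header purely for illustration
and assumes they are in scope.

/* Illustration only: the same layout guarantee via C11 _Static_assert. */
#include <stddef.h>   /* offsetof */

_Static_assert(offsetof(X86XSaveArea, avx_state) == XSAVE_AVX_OFFSET,
               "AVX state must sit at the architectural XSAVE offset");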