From patchwork Mon Apr 24 08:20:36 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13221833
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
CC: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH v2 1/3] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:36 +0300
Message-ID: <20230424082038.541122-2-xenia.ragiadakou@amd.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
MIME-Version: 1.0
This change aims to render the control interface of MSR intercepts
identical between SVM and VMX code, so that the control of the MSR
intercepts in common code can be done through an hvm_funcs callback.

Create two new functions:
- svm_set_msr_intercept(), which enables interception of read/write
  accesses to the given MSR, by setting the corresponding read/write
  bits in the MSRPM based on the flags
- svm_clear_msr_intercept(), which disables interception of read/write
  accesses to the given MSR, by clearing the corresponding read/write
  bits in the MSRPM based on the flags

More specifically:
- if the flag is MSR_R, the functions {set,clear} the MSRPM bit that
  controls read access to the MSR
- if the flag is MSR_W, the functions {set,clear} the MSRPM bit that
  controls write access to the MSR
- if the flag is MSR_RW, the functions {set,clear} both MSRPM bits

Place the definitions of the flags in asm/hvm/hvm.h, because they are
intended to be used by VMX code as well.

Remove svm_intercept_msr() and the MSR_INTERCEPT_* definitions, and use
the new functions and flags instead.

The macros svm_{en,dis}able_intercept_for_msr() are retained for now,
but they will eventually be open-coded in a follow-up patch: only one of
them is actually used, and the meaning of "enabling/disabling" MSR
intercepts is not consistent throughout the code (for instance, the
hvm_func enable_msr_interception() sets only the write MSRPM bit, not
both). In the meantime, take the opportunity to remove excess
parentheses.

No functional change intended.
Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
- restore BUG_ON(), reported by Jan
- coding style fixes, reported by Jan
- remove excess parentheses from macros, suggested by Jan
- change from int to unsigned int the type of param flags, reported by Jan
- change from uint32_t to unsigned int the type of param msr, reported by Jan

 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 74 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 15 ++---
 4 files changed, 64 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 18266b9521..da8e906972 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,8 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -168,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 59a6e88dff..3ee0805ff3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,23 +277,33 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
     BUG_ON(msr_bit == NULL);
+
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
         __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-        __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    BUG_ON(msr_bit == NULL);
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -302,7 +312,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -319,10 +332,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -350,10 +363,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
    if ( v->domain->arch.cpuid->extd.dbext )
    {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -584,22 +597,29 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
      * setting and avoid having to perform a rdmsr on vmexit to get the guest
      * setting even if VIRT_SSBD is offered to Xen itself.
      */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 04cbd4ff24..5740a64281 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -250,6 +250,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R  BIT(0, U)
+#define MSR_W  BIT(1, U)
+#define MSR_RW (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index a1a8a7fd25..94deb0a236 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,13 +603,14 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           unsigned int flags);
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept(v, msr, MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept(v, msr, MSR_RW)
 
 /*
  * VMCB accessor functions.
From patchwork Mon Apr 24 08:20:37 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13221835
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
CC: Xenia Ragiadakou, Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH v2 2/3] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
Date: Mon, 24 Apr 2023 11:20:37 +0300
Message-ID: <20230424082038.541122-3-xenia.ragiadakou@amd.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
MIME-Version: 1.0
Replace enum vmx_msr_intercept_type with the MSR access flags, defined
in hvm.h, so that the functions {svm,vmx}_{set,clear}_msr_intercept()
share the same prototype.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
- change from int to unsigned int the type of param type, reported by Jan

 xen/arch/x86/cpu/vpmu_intel.c           | 24 +++++++-------
 xen/arch/x86/hvm/vmx/vmcs.c             | 36 ++++++++++----------
 xen/arch/x86/hvm/vmx/vmx.c              | 44 ++++++++++++-------------
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 12 ++-----
 4 files changed, 54 insertions(+), 62 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 35e350578b..395830e803 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index b209563625..e7b67313a2 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -892,7 +892,7 @@ static void vmx_set_host_env(struct vcpu *v)
 }
 
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type)
+                             unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -906,17 +906,17 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -924,7 +924,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 }
 
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type)
+                           unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
@@ -934,17 +934,17 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -1151,17 +1151,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, MSR_RW);
 
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
-            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d..8a873147a5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -791,7 +791,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
      */
     if ( cp->feat.ibrsb )
     {
-        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
         if ( rc )
@@ -799,7 +799,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     }
     else
     {
-        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
         if ( rc && rc != -ESRCH )
@@ -809,20 +809,20 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
-        vmx_clear_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 
     /* MSR_FLUSH_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.l1d_flush )
-        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
 
     if ( cp->feat.pks )
-        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PKRS, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PKRS, MSR_RW);
 
  out:
     vmx_vmcs_exit(v);
@@ -1418,7 +1418,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1429,7 +1429,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !is_iommu_enabled(v->domain) || iommu_snoop )
-                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -1883,9 +1883,9 @@ static void cf_check vmx_update_guest_efer(struct vcpu *v)
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
     if ( guest_efer == v->arch.hvm.guest_efer )
-        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_EFER, MSR_R);
     else
-        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_EFER, MSR_R);
 }
 
 static void nvmx_enqueue_n2_exceptions(struct vcpu *v,
@@ -2312,7 +2312,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
+        vmx_set_msr_intercept(v, msr, MSR_W);
 }
 
 static void cf_check vmx_vcpu_update_eptp(struct vcpu *v)
@@ -3479,17 +3479,17 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
        {
            for ( msr = MSR_X2APIC_FIRST; msr <= MSR_X2APIC_LAST; msr++ )
-                vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+                vmx_clear_msr_intercept(v, msr, MSR_R);
 
-            vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
-            vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
-            vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_PPR, MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, MSR_R);
        }
        if ( cpu_has_vmx_virtual_intr_delivery )
        {
-            vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
-            vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
-            vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, MSR_W);
        }
     }
     else
@@ -3500,7 +3500,7 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
                SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_X2APIC_FIRST; msr <= MSR_X2APIC_LAST; msr++ )
-            vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, msr, MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3636,7 +3636,7 @@ static int cf_check vmx_msr_write_intercept(
             return X86EMUL_OKAY;
         }
 
-        vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
+
vmx_clear_msr_intercept(v, lbr->base + i, MSR_RW); } }

diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 51641caa9f..af6a95b5d9 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -633,18 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-
-/* MSR intercept bitmap infrastructure. */
-enum vmx_msr_intercept_type {
-    VMX_MSR_R = 1,
-    VMX_MSR_W = 2,
-    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
-};
-
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type);
+                             unsigned int type);
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type);
+                           unsigned int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);

From patchwork Mon Apr 24 08:20:38 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13221836
From: Xenia Ragiadakou
To:
CC: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 3/3] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:38 +0300
Message-ID: <20230424082038.541122-4-xenia.ragiadakou@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
Add hvm_funcs hooks for {set,clear}_msr_intercept() for controlling the
msr intercept in common vpmu code.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
 - change the parameter types to unsigned int

 xen/arch/x86/cpu/vpmu_amd.c             | 10 ++++-----
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++++++++----------
 xen/arch/x86/hvm/svm/svm.c              |  7 +++---
 xen/arch/x86/hvm/vmx/vmcs.c             |  8 +++----
 xen/arch/x86/hvm/vmx/vmx.c              |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      | 30 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h |  8 +++----
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  8 +++----
 8 files changed, 65 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index da8e906972..77dee08588 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,9 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_clear_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_W);
-        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
+        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
+        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -169,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_set_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
+        hvm_set_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 395830e803..ed32d4d754 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly.
*/ for ( i = 0; i < fixed_pmc_cnt; i++ ) - vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW); + hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW); for ( i = 0; i < arch_pmc_cnt; i++ ) { - vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW); + hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW); if ( full_width_write ) - vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW); + hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW); } /* Allow Read PMU Non-global Controls Directly. */ for ( i = 0; i < arch_pmc_cnt; i++ ) - vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R); + hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R); - vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R); - vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R); + hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R); + hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R); } static void core2_vpmu_unset_msr_bitmap(struct vcpu *v) @@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v) unsigned int i; for ( i = 0; i < fixed_pmc_cnt; i++ ) - vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW); + hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW); for ( i = 0; i < arch_pmc_cnt; i++ ) { - vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW); + hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW); if ( full_width_write ) - vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW); + hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW); } for ( i = 0; i < arch_pmc_cnt; i++ ) - vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R); + hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R); - vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R); - vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R); + hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R); + hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R); } static inline void __core2_vpmu_save(struct vcpu *v) 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c index 3ee0805ff3..cbd8eff270 100644 --- a/xen/arch/x86/hvm/svm/svm.c +++ b/xen/arch/x86/hvm/svm/svm.c @@ -277,7 +277,8 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr) return msr_bit; } -void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags) +void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags) { unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr); @@ -291,8 +292,8 @@ void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags) __set_bit(msr * 2 + 1, msr_bit); } -void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int flags) +void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags) { unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr); diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c index e7b67313a2..c051bcb91b 100644 --- a/xen/arch/x86/hvm/vmx/vmcs.c +++ b/xen/arch/x86/hvm/vmx/vmcs.c @@ -891,8 +891,8 @@ static void vmx_set_host_env(struct vcpu *v) (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code); } -void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int type) +void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int type) { struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap; struct domain *d = v->domain; @@ -923,8 +923,8 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, ASSERT(!"MSR out of range for interception\n"); } -void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int type) +void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int type) { struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap; diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c index 8a873147a5..6a33e92b0a 100644 --- a/xen/arch/x86/hvm/vmx/vmx.c +++ 
b/xen/arch/x86/hvm/vmx/vmx.c @@ -2742,6 +2742,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = { .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources, .update_vlapic_mode = vmx_vlapic_msr_changed, .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m, + .set_msr_intercept = vmx_set_msr_intercept, + .clear_msr_intercept = vmx_clear_msr_intercept, .enable_msr_interception = vmx_enable_msr_interception, .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp, .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve, diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index 5740a64281..96ff235614 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -213,6 +213,10 @@ struct hvm_function_table { paddr_t *L1_gpa, unsigned int *page_order, uint8_t *p2m_acc, struct npfec npfec); + void (*set_msr_intercept)(struct vcpu *v, unsigned int msr, + unsigned int flags); + void (*clear_msr_intercept)(struct vcpu *v, unsigned int msr, + unsigned int flags); void (*enable_msr_interception)(struct domain *d, uint32_t msr); /* Alternate p2m */ @@ -647,6 +651,20 @@ static inline int nhvm_hap_walk_L1_p2m( v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec); } +static inline void hvm_set_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags) +{ + if ( hvm_funcs.set_msr_intercept ) + alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags); +} + +static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags) +{ + if ( hvm_funcs.clear_msr_intercept ) + alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags); +} + static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr) { alternative_vcall(hvm_funcs.enable_msr_interception, d, msr); @@ -905,6 +923,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val) ASSERT_UNREACHABLE(); } +static inline void hvm_set_msr_intercept(struct vcpu *v, 
unsigned int msr, + unsigned int flags) +{ + ASSERT_UNREACHABLE(); +} + +static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags) +{ + ASSERT_UNREACHABLE(); +} + #define is_viridian_domain(d) ((void)(d), false) #define is_viridian_vcpu(v) ((void)(v), false) #define has_viridian_time_ref_count(d) ((void)(d), false) diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h index 94deb0a236..5e84b4f4c1 100644 --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h @@ -603,10 +603,10 @@ void svm_destroy_vmcb(struct vcpu *v); void setup_vmcb_dump(void); -void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int flags); -void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int flags); +void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags); +void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int flags); #define svm_disable_intercept_for_msr(v, msr) \ svm_clear_msr_intercept(v, msr, MSR_RW) #define svm_enable_intercept_for_msr(v, msr) \ diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h index af6a95b5d9..7f7d785977 100644 --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h @@ -633,10 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr, return 0; } -void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int type); -void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, - unsigned int type); +void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int type); +void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, + unsigned int type); void vmx_vmcs_switch(paddr_t from, paddr_t to); void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector); void vmx_clear_eoi_exit_bitmap(struct vcpu 
*v, u8 vector);