From patchwork Wed Aug 2 09:44:29 2023
X-Patchwork-Submitter: Nicola Vetrini
X-Patchwork-Id: 13337938
From: Nicola Vetrini
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
    ayan.kumar.halder@amd.com, consulting@bugseng.com, Nicola Vetrini,
    Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [XEN PATCH 2/4] x86/mtrr: address MISRA C:2012 Rule 5.3
Date: Wed, 2 Aug 2023 11:44:29 +0200
Message-Id: <16fa23ecb465442c566a18af0a569092075eef26.1690969271.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1

Rename variables to avoid shadowing and thus address MISRA C:2012 Rule 5.3:
"An identifier declared in an inner scope shall not hide an identifier
declared in an outer scope".

No functional changes.
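For background, a minimal, hypothetical C snippet (not taken from the Xen
sources) showing the kind of hiding Rule 5.3 forbids:

#include <stdio.h>

static int counter;               /* identifier declared in an outer scope */

static void bump(int amount)
{
    if ( amount > 0 )
    {
        int counter = amount;     /* inner declaration hides the outer
                                     'counter': a Rule 5.3 violation */
        printf("local counter: %d\n", counter);
    }
    /* the file-scope 'counter' was never touched, which is easy to misread */
}

int main(void)
{
    bump(3);
    printf("file-scope counter: %d\n", counter); /* still 0 */
    return 0;
}

Renaming one of the identifiers, as this patch does with 'mtrr', removes the
ambiguity without changing behaviour.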
Signed-off-by: Nicola Vetrini
---
 xen/arch/x86/hvm/mtrr.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 29f3fb1607..d504d1e43b 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -687,13 +687,13 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 
 static int cf_check hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
 {
-    const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
+    const struct mtrr_state *mtrr = &v->arch.hvm.mtrr;
     struct hvm_hw_mtrr hw_mtrr = {
-        .msr_mtrr_def_type = mtrr_state->def_type |
-                             MASK_INSR(mtrr_state->fixed_enabled,
+        .msr_mtrr_def_type = mtrr->def_type |
+                             MASK_INSR(mtrr->fixed_enabled,
                                        MTRRdefType_FE) |
-                             MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-        .msr_mtrr_cap = mtrr_state->mtrr_cap,
+                             MASK_INSR(mtrr->enabled, MTRRdefType_E),
+        .msr_mtrr_cap = mtrr->mtrr_cap,
     };
     unsigned int i;
 
@@ -710,14 +710,14 @@ static int cf_check hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
 
     for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
     {
-        hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
-        hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+        hw_mtrr.msr_mtrr_var[i * 2] = mtrr->var_ranges->base;
+        hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr->var_ranges->mask;
     }
 
     BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
-                 sizeof(mtrr_state->fixed_ranges));
+                 sizeof(mtrr->fixed_ranges));
 
-    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr->fixed_ranges,
            sizeof(hw_mtrr.msr_mtrr_fixed));
 
     return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
@@ -727,7 +727,7 @@ static int cf_check hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid, i;
     struct vcpu *v;
-    struct mtrr_state *mtrr_state;
+    struct mtrr_state *mtrr;
     struct hvm_hw_mtrr hw_mtrr;
 
     vcpuid = hvm_load_instance(h);
@@ -749,26 +749,26 @@ static int cf_check hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
         return -EINVAL;
     }
 
-    mtrr_state = &v->arch.hvm.mtrr;
+    mtrr = &v->arch.hvm.mtrr;
 
     hvm_set_guest_pat(v, hw_mtrr.msr_pat_cr);
 
-    mtrr_state->mtrr_cap = hw_mtrr.msr_mtrr_cap;
+    mtrr->mtrr_cap = hw_mtrr.msr_mtrr_cap;
 
     for ( i = 0; i < NUM_FIXED_MSR; i++ )
-        mtrr_fix_range_msr_set(d, mtrr_state, i, hw_mtrr.msr_mtrr_fixed[i]);
+        mtrr_fix_range_msr_set(d, mtrr, i, hw_mtrr.msr_mtrr_fixed[i]);
 
     for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
     {
-        mtrr_var_range_msr_set(d, mtrr_state,
+        mtrr_var_range_msr_set(d, mtrr,
                                MSR_IA32_MTRR_PHYSBASE(i),
                                hw_mtrr.msr_mtrr_var[i * 2]);
-        mtrr_var_range_msr_set(d, mtrr_state,
+        mtrr_var_range_msr_set(d, mtrr,
                                MSR_IA32_MTRR_PHYSMASK(i),
                                hw_mtrr.msr_mtrr_var[i * 2 + 1]);
     }
 
-    mtrr_def_type_msr_set(d, mtrr_state, hw_mtrr.msr_mtrr_def_type);
+    mtrr_def_type_msr_set(d, mtrr, hw_mtrr.msr_mtrr_def_type);
 
     return 0;
 }