From patchwork Mon May 8 15:49:41 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234704
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.4 1/3] KVM: x86/mmu: Avoid indirect call for get_cr3
Date: Mon, 8 May 2023 17:49:41 +0200
Message-Id: <20230508154943.30113-2-minipli@grsecurity.net>
In-Reply-To: <20230508154943.30113-1-minipli@grsecurity.net>
References: <20230508154943.30113-1-minipli@grsecurity.net>

From: Paolo Bonzini

[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]

Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3
(the exception is only nested TDP). Hardcode the default instead of
using the get_cr3 function, avoiding a retpoline if they are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.4.x
---
 arch/x86/kvm/mmu.c         | 14 +++++++-------
 arch/x86/kvm/mmu.h         | 11 +++++++++++
 arch/x86/kvm/paging_tmpl.h |  2 +-
 arch/x86/kvm/x86.c         |  2 +-
 4 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 015da62e4ad7..a6efd71a0a6e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3815,7 +3815,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
 	} else
 		BUG();
-	vcpu->arch.mmu->root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
+	vcpu->arch.mmu->root_cr3 = kvm_mmu_get_guest_cr3(vcpu, vcpu->arch.mmu);
 
 	return 0;
 }
@@ -3827,7 +3827,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	gfn_t root_gfn, root_cr3;
 	int i;
 
-	root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
+	root_cr3 = kvm_mmu_get_guest_cr3(vcpu, vcpu->arch.mmu);
 	root_gfn = root_cr3 >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
@@ -4191,7 +4191,7 @@ static int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
 	arch.gfn = gfn;
 	arch.direct_map = vcpu->arch.mmu->direct_map;
-	arch.cr3 = vcpu->arch.mmu->get_cr3(vcpu);
+	arch.cr3 = kvm_mmu_get_guest_cr3(vcpu, vcpu->arch.mmu);
 
 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
@@ -4453,7 +4453,7 @@ void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_cr3);
 
-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
 {
 	return kvm_read_cr3(vcpu);
 }
@@ -5040,7 +5040,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->shadow_root_level = kvm_x86_ops->get_tdp_level(vcpu);
 	context->direct_map = true;
 	context->set_cr3 = kvm_x86_ops->set_tdp_cr3;
-	context->get_cr3 = get_cr3;
+	context->get_cr3 = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 
@@ -5187,7 +5187,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 
 	kvm_init_shadow_mmu(vcpu);
 	context->set_cr3 = kvm_x86_ops->set_cr3;
-	context->get_cr3 = get_cr3;
+	context->get_cr3 = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 }
@@ -5202,7 +5202,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 		return;
 
 	g_context->mmu_role.as_u64 = new_role.as_u64;
-	g_context->get_cr3 = get_cr3;
+	g_context->get_cr3 = get_guest_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index ea9945a05b83..a53b223a245a 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -102,6 +102,17 @@ static inline void kvm_mmu_load_cr3(struct kvm_vcpu *vcpu)
 					      kvm_get_active_pcid(vcpu));
 }
 
+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu);
+
+static inline unsigned long kvm_mmu_get_guest_cr3(struct kvm_vcpu *vcpu,
+						  struct kvm_mmu *mmu)
+{
+	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_cr3 == get_guest_cr3)
+		return kvm_read_cr3(vcpu);
+
+	return mmu->get_cr3(vcpu);
+}
+
 /*
  * Currently, we have two sorts of write-protection, a) the first one
  * write-protects guest page to sync the guest modification, b) another one is
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1a1d2b5e7b35..b61ab1cdeab1 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -315,7 +315,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	trace_kvm_mmu_pagetable_walk(addr, access);
retry_walk:
 	walker->level = mmu->root_level;
-	pte = mmu->get_cr3(vcpu);
+	pte = kvm_mmu_get_guest_cr3(vcpu, mmu);
 	have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
 
 #if PTTYPE == 64
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f5e9590a8f31..f073c56b9301 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10130,7 +10130,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 		return;
 
 	if (!vcpu->arch.mmu->direct_map &&
-	    work->arch.cr3 != vcpu->arch.mmu->get_cr3(vcpu))
+	    work->arch.cr3 != kvm_mmu_get_guest_cr3(vcpu, vcpu->arch.mmu))
 		return;
 
 	vcpu->arch.mmu->page_fault(vcpu, work->cr2_or_gpa, 0, true);

From patchwork Mon May 8 15:49:42 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234705
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.4 2/3] KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Date: Mon, 8 May 2023 17:49:42 +0200
Message-Id: <20230508154943.30113-3-minipli@grsecurity.net>
In-Reply-To: <20230508154943.30113-1-minipli@grsecurity.net>
References: <20230508154943.30113-1-minipli@grsecurity.net>

[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]

There is no need to unload the MMU roots with TDP enabled when only
CR0.WP has changed -- the paging structures are still valid, only the
permission bitmap needs to be updated.

One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
implement kernel W^X.

The optimization brings a huge performance gain for this case as the
following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
grsecurity L1 VM shows (runtime in seconds, lower is better):

                        legacy     TDP    shadow
   kvm-x86/next@d8708b   8.43s   9.45s     70.3s
                +patch   5.39s   5.63s     70.2s

For legacy MMU this is ~36% faster, for TDP MMU even ~40% faster. Also
TDP and legacy MMU now both have a similar runtime, which eliminates the
need to disable the TDP MMU for grsecurity. Shadow MMU sees no
measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.4.x
---
 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f073c56b9301..2903fd5523bd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -799,6 +799,18 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 	kvm_x86_ops->set_cr0(vcpu, cr0);
 
+	/*
+	 * CR0.WP is incorporated into the MMU role, but only for non-nested,
+	 * indirect shadow MMUs.  If TDP is enabled, the MMU's metadata needs
+	 * to be updated, e.g. so that emulating guest translations does the
+	 * right thing, but there's no need to unload the root as CR0.WP
+	 * doesn't affect SPTEs.
+	 */
+	if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) {
+		kvm_init_mmu(vcpu, false);
+		return 0;
+	}
+
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_async_pf_hash_reset(vcpu);

From patchwork Mon May 8 15:49:43 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234706
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.4 3/3] KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Date: Mon, 8 May 2023 17:49:43 +0200
Message-Id: <20230508154943.30113-4-minipli@grsecurity.net>
In-Reply-To: <20230508154943.30113-1-minipli@grsecurity.net>
References: <20230508154943.30113-1-minipli@grsecurity.net>

[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]

Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
want to know the state of certain bits instead of the whole register.
This not only makes the intent cleaner, it also avoids a potential
VMREAD in case the tested bits aren't guest owned.
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to 5.4.x
---
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e6dd6a7e8689..9bbbb201bab5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4970,7 +4970,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 			break;
 		case 3: /* lmsw */
 			val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
-			trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
+			trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
 			kvm_lmsw(vcpu, val);
 
 			return kvm_skip_emulated_instruction(vcpu);
@@ -6982,7 +6982,7 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 		goto exit;
 	}
 
-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
+	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
 		ipat = VMX_EPT_IPAT_BIT;
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
 			cache = MTRR_TYPE_WRBACK;