From patchwork Mon May 8 15:45:58 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234678
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 6.1 1/5] KVM: x86/mmu: Avoid indirect call for get_cr3
Date: Mon, 8 May 2023 17:45:58 +0200
Message-Id: <20230508154602.30008-2-minipli@grsecurity.net>
In-Reply-To: <20230508154602.30008-1-minipli@grsecurity.net>
References: <20230508154602.30008-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Paolo Bonzini

[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]

Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3
(the exception is only nested TDP). Hardcode the default instead of
using the get_cr3 function, avoiding a retpoline if they are enabled.

Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v6.1.x
---
 arch/x86/kvm/mmu/mmu.c         | 31 ++++++++++++++++++++-----------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b6f96d47e596..f2a10c7d1369 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -232,6 +232,20 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
+static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
+{
+	return kvm_read_cr3(vcpu);
+}
+
+static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
+						  struct kvm_mmu *mmu)
+{
+	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
+		return kvm_read_cr3(vcpu);
+
+	return mmu->get_guest_pgd(vcpu);
+}
+
 static inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
@@ -3661,7 +3675,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	int quadrant, i, r;
 	hpa_t root;
 
-	root_pgd = mmu->get_guest_pgd(vcpu);
+	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	root_gfn = root_pgd >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
@@ -4112,7 +4126,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	arch.token = alloc_apf_token(vcpu);
 	arch.gfn = gfn;
 	arch.direct_map = vcpu->arch.mmu->root_role.direct;
-	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
 
 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
@@ -4131,7 +4145,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 		return;
 
 	if (!vcpu->arch.mmu->root_role.direct &&
-	    work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
+	    work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;
 
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
@@ -4488,11 +4502,6 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 
-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
-{
-	return kvm_read_cr3(vcpu);
-}
-
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
@@ -5043,7 +5052,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 
@@ -5193,7 +5202,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
 
 	kvm_init_shadow_mmu(vcpu, cpu_role);
 
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 }
@@ -5207,7 +5216,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 		return;
 
 	g_context->cpu_role.as_u64 = new_mode.as_u64;
-	g_context->get_guest_pgd = get_cr3;
+	g_context->get_guest_pgd = get_guest_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5ab5f94dcb6f..1f4f5e703f13 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -324,7 +324,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	trace_kvm_mmu_pagetable_walk(addr, access);
 retry_walk:
 	walker->level = mmu->cpu_role.base.level;
-	pte = mmu->get_guest_pgd(vcpu);
+	pte = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
 
 #if PTTYPE == 64
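
As a stand-alone illustration of the devirtualization trick used by
kvm_mmu_get_guest_pgd() above (plain user-space C, not KVM code; struct ctx,
get_default() and ctx_get_pgd() are names made up for this sketch): when the
function pointer still holds the known default, the wrapper calls the default
directly, so the compiler emits a direct call instead of a retpoline-guarded
indirect one.

#include <stdio.h>

struct ctx {
	unsigned long (*get_pgd)(const struct ctx *);
	unsigned long cr3;
};

static unsigned long get_default(const struct ctx *c)
{
	return c->cr3;
}

/* Same shape as kvm_mmu_get_guest_pgd(): the common case avoids the indirect call. */
static unsigned long ctx_get_pgd(const struct ctx *c)
{
	if (c->get_pgd == get_default)
		return get_default(c);	/* direct call, no retpoline */

	return c->get_pgd(c);		/* rare case: genuine indirect call */
}

int main(void)
{
	struct ctx c = { .get_pgd = get_default, .cr3 = 0x1000 };

	printf("pgd = %#lx\n", ctx_get_pgd(&c));
	return 0;
}

The kernel patch additionally gates the shortcut on IS_ENABLED(CONFIG_RETPOLINE),
since the indirect call is only expensive when retpolines are in use.
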
From patchwork Mon May 8 15:45:59 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234679

From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 6.1 2/5] KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Date: Mon, 8 May 2023 17:45:59 +0200
Message-Id: <20230508154602.30008-3-minipli@grsecurity.net>
In-Reply-To: <20230508154602.30008-1-minipli@grsecurity.net>
References: <20230508154602.30008-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]

There is no need to unload the MMU roots with TDP enabled when only
CR0.WP has changed -- the paging structures are still valid, only the
permission bitmap needs to be updated.

One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
implement kernel W^X.

The optimization brings a huge performance gain for this case as the
following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
grsecurity L1 VM shows (runtime in seconds, lower is better):

                        legacy   TDP     shadow
  kvm-x86/next@d8708b   8.43s    9.45s   70.3s
  +patch                5.39s    5.63s   70.2s

For legacy MMU this is ~36% faster, for TDP MMU even ~40% faster. Also
TDP and legacy MMU now both have a similar runtime which vanishes the
need to disable TDP MMU for grsecurity.

Shadow MMU sees no measurable difference and is still slow, as expected.

[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ab09d292bded..496bb9a58273 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -910,6 +910,18 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
 
 void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
 {
+	/*
+	 * CR0.WP is incorporated into the MMU role, but only for non-nested,
+	 * indirect shadow MMUs. If TDP is enabled, the MMU's metadata needs
+	 * to be updated, e.g. so that emulating guest translations does the
+	 * right thing, but there's no need to unload the root as CR0.WP
+	 * doesn't affect SPTEs.
+	 */
+	if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) {
+		kvm_init_mmu(vcpu);
+		return;
+	}
+
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_async_pf_hash_reset(vcpu);
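
A small, self-contained sketch of the fast-path test added to kvm_post_set_cr0()
above (plain user-space C, not KVM code; only_wp_toggled() is a made-up name):
"(cr0 ^ old_cr0) == X86_CR0_WP" is true exactly when CR0.WP is the only bit that
changed, which is the one case the patch can handle by refreshing the permission
bitmap instead of unloading the MMU roots.

#include <assert.h>
#include <stdio.h>

#define X86_CR0_PG (1UL << 31)
#define X86_CR0_WP (1UL << 16)

static int only_wp_toggled(unsigned long old_cr0, unsigned long cr0)
{
	return (cr0 ^ old_cr0) == X86_CR0_WP;
}

int main(void)
{
	unsigned long cr0 = X86_CR0_PG | X86_CR0_WP;

	assert(only_wp_toggled(cr0, cr0 & ~X86_CR0_WP));	/* WP cleared: fast path */
	assert(!only_wp_toggled(cr0, cr0));			/* nothing changed */
	assert(!only_wp_toggled(cr0, 0));			/* more than WP changed: full path */

	puts("CR0.WP-only toggle detection behaves as expected");
	return 0;
}
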
From patchwork Mon May 8 15:46:00 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234680
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 6.1 3/5] KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Date: Mon, 8 May 2023 17:46:00 +0200
Message-Id: <20230508154602.30008-4-minipli@grsecurity.net>
In-Reply-To: <20230508154602.30008-1-minipli@grsecurity.net>
References: <20230508154602.30008-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]

Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
want to know the state of certain bits instead of the whole register.

This not only makes the intent cleaner, it also avoids a potential
VMREAD in case the tested bits aren't guest owned.

Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/pmu.c     | 4 ++--
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index de1fd7369736..20cd746cf467 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -418,9 +418,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;
 
-	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
 	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
-	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
+	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
 		return 1;
 
 	*data = pmc_read_counter(pmc) & mask;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bc868958e91f..a5009b66df9a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5417,7 +5417,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 		break;
 	case 3: /* lmsw */
 		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
-		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
+		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
 		kvm_lmsw(vcpu, val);
 
 		return kvm_skip_emulated_instruction(vcpu);
@@ -7496,7 +7496,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
 		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
 
-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
+	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
 			cache = MTRR_TYPE_WRBACK;
 		else
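
The following is a rough user-space model of why kvm_read_cr0_bits() /
kvm_read_cr4_bits() can be cheaper than reading the whole register (not the
real KVM helpers; struct vcpu_model and read_cr0_bits() are invented for this
sketch): the expensive refresh -- a VMREAD in KVM -- is only needed when one
of the requested bits is guest owned, because only then can the cached value
be stale.

#include <stdio.h>

struct vcpu_model {
	unsigned long cached_cr0;	/* possibly stale for guest-owned bits */
	unsigned long guest_owned_bits;	/* bits the guest may flip without exiting */
	unsigned long hw_cr0;		/* stands in for what a VMREAD would return */
	unsigned int vmreads;		/* counts the "expensive" operations */
};

static unsigned long read_cr0_bits(struct vcpu_model *v, unsigned long mask)
{
	if (mask & v->guest_owned_bits) {
		v->vmreads++;			/* would be a VMREAD in KVM */
		v->cached_cr0 = v->hw_cr0;
	}
	return v->cached_cr0 & mask;
}

int main(void)
{
	struct vcpu_model v = {
		.cached_cr0 = 0x80000011UL,
		.guest_owned_bits = 1UL << 16,	/* CR0.WP, as after patches 4-5/5 */
		.hw_cr0 = 0x80010011UL,
	};
	unsigned long val;

	/* CR0.PE (bit 0) is not guest owned: served from the cache, no VMREAD. */
	val = read_cr0_bits(&v, 1UL << 0);
	printf("PE=%lu vmreads=%u\n", val, v.vmreads);

	/* CR0.WP is guest owned: forces the refresh. */
	val = read_cr0_bits(&v, 1UL << 16);
	printf("WP=%#lx vmreads=%u\n", val, v.vmreads);
	return 0;
}
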
From patchwork Mon May 8 15:46:01 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234681

From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 6.1 4/5] KVM: VMX: Make CR0.WP a guest owned bit
Date: Mon, 8 May 2023 17:46:01 +0200
Message-Id: <20230508154602.30008-5-minipli@grsecurity.net>
In-Reply-To: <20230508154602.30008-1-minipli@grsecurity.net>
References: <20230508154602.30008-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]

Guests like grsecurity that make heavy use of CR0.WP to implement kernel
level W^X will suffer from the implied VMEXITs.

With EPT there is no need to intercept a guest change of CR0.WP, so
simply make it a guest owned bit if we can do so.

This implies that a read of a guest's CR0.WP bit might need a VMREAD.
However, the only potentially affected user seems to be kvm_init_mmu()
which is a heavy operation to begin with. But also most callers already
cache the full value of CR0 anyway, so no additional VMREAD is needed.
The only exception is nested_vmx_load_cr3().
This change is VMX-specific, as SVM has no such fine grained control
register intercept control.

Suggested-by: Sean Christopherson
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v6.1.x
---
 arch/x86/kvm/kvm_cache_regs.h |  2 +-
 arch/x86/kvm/vmx/nested.c     |  4 ++--
 arch/x86/kvm/vmx/vmx.c        |  2 +-
 arch/x86/kvm/vmx/vmx.h        | 18 ++++++++++++++++++
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3febc342360c..896cc7394944 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -4,7 +4,7 @@
 
 #include
 
-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
+#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP)
 #define KVM_POSSIBLE_CR4_GUEST_BITS				  \
 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
 	 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1d00f7824da1..d5b1ccd2f362 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4455,7 +4455,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	 * CR0_GUEST_HOST_MASK is already set in the original vmcs01
 	 * (KVM doesn't change it);
 	 */
-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmx_set_cr0(vcpu, vmcs12->host_cr0);
 
 	/* Same as above - no reason to call set_cr4_guest_host_mask(). */
@@ -4606,7 +4606,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
 	 */
 	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
 
-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
 
 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a5009b66df9a..29d236a65ed5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4695,7 +4695,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	/* 22.2.1, 20.8.1 */
 	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
 
-	vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
 
 	set_cr4_guest_host_mask(vmx);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a3da84f4ea45..e2b04f4c0fef 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -640,6 +640,24 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
 				(1 << VCPU_EXREG_EXIT_INFO_1) | \
 				(1 << VCPU_EXREG_EXIT_INFO_2))
 
+static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
+{
+	unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+
+	/*
+	 * CR0.WP needs to be intercepted when KVM is shadowing legacy paging
+	 * in order to construct shadow PTEs with the correct protections.
+	 * Note! CR0.WP technically can be passed through to the guest if
+	 * paging is disabled, but checking CR0.PG would generate a cyclical
+	 * dependency of sorts due to forcing the caller to ensure CR0 holds
+	 * the correct value prior to determining which CR0 bits can be owned
+	 * by L1. Keep it simple and limit the optimization to EPT.
+	 */
+	if (!enable_ept)
+		bits &= ~X86_CR0_WP;
+	return bits;
+}
+
 static inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm)
 {
 	return container_of(kvm, struct kvm_vmx, kvm);
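
As a hedged illustration of what "guest owned" means in this patch (a
user-space model, not VMX code; struct vmcs_model and guest_writes_cr0() are
invented for the sketch): bits that are set in CR0_GUEST_HOST_MASK are host
owned, so a guest write that changes one of them causes a VM exit, while
cleared bits are guest owned and can be written without exiting. Making CR0.WP
guest owned therefore removes the exits for the CR0.WP-toggling W^X workload
described in patch 2/5.

#include <stdio.h>

#define X86_CR0_TS (1UL << 3)
#define X86_CR0_WP (1UL << 16)

struct vmcs_model {
	unsigned long guest_host_mask;	/* 1 = host owned, 0 = guest owned */
	unsigned long cr0;
	unsigned int exits;
};

static void guest_writes_cr0(struct vmcs_model *v, unsigned long new_cr0)
{
	if ((v->cr0 ^ new_cr0) & v->guest_host_mask)
		v->exits++;	/* host-owned bit touched: VM exit, host emulates the write */

	v->cr0 = new_cr0;
}

int main(void)
{
	/* Before this patch: WP host owned. After (with EPT): WP guest owned. */
	struct vmcs_model before = { .guest_host_mask = ~X86_CR0_TS };
	struct vmcs_model after = { .guest_host_mask = ~(X86_CR0_TS | X86_CR0_WP) };

	guest_writes_cr0(&before, X86_CR0_WP);
	guest_writes_cr0(&after, X86_CR0_WP);

	printf("exits before=%u after=%u\n", before.exits, after.exits);
	return 0;
}

This mirrors how init_vmcs() above derives CR0_GUEST_HOST_MASK from the
negation of cr0_guest_owned_bits.
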
From patchwork Mon May 8 15:46:02 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234682
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 6.1 5/5] KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults
Date: Mon, 8 May 2023 17:46:02 +0200
Message-Id: <20230508154602.30008-6-minipli@grsecurity.net>
In-Reply-To: <20230508154602.30008-1-minipli@grsecurity.net>
References: <20230508154602.30008-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Sean Christopherson

[ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]

Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for
permission faults when emulating a guest memory access and CR0.WP may be
guest owned. If the guest toggles only CR0.WP and triggers emulation of
a supervisor write, e.g. when KVM is emulating UMIP, KVM may consume a
stale CR0.WP, i.e. use stale protection bits metadata.

Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP
is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't
support per-bit interception controls for CR0. Don't bother checking for
EPT vs. NPT as the "old == new" check will always be true under NPT, i.e.
the only cost is the read of vcpu->arch.cr4 (SVM unconditionally grabs
CR0 from the VMCB on VM-Exit).

Reported-by: Mathias Krause
Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.net
Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")
Tested-by: Mathias Krause
Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v6.1.x
---
 arch/x86/kvm/mmu.h     | 26 +++++++++++++++++++++++++-
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++++
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..59804be91b5b 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -113,6 +113,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
 int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 			  u64 fault_address, char *insn, int insn_len);
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu);
 
 int kvm_mmu_load(struct kvm_vcpu *vcpu);
 void kvm_mmu_unload(struct kvm_vcpu *vcpu);
@@ -153,6 +155,24 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 					  vcpu->arch.mmu->root_role.level);
 }
 
+static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+						    struct kvm_mmu *mmu)
+{
+	/*
+	 * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e.
+	 * @mmu's snapshot of CR0.WP and thus all related paging metadata may
+	 * be stale. Refresh CR0.WP and the metadata on-demand when checking
+	 * for permission faults. Exempt nested MMUs, i.e. MMUs for shadowing
+	 * nEPT and nNPT, as CR0.WP is ignored in both cases. Note, KVM does
+	 * need to refresh nested_mmu, a.k.a. the walker used to translate L2
+	 * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP.
+	 */
+	if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu)
+		return;
+
+	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+}
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
@@ -184,8 +204,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	u64 implicit_access = access & PFERR_IMPLICIT_ACCESS;
 	bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC;
 	int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1;
-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
+	bool fault;
+
+	kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+
+	fault = (mmu->permissions[index] >> pte_access) & 1;
 
 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
 	if (unlikely(mmu->pkru_mask)) {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f2a10c7d1369..230108a90cf3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5005,6 +5005,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
 	return role;
 }
 
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu)
+{
+	const bool cr0_wp = !!kvm_read_cr0_bits(vcpu, X86_CR0_WP);
+
+	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
+	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
+
+	if (is_cr0_wp(mmu) == cr0_wp)
+		return;
+
+	mmu->cpu_role.base.cr0_wp = cr0_wp;
+	reset_guest_paging_metadata(vcpu, mmu);
+}
+
 static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 	/* tdp_root_level is architecture forced level, use it if nonzero */
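
To close the series, a minimal sketch of the "refresh the snapshot on demand"
idea from this last patch (plain user-space C, not KVM code; struct mmu_model
and both helpers are invented for the sketch): because the guest can now flip
CR0.WP without a VM exit, the cached paging metadata is re-validated against
the live value right before it is consumed, rather than on every toggle.

#include <stdbool.h>
#include <stdio.h>

struct mmu_model {
	bool cached_cr0_wp;	/* snapshot baked into the permission bitmap */
	unsigned int rebuilds;	/* how often the metadata was recomputed */
};

static bool live_cr0_wp;	/* stands in for kvm_read_cr0_bits(vcpu, X86_CR0_WP) */

static void refresh_passthrough_bits(struct mmu_model *mmu)
{
	if (mmu->cached_cr0_wp == live_cr0_wp)
		return;				/* common case: nothing to do */

	mmu->cached_cr0_wp = live_cr0_wp;
	mmu->rebuilds++;			/* reset_guest_paging_metadata() in KVM */
}

static bool supervisor_write_faults(struct mmu_model *mmu)
{
	refresh_passthrough_bits(mmu);		/* mirrors permission_fault() after the patch */
	return mmu->cached_cr0_wp;		/* WP=1: supervisor writes to RO pages fault */
}

int main(void)
{
	struct mmu_model mmu = { .cached_cr0_wp = true };
	bool fault;

	live_cr0_wp = false;			/* guest cleared CR0.WP without a VM exit */
	fault = supervisor_write_faults(&mmu);
	printf("fault=%d rebuilds=%u\n", fault, mmu.rebuilds);
	return 0;
}
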