From patchwork Mon May 8 15:47:55 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234694
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 01/10] KVM: x86/mmu: Avoid indirect call for get_cr3
Date: Mon, 8 May 2023 17:47:55 +0200
Message-Id: <20230508154804.30078-2-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Paolo Bonzini

[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]

Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3
(the exception is only nested TDP).
Hardcode the default instead of using the get_cr3 function, avoiding a
retpoline if they are enabled.

Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.10.x
---
 arch/x86/kvm/mmu.h             | 11 +++++++++++
 arch/x86/kvm/mmu/mmu.c         | 12 ++++++------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/x86.c             |  2 +-
 4 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 581925e476d6..dcbd882545b4 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -99,6 +99,17 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 			     vcpu->arch.mmu->shadow_root_level);
 }

+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu);
+
+static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
+						  struct kvm_mmu *mmu)
+{
+	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
+		return kvm_read_cr3(vcpu);
+
+	return mmu->get_guest_pgd(vcpu);
+}
+
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		       bool prefault);

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 13bf3198d0ce..da9e7cea475a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3278,7 +3278,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	hpa_t root;
 	int i;

-	root_pgd = vcpu->arch.mmu->get_guest_pgd(vcpu);
+	root_pgd = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
 	root_gfn = root_pgd >> PAGE_SHIFT;

 	if (mmu_check_root(vcpu, root_gfn))
@@ -3652,7 +3652,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	arch.token = alloc_apf_token(vcpu);
 	arch.gfn = gfn;
 	arch.direct_map = vcpu->arch.mmu->direct_map;
-	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);

 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
@@ -3934,7 +3934,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd, bool skip_tlb_flush,
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);

-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
 {
 	return kvm_read_cr3(vcpu);
 }
@@ -4523,7 +4523,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->invlpg = NULL;
 	context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
 	context->direct_map = true;
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
@@ -4718,7 +4718,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 			   kvm_read_cr4_bits(vcpu, X86_CR4_PAE),
 			   vcpu->arch.efer);

-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 }
@@ -4756,7 +4756,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 		return;

 	g_context->mmu_role.as_u64 = new_role.as_u64;
-	g_context->get_guest_pgd = get_cr3;
+	g_context->get_guest_pgd = get_guest_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c6daeeff1d9c..3d84fc56caca 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -330,7 +330,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	trace_kvm_mmu_pagetable_walk(addr, access);
 retry_walk:
 	walker->level = mmu->root_level;
-	pte           = mmu->get_guest_pgd(vcpu);
+	pte           = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	have_ad       = PT_HAVE_ACCESSED_DIRTY(mmu);

 #if PTTYPE == 64

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0ccc8d1b972c..7464ca3806fa 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11080,7 +11080,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 		return;

 	if (!vcpu->arch.mmu->direct_map &&
-	      work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
+	      work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;

 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);

From patchwork Mon May 8 15:47:56 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234693
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 02/10] KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Date: Mon, 8 May 2023 17:47:56 +0200
Message-Id: <20230508154804.30078-3-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]

There is no need to unload the MMU roots with TDP enabled when only
CR0.WP has changed -- the paging structures are still valid, only the
permission bitmap needs to be updated.

One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
implement kernel W^X.

The optimization brings a huge performance gain for this case as the
following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
grsecurity L1 VM shows (runtime in seconds, lower is better):

                        legacy     TDP    shadow
kvm-x86/next@d8708b      8.43s    9.45s    70.3s
+patch                   5.39s    5.63s    70.2s

For legacy MMU this is ~36% faster, for TDP MMU even ~40% faster. Also
TDP and legacy MMU now both have a similar runtime which vanishes the
need to disable TDP MMU for grsecurity.

Shadow MMU sees no measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.10.x
---
- account for different kvm_init_mmu() arguments

 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7464ca3806fa..bd4d64c1bdf9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -868,6 +868,18 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)

 	kvm_x86_ops.set_cr0(vcpu, cr0);

+	/*
+	 * CR0.WP is incorporated into the MMU role, but only for non-nested,
+	 * indirect shadow MMUs. If TDP is enabled, the MMU's metadata needs
+	 * to be updated, e.g. so that emulating guest translations does the
+	 * right thing, but there's no need to unload the root as CR0.WP
+	 * doesn't affect SPTEs.
+	 */
+	if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) {
+		kvm_init_mmu(vcpu, false);
+		return 0;
+	}
+
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_async_pf_hash_reset(vcpu);

From patchwork Mon May 8 15:47:57 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234695
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 03/10] KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Date: Mon, 8 May 2023 17:47:57 +0200
Message-Id: <20230508154804.30078-4-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]

Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
want to know the state of certain bits instead of the whole register.

This not only makes the intent cleaner, it also avoids a potential
VMREAD in case the tested bits aren't guest owned.
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.10.x
---
 arch/x86/kvm/pmu.c     | 4 ++--
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e5322a0dc5bb..5b494564faa2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -374,9 +374,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;

-	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	if (!kvm_read_cr4_bits(vcpu, X86_CR4_PCE) &&
 	    (kvm_x86_ops.get_cpl(vcpu) != 0) &&
-	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
+	    kvm_read_cr0_bits(vcpu, X86_CR0_PE))
 		return 1;

 	*data = pmc_read_counter(pmc) & mask;

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2c5d8b9f9873..db769fc68378 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5180,7 +5180,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 			break;
 		case 3: /* lmsw */
 			val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
-			trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
+			trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
 			kvm_lmsw(vcpu, val);

 			return kvm_skip_emulated_instruction(vcpu);
@@ -7212,7 +7212,7 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 		goto exit;
 	}

-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
+	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
 		ipat = VMX_EPT_IPAT_BIT;
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
 			cache = MTRR_TYPE_WRBACK;

From patchwork Mon May 8 15:47:58 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234696
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 04/10] KVM: VMX: Make CR0.WP a guest owned bit
Date: Mon, 8 May 2023 17:47:58 +0200
Message-Id: <20230508154804.30078-5-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]

Guests like grsecurity that make heavy use of CR0.WP to implement kernel
level W^X will suffer from the implied VMEXITs.

With EPT there is no need to intercept a guest change of CR0.WP, so
simply make it a guest owned bit if we can do so.

This implies that a read of a guest's CR0.WP bit might need a VMREAD.
However, the only potentially affected user seems to be kvm_init_mmu()
which is a heavy operation to begin with. But also most callers already
cache the full value of CR0 anyway, so no additional VMREAD is needed.
The only exception is nested_vmx_load_cr3().
This change is VMX-specific, as SVM has no such fine grained control
register intercept control.

Suggested-by: Sean Christopherson
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.10.x
---
 arch/x86/kvm/kvm_cache_regs.h |  2 +-
 arch/x86/kvm/vmx/nested.c     |  4 ++--
 arch/x86/kvm/vmx/vmx.c        |  2 +-
 arch/x86/kvm/vmx/vmx.h        | 18 ++++++++++++++++++
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index a889563ad02d..4471aa86270a 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -4,7 +4,7 @@

-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
+#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP)
 #define KVM_POSSIBLE_CR4_GUEST_BITS				  \
 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR  \
 	 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c165ddbb672f..5ddb177dd40d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4247,7 +4247,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	 * CR0_GUEST_HOST_MASK is already set in the original vmcs01
 	 * (KVM doesn't change it);
 	 */
-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmx_set_cr0(vcpu, vmcs12->host_cr0);

 	/* Same as above - no reason to call set_cr4_guest_host_mask(). */
@@ -4397,7 +4397,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
 	 */
 	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));

-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));

 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index db769fc68378..ff36d93b2552 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4456,7 +4456,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	/* 22.2.1, 20.8.1 */
 	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());

-	vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);

 	set_cr4_guest_host_mask(vmx);

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index ed4b6da83aa8..28210741fd08 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -447,6 +447,24 @@ static inline u32 vmx_vmexit_ctrl(void)
 u32 vmx_exec_control(struct vcpu_vmx *vmx);
 u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx);

+static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
+{
+	unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+
+	/*
+	 * CR0.WP needs to be intercepted when KVM is shadowing legacy paging
+	 * in order to construct shadow PTEs with the correct protections.
+	 * Note! CR0.WP technically can be passed through to the guest if
+	 * paging is disabled, but checking CR0.PG would generate a cyclical
+	 * dependency of sorts due to forcing the caller to ensure CR0 holds
+	 * the correct value prior to determining which CR0 bits can be owned
+	 * by L1. Keep it simple and limit the optimization to EPT.
+	 */
+	if (!enable_ept)
+		bits &= ~X86_CR0_WP;
+
+	return bits;
+}
+
 static inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm)
 {
 	return container_of(kvm, struct kvm_vmx, kvm);

From patchwork Mon May 8 15:47:59 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234697
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 05/10] KVM: x86: Read and pass all CR0/CR4 role bits to shadow MMU helper
Date: Mon, 8 May 2023 17:47:59 +0200
Message-Id: <20230508154804.30078-6-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Sean Christopherson

[ Upstream commit 20f632bd0060e12fca083adc44b097231e2f4649 ]

Grab all CR0/CR4 MMU role bits from current vCPU state when initializing
a non-nested shadow MMU. Extract the masks from kvm_post_set_cr{0,4}(),
as the CR0/CR4 update masks must exactly match the mmu_role bits, with
one exception (see below). The "full" CR0/CR4 will be used by future
commits to initialize the MMU and its role, as opposed to the current
approach of pulling everything from vCPU, which is incorrect for certain
flows, e.g. nested NPT.

CR4.LA57 is an exception, as it can be toggled on VM-Exit (for L1's MMU)
but can't be toggled via MOV CR4 while long mode is active. I.e. LA57
needs to be in the mmu_role, but technically doesn't need to be checked
by kvm_post_set_cr4(). However, the extra check is completely benign as
the hardware restrictions simply mean LA57 will never be _the_ cause of
a MMU reset during MOV CR4.
Signed-off-by: Sean Christopherson
Message-Id: <20210622175739.3610207-18-seanjc@google.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause # backport to v5.10.x
---
- prerequisite for Lai Jiangshan's follow-up patches
- only visible change is that changes to CR4.SMEP and CR4.LA57 are now
  taken into account as well to trigger an MMU reset in kvm_set_cr4()

 arch/x86/kvm/mmu.h     | 6 ++++++
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 arch/x86/kvm/x86.c     | 6 ++----
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index dcbd882545b4..0d73e8b45642 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -44,6 +44,12 @@
 #define PT32_ROOT_LEVEL 2
 #define PT32E_ROOT_LEVEL 3

+#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | \
+			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE | \
+			       X86_CR4_LA57)
+
+#define KVM_MMU_CR0_ROLE_BITS (X86_CR0_PG | X86_CR0_WP)
+
 static inline u64 rsvd_bits(int s, int e)
 {
 	if (e < s)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index da9e7cea475a..e1107723ffdc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4714,8 +4714,8 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;

 	kvm_init_shadow_mmu(vcpu,
-			    kvm_read_cr0_bits(vcpu, X86_CR0_PG),
-			    kvm_read_cr4_bits(vcpu, X86_CR4_PAE),
+			    kvm_read_cr0_bits(vcpu, KVM_MMU_CR0_ROLE_BITS),
+			    kvm_read_cr4_bits(vcpu, KVM_MMU_CR4_ROLE_BITS),
 			    vcpu->arch.efer);

 	context->get_guest_pgd = get_guest_cr3;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bd4d64c1bdf9..d6bb2c300e16 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -829,7 +829,6 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	unsigned long old_cr0 = kvm_read_cr0(vcpu);
 	unsigned long pdptr_bits = X86_CR0_CD | X86_CR0_NW | X86_CR0_PG;
-	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;

 	cr0 |= X86_CR0_ET;

@@ -885,7 +884,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 		kvm_async_pf_hash_reset(vcpu);
 	}

-	if ((cr0 ^ old_cr0) & update_bits)
+	if ((cr0 ^ old_cr0) & KVM_MMU_CR0_ROLE_BITS)
 		kvm_mmu_reset_context(vcpu);

 	if (((cr0 ^ old_cr0) & X86_CR0_CD) &&

@@ -1017,7 +1016,6 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	unsigned long old_cr4 = kvm_read_cr4(vcpu);
 	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
				   X86_CR4_SMEP;
-	unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;

 	if (kvm_valid_cr4(vcpu, cr4))
 		return 1;

@@ -1044,7 +1042,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	kvm_x86_ops.set_cr4(vcpu, cr4);

-	if (((cr4 ^ old_cr4) & mmu_role_bits) ||
+	if (((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS) ||
 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
 		kvm_mmu_reset_context(vcpu);

From patchwork Mon May 8 15:48:00 2023
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.10 06/10] KVM: X86: Don't reset mmu context when X86_CR4_PCIDE 1->0
Date: Mon, 8 May 2023 17:48:00 +0200
Message-Id: <20230508154804.30078-7-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit 552617382c197949ff965a3559da8952bf3c1fa5 ]

X86_CR4_PCIDE doesn't participate in kvm_mmu_role, so the mmu context
doesn't need to be reset. It is only required to flush all the guest
TLB.
Signed-off-by: Lai Jiangshan
Reviewed-by: Sean Christopherson
Message-Id: <20210919024246.89230-2-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause # backport to v5.10.x
---
- no kvm_post_set_cr4() in this kernel yet, it's part of kvm_set_cr4()

 arch/x86/kvm/x86.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d6bb2c300e16..952281f18987 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1042,9 +1042,10 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	kvm_x86_ops.set_cr4(vcpu, cr4);

-	if (((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS) ||
-	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+	if ((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS)
 		kvm_mmu_reset_context(vcpu);
+	else if (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))
+		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);

 	if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
 		kvm_update_cpuid_runtime(vcpu);

From patchwork Mon May 8 15:48:01 2023
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.10 07/10] KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE
Date: Mon, 8 May 2023 17:48:01 +0200
Message-Id: <20230508154804.30078-8-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit a91a7c7096005113d8e749fd8dfdd3e1eecee263 ]

X86_CR4_PGE doesn't participate in kvm_mmu_role, so the mmu context
doesn't need to be reset. It is only required to flush all the guest
TLB.

It is also inconsistent that X86_CR4_PGE is in KVM_MMU_CR4_ROLE_BITS
while kvm_mmu_role doesn't use X86_CR4_PGE. So X86_CR4_PGE is also
removed from KVM_MMU_CR4_ROLE_BITS.
Signed-off-by: Lai Jiangshan
Reviewed-by: Sean Christopherson
Message-Id: <20210919024246.89230-3-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause # backport to v5.10.x
---
- no kvm_post_set_cr4() in this kernel yet, it's part of kvm_set_cr4()

 arch/x86/kvm/mmu.h | 5 ++---
 arch/x86/kvm/x86.c | 3 ++-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 0d73e8b45642..a77f6acb46f6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -44,9 +44,8 @@
 #define PT32_ROOT_LEVEL 2
 #define PT32E_ROOT_LEVEL 3

-#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | \
-			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE | \
-			       X86_CR4_LA57)
+#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PSE | X86_CR4_PAE | X86_CR4_LA57 | \
+			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE)

 #define KVM_MMU_CR0_ROLE_BITS (X86_CR0_PG | X86_CR0_WP)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 952281f18987..b2378ec80305 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1044,7 +1044,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if ((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS)
 		kvm_mmu_reset_context(vcpu);
-	else if (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))
+	else if (((cr4 ^ old_cr4) & X86_CR4_PGE) ||
+		 (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
 		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);

 	if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))

From patchwork Mon May 8 15:48:02 2023
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.10 08/10] KVM: X86: Ensure that dirty PDPTRs are loaded
Date: Mon, 8 May 2023 17:48:02 +0200
Message-Id: <20230508154804.30078-9-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit 2c5653caecc4807b8abfe9c41880ac38417be7bf ]

For VMX with EPT, dirty PDPTRs need to be loaded before the next
vmentry via vmx_load_mmu_pgd(). But not all paths that call
load_pdptrs() will cause vmx_load_mmu_pgd() to be invoked.

Normally, kvm_mmu_reset_context() is used to cause KVM_REQ_LOAD_MMU_PGD,
but sometimes it is skipped:

* commit d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and
  CR0.NW are changed") skips kvm_mmu_reset_context() after
  load_pdptrs() when changing CR0.CD and CR0.NW.

* commit 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current
  PCID on MOV CR3 w/ flush") skips KVM_REQ_LOAD_MMU_PGD after
  load_pdptrs() when rewriting the CR3 with the same value.
* commit a91a7c709600 ("KVM: X86: Don't reset mmu context when
  toggling X86_CR4_PGE") skips kvm_mmu_reset_context() after
  load_pdptrs() when changing CR4.PGE.

Fixes: d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed")
Fixes: 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current PCID on MOV CR3 w/ flush")
Fixes: a91a7c709600 ("KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE")
Signed-off-by: Lai Jiangshan
Message-Id: <20211108124407.12187-2-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause # backport to v5.10.x
---
 arch/x86/kvm/x86.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b2378ec80305..038ac5bbdd19 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -794,6 +794,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
+	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);

 out:

From patchwork Mon May 8 15:48:03 2023
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.10 09/10] KVM: x86/mmu: Reconstruct shadow page root if the guest PDPTEs is changed
Date: Mon, 8 May 2023 17:48:03 +0200
Message-Id: <20230508154804.30078-10-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit 6b123c3a89a90ac6418e4d64b1e23f09d458a77d ]

For shadow paging, the page table needs to be reconstructed before the
coming VMENTER if the guest PDPTEs are changed. But not all paths that
call load_pdptrs() will cause the page tables to be reconstructed.
Normally, kvm_mmu_reset_context() and kvm_mmu_free_roots() are used to
launch later reconstruction.

The commit d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and
CR0.NW are changed") skips kvm_mmu_reset_context() after load_pdptrs()
when changing CR0.CD and CR0.NW.

The commit 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current
PCID on MOV CR3 w/ flush") skips kvm_mmu_free_roots() after
load_pdptrs() when rewriting the CR3 with the same value.

The commit a91a7c709600 ("KVM: X86: Don't reset mmu context when
toggling X86_CR4_PGE") skips kvm_mmu_reset_context() after
load_pdptrs() when changing CR4.PGE.

Guests like Linux would keep the PDPTEs unchanged for every instance of
pagetable, so this missing reconstruction causes no problem for Linux
guests.
Fixes: d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed")
Fixes: 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current PCID on MOV CR3 w/ flush")
Fixes: a91a7c709600 ("KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE")
Suggested-by: Sean Christopherson
Signed-off-by: Lai Jiangshan
Message-Id: <20211216021938.11752-3-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/x86.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 038ac5bbdd19..065e1a5b3b94 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -792,6 +792,13 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	}
 	ret = 1;

+	/*
+	 * Marking VCPU_EXREG_PDPTR dirty doesn't work for !tdp_enabled.
+	 * Shadow page roots need to be reconstructed instead.
+	 */
+	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
+		kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);
+
 	memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
 	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);

From patchwork Mon May 8 15:48:04 2023
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.10 10/10] KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults
Date: Mon, 8 May 2023 17:48:04 +0200
Message-Id: <20230508154804.30078-11-minipli@grsecurity.net>
In-Reply-To: <20230508154804.30078-1-minipli@grsecurity.net>
References: <20230508154804.30078-1-minipli@grsecurity.net>

From: Sean Christopherson

[ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]

Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for
permission faults when emulating a guest memory access and CR0.WP may
be guest owned. If the guest toggles only CR0.WP and triggers emulation
of a supervisor write, e.g. when KVM is emulating UMIP, KVM may consume
a stale CR0.WP, i.e. use stale protection bits metadata.

Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP
is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't
support per-bit interception controls for CR0. Don't bother checking
for EPT vs. NPT as the "old == new" check will always be true under
NPT, i.e. the only cost is the read of vcpu->arch.cr4 (SVM
unconditionally grabs CR0 from the VMCB on VM-Exit).
Reported-by: Mathias Krause
Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.net
Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")
Tested-by: Mathias Krause
Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.10.x
---
- this kernel lacks the MMU role bits access helpers, so I simply open
  coded them
- it also has "historic" ones for vCPU ones, like is_write_protection()
- no reset_guest_paging_metadata() yet either, so I open-coded its
  v5.10 pendant as well

 arch/x86/kvm/mmu.h     | 26 +++++++++++++++++++++++++-
 arch/x86/kvm/mmu/mmu.c | 16 ++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index a77f6acb46f6..ee4dd4eb7c1c 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -70,6 +70,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
 int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
			  u64 fault_address, char *insn, int insn_len);
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu);

 static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu)
 {
@@ -171,6 +173,24 @@ static inline bool is_write_protection(struct kvm_vcpu *vcpu)
 	return kvm_read_cr0_bits(vcpu, X86_CR0_WP);
 }

+static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+						    struct kvm_mmu *mmu)
+{
+	/*
+	 * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e.
+	 * @mmu's snapshot of CR0.WP and thus all related paging metadata may
+	 * be stale.  Refresh CR0.WP and the metadata on-demand when checking
+	 * for permission faults.  Exempt nested MMUs, i.e. MMUs for shadowing
+	 * nEPT and nNPT, as CR0.WP is ignored in both cases.  Note, KVM does
+	 * need to refresh nested_mmu, a.k.a. the walker used to translate L2
+	 * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP.
+	 */
+	if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu)
+		return;
+
+	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+}
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
@@ -202,8 +222,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	unsigned long smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
 	int index = (pfec >> 1) +
		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
+	bool fault;
+
+	kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+
+	fault = (mmu->permissions[index] >> pte_access) & 1;

 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
 	if (unlikely(mmu->pkru_mask)) {

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e1107723ffdc..a17f222b628e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4486,6 +4486,22 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
 	return role;
 }

+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu)
+{
+	const bool cr0_wp = is_write_protection(vcpu);
+
+	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
+	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
+
+	if (mmu->mmu_role.base.cr0_wp == cr0_wp)
+		return;
+
+	mmu->mmu_role.base.cr0_wp = cr0_wp;
+	update_permission_bitmask(vcpu, mmu, false);
+	update_pkru_bitmask(vcpu, mmu, false);
+}
+
 static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 	/* Use 5-level TDP if and only if it's useful/necessary. */