From patchwork Mon May 8 15:47:02 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234683
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.15 1/8] KVM: x86/mmu: Avoid indirect call for get_cr3
Date: Mon, 8 May 2023 17:47:02 +0200
Message-Id: <20230508154709.30043-2-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Paolo Bonzini

[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]

Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3
(the exception is only nested TDP). Hardcode the default instead of
using the get_cr3 function, avoiding a retpoline if they are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.15.x
---
 arch/x86/kvm/mmu.h             | 11 +++++++++++
 arch/x86/kvm/mmu/mmu.c         | 12 ++++++------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/x86.c             |  2 +-
 4 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 7bb165c23233..baafc5d8bb9e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -115,6 +115,17 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 			     vcpu->arch.mmu->shadow_root_level);
 }

+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu);
+
+static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
+						  struct kvm_mmu *mmu)
+{
+	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
+		return kvm_read_cr3(vcpu);
+
+	return mmu->get_guest_pgd(vcpu);
+}
+
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		       bool prefault);

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4724289c8a7f..7c3b809f24b3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3472,7 +3472,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	unsigned i;
 	int r;

-	root_pgd = mmu->get_guest_pgd(vcpu);
+	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	root_gfn = root_pgd >> PAGE_SHIFT;

 	if (mmu_check_root(vcpu, root_gfn))
@@ -3899,7 +3899,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	arch.token = alloc_apf_token(vcpu);
 	arch.gfn = gfn;
 	arch.direct_map = vcpu->arch.mmu->direct_map;
-	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);

 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
@@ -4203,7 +4203,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);

-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
+unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
 {
 	return kvm_read_cr3(vcpu);
 }
@@ -4756,7 +4756,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->invlpg = NULL;
 	context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
 	context->direct_map = true;
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 	context->root_level = role_regs_to_root_level(&regs);
@@ -4933,7 +4933,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)

 	kvm_init_shadow_mmu(vcpu, &regs);

-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 }
@@ -4965,7 +4965,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 		return;

 	g_context->mmu_role.as_u64 = new_role.as_u64;
-	g_context->get_guest_pgd = get_cr3;
+	g_context->get_guest_pgd = get_guest_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
 	g_context->root_level = new_role.base.level;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index a1811f51eda9..dbb54f65d1df 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -359,7 +359,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	trace_kvm_mmu_pagetable_walk(addr, access);
 retry_walk:
 	walker->level = mmu->root_level;
-	pte = mmu->get_guest_pgd(vcpu);
+	pte = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);

 #if PTTYPE == 64
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5cb4af42ba64..018f6a394d44 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12155,7 +12155,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 		return;

 	if (!vcpu->arch.mmu->direct_map &&
-	      work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
+	      work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;

 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);

From patchwork Mon May 8 15:47:03 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234685
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.15 2/8] KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Date: Mon, 8 May 2023 17:47:03 +0200
Message-Id: <20230508154709.30043-3-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]

There is no need to unload the MMU roots with TDP enabled when only
CR0.WP has changed -- the paging structures are still valid, only the
permission bitmap needs to be updated.

One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
implement kernel W^X.

The optimization brings a huge performance gain for this case as the
following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
grsecurity L1 VM shows (runtime in seconds, lower is better):

                       legacy    TDP     shadow
 kvm-x86/next@d8708b   8.43s     9.45s   70.3s
 +patch                5.39s     5.63s   70.2s

For legacy MMU this is ~36% faster, for TDP MMU even ~40% faster. Also
TDP and legacy MMU now both have a similar runtime which vanishes the
need to disable TDP MMU for grsecurity.

Shadow MMU sees no measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 018f6a394d44..27900d4017a7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -878,6 +878,18 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
 void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0,
 		      unsigned long cr0)
 {
+	/*
+	 * CR0.WP is incorporated into the MMU role, but only for non-nested,
+	 * indirect shadow MMUs.  If TDP is enabled, the MMU's metadata needs
+	 * to be updated, e.g. so that emulating guest translations does the
+	 * right thing, but there's no need to unload the root as CR0.WP
+	 * doesn't affect SPTEs.
+	 */
+	if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) {
+		kvm_init_mmu(vcpu);
+		return;
+	}
+
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_async_pf_hash_reset(vcpu);

From patchwork Mon May 8 15:47:04 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234686
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.15 3/8] KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Date: Mon, 8 May 2023 17:47:04 +0200
Message-Id: <20230508154709.30043-4-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]

Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
want to know the state of certain bits instead of the whole register.
This not only makes the intent cleaner, it also avoids a potential
VMREAD in case the tested bits aren't guest owned.
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.15.x
---
 arch/x86/kvm/pmu.c     | 4 ++--
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 62333f9756a3..5c2b9ff8e014 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -366,9 +366,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;

-	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
 	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
-	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
+	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
 		return 1;

 	*data = pmc_read_counter(pmc) & mask;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c95c3675e8d5..566367409598 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5128,7 +5128,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 		break;
 	case 3: /* lmsw */
 		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
-		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
+		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
 		kvm_lmsw(vcpu, val);

 		return kvm_skip_emulated_instruction(vcpu);
@@ -7149,7 +7149,7 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 		goto exit;
 	}

-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
+	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
 		ipat = VMX_EPT_IPAT_BIT;
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
 			cache = MTRR_TYPE_WRBACK;

From patchwork Mon May 8 15:47:05 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234687
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause
Subject: [PATCH 5.15 4/8] KVM: VMX: Make CR0.WP a guest owned bit
Date: Mon, 8 May 2023 17:47:05 +0200
Message-Id: <20230508154709.30043-5-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

[ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]

Guests like grsecurity that make heavy use of CR0.WP to implement kernel
level W^X will suffer from the implied VMEXITs.

With EPT there is no need to intercept a guest change of CR0.WP, so
simply make it a guest owned bit if we can do so.

This implies that a read of a guest's CR0.WP bit might need a VMREAD.
However, the only potentially affected user seems to be kvm_init_mmu()
which is a heavy operation to begin with. But also most callers already
cache the full value of CR0 anyway, so no additional VMREAD is needed.
The only exception is nested_vmx_load_cr3().
This change is VMX-specific, as SVM has no such fine grained control
register intercept control.

Suggested-by: Sean Christopherson
Signed-off-by: Mathias Krause
Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.15.x
---
 arch/x86/kvm/kvm_cache_regs.h |  2 +-
 arch/x86/kvm/vmx/nested.c     |  4 ++--
 arch/x86/kvm/vmx/vmx.c        |  2 +-
 arch/x86/kvm/vmx/vmx.h        | 18 ++++++++++++++++++
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 90e1ffdc05b7..dd536243f653 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -4,7 +4,7 @@

 #include

-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
+#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP)
 #define KVM_POSSIBLE_CR4_GUEST_BITS				  \
	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR  \
	 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e4e4c1d3aa17..2bebb0d43666 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4308,7 +4308,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
	 * CR0_GUEST_HOST_MASK is already set in the original vmcs01
	 * (KVM doesn't change it);
	 */
-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
	vmx_set_cr0(vcpu, vmcs12->host_cr0);

	/* Same as above - no reason to call set_cr4_guest_host_mask(). */
@@ -4459,7 +4459,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
	 */
	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));

-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));

	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 566367409598..cab0ee27db74 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4450,7 +4450,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
	/* 22.2.1, 20.8.1 */
	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());

-	vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);

	set_cr4_guest_host_mask(vmx);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 20f1213a9368..cd73fe0c05b9 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -531,6 +531,24 @@ static inline void vmx_register_cache_reset(struct kvm_vcpu *vcpu)
	vcpu->arch.regs_dirty = 0;
 }

+static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
+{
+	unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS;
+
+	/*
+	 * CR0.WP needs to be intercepted when KVM is shadowing legacy paging
+	 * in order to construct shadow PTEs with the correct protections.
+	 * Note! CR0.WP technically can be passed through to the guest if
+	 * paging is disabled, but checking CR0.PG would generate a cyclical
+	 * dependency of sorts due to forcing the caller to ensure CR0 holds
+	 * the correct value prior to determining which CR0 bits can be owned
+	 * by L1. Keep it simple and limit the optimization to EPT.
+	 */
+	if (!enable_ept)
+		bits &= ~X86_CR0_WP;
+
+	return bits;
+}
+
 static inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm)
 {
	return container_of(kvm, struct kvm_vmx, kvm);

From patchwork Mon May 8 15:47:06 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234688
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org, Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.15 5/8] KVM: X86: Don't reset mmu context when X86_CR4_PCIDE 1->0
Date: Mon, 8 May 2023 17:47:06 +0200
Message-Id: <20230508154709.30043-6-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

[ Upstream commit 552617382c197949ff965a3559da8952bf3c1fa5 ]

X86_CR4_PCIDE doesn't participate in kvm_mmu_role, so the mmu context
doesn't need to be reset. It is only required to flush all the guest
tlb.
Signed-off-by: Lai Jiangshan Reviewed-by: Sean Christopherson Message-Id: <20210919024246.89230-2-jiangshanlai@gmail.com> Signed-off-by: Paolo Bonzini Signed-off-by: Mathias Krause --- arch/x86/kvm/x86.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 27900d4017a7..515033d01b12 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1081,9 +1081,10 @@ static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) void kvm_post_set_cr4(struct kvm_vcpu *vcpu, unsigned long old_cr4, unsigned long cr4) { - if (((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS) || - (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))) + if ((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS) kvm_mmu_reset_context(vcpu); + else if (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)) + kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu); } EXPORT_SYMBOL_GPL(kvm_post_set_cr4); From patchwork Mon May 8 15:47:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mathias Krause X-Patchwork-Id: 13234689 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24316C77B75 for ; Mon, 8 May 2023 15:47:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234623AbjEHPrw (ORCPT ); Mon, 8 May 2023 11:47:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234462AbjEHPrq (ORCPT ); Mon, 8 May 2023 11:47:46 -0400 Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com [IPv6:2a00:1450:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E0331BE5 for ; Mon, 8 May 2023 08:47:25 -0700 (PDT) Received: by mail-ed1-x52c.google.com with SMTP 
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org,
    Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.15 6/8] KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE
Date: Mon, 8 May 2023 17:47:07 +0200
Message-Id: <20230508154709.30043-7-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit a91a7c7096005113d8e749fd8dfdd3e1eecee263 ]

X86_CR4_PGE doesn't participate in kvm_mmu_role, so the mmu context
doesn't need to be reset when it is toggled.  Only a flush of all the
guest TLB entries is required.

It is also inconsistent that X86_CR4_PGE is in KVM_MMU_CR4_ROLE_BITS
while kvm_mmu_role doesn't use X86_CR4_PGE.  So X86_CR4_PGE is also
removed from KVM_MMU_CR4_ROLE_BITS.
Signed-off-by: Lai Jiangshan
Reviewed-by: Sean Christopherson
Message-Id: <20210919024246.89230-3-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/mmu.h | 5 ++---
 arch/x86/kvm/x86.c | 3 ++-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index baafc5d8bb9e..03a9e37e446a 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -44,9 +44,8 @@
 #define PT32_ROOT_LEVEL 2
 #define PT32E_ROOT_LEVEL 3

-#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | \
-			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE | \
-			       X86_CR4_LA57)
+#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PSE | X86_CR4_PAE | X86_CR4_LA57 | \
+			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE)

 #define KVM_MMU_CR0_ROLE_BITS (X86_CR0_PG | X86_CR0_WP)
 #define KVM_MMU_EFER_ROLE_BITS (EFER_LME | EFER_NX)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 515033d01b12..ea3bdc4f2284 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1083,7 +1083,8 @@ void kvm_post_set_cr4(struct kvm_vcpu *vcpu, unsigned long old_cr4, unsigned lon
 {
 	if ((cr4 ^ old_cr4) & KVM_MMU_CR4_ROLE_BITS)
 		kvm_mmu_reset_context(vcpu);
-	else if (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))
+	else if (((cr4 ^ old_cr4) & X86_CR4_PGE) ||
+		 (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
 		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_post_set_cr4);

From patchwork Mon May 8 15:47:08 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234690
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org,
    Mathias Krause, Lai Jiangshan
Subject: [PATCH 5.15 7/8] KVM: x86/mmu: Reconstruct shadow page root if the guest PDPTEs is changed
Date: Mon, 8 May 2023 17:47:08 +0200
Message-Id: <20230508154709.30043-8-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>

From: Lai Jiangshan

[ Upstream commit 6b123c3a89a90ac6418e4d64b1e23f09d458a77d ]

For shadow paging, the page table needs to be reconstructed before the
coming VMENTER if the guest PDPTEs are changed.  But not all paths that
call load_pdptrs() cause the page tables to be reconstructed.  Normally,
kvm_mmu_reset_context() and kvm_mmu_free_roots() are used to launch the
later reconstruction.

Commit d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW
are changed") skips kvm_mmu_reset_context() after load_pdptrs() when
changing CR0.CD and CR0.NW.

Commit 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current
PCID on MOV CR3 w/ flush") skips kvm_mmu_free_roots() after
load_pdptrs() when rewriting the CR3 with the same value.
Commit a91a7c709600 ("KVM: X86: Don't reset mmu context when toggling
X86_CR4_PGE") skips kvm_mmu_reset_context() after load_pdptrs() when
changing CR4.PGE.

Guests like Linux would keep the PDPTEs unchanged for every instance of
a pagetable, so this missing reconstruction is not a problem for Linux
guests.

Fixes: d81135a57aa6 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed")
Fixes: 21823fbda552 ("KVM: x86: Invalidate all PGDs for the current PCID on MOV CR3 w/ flush")
Fixes: a91a7c709600 ("KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE")
Suggested-by: Sean Christopherson
Signed-off-by: Lai Jiangshan
Message-Id: <20211216021938.11752-3-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/x86.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ea3bdc4f2284..a9f80a544ff1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -865,6 +865,13 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	}
 	ret = 1;

+	/*
+	 * Marking VCPU_EXREG_PDPTR dirty doesn't work for !tdp_enabled.
+	 * Shadow page roots need to be reconstructed instead.
+	 */
+	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
+		kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);
+
 	memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
 	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);

From patchwork Mon May 8 15:47:09 2023
X-Patchwork-Submitter: Mathias Krause
X-Patchwork-Id: 13234691
From: Mathias Krause
To: stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, kvm@vger.kernel.org,
    Mathias Krause
Subject: [PATCH 5.15 8/8] KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults
Date: Mon, 8 May 2023 17:47:09 +0200
Message-Id: <20230508154709.30043-9-minipli@grsecurity.net>
In-Reply-To: <20230508154709.30043-1-minipli@grsecurity.net>
References: <20230508154709.30043-1-minipli@grsecurity.net>

From: Sean Christopherson

[ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]

Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for
permission faults when emulating a guest memory access and CR0.WP may
be guest owned.  If the guest toggles only CR0.WP and triggers
emulation of a supervisor write, e.g. when KVM is emulating UMIP, KVM
may consume a stale CR0.WP, i.e. use stale protection bits metadata.

Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP
is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't
support per-bit interception controls for CR0.  Don't bother checking
for EPT vs. NPT as the "old == new" check will always be true under
NPT, i.e. the only cost is the read of vcpu->arch.cr4 (SVM
unconditionally grabs CR0 from the VMCB on VM-Exit).
Reported-by: Mathias Krause
Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.net
Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")
Tested-by: Mathias Krause
Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com
Signed-off-by: Sean Christopherson
Signed-off-by: Mathias Krause # backport to v5.15.x
---
- the MMU role wasn't folded into the CPU role yet in this kernel
  version and the "not_smap" handling was done slightly differently,
  but independently of the permission bitmap handling, so refreshing
  the bitmap prior to determining the fault state is still sufficient

 arch/x86/kvm/mmu.h     | 26 +++++++++++++++++++++++++-
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++++
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 03a9e37e446a..a3c0dc07fc96 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -76,6 +76,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
 int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 			  u64 fault_address, char *insn, int insn_len);
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu);

 int kvm_mmu_load(struct kvm_vcpu *vcpu);
 void kvm_mmu_unload(struct kvm_vcpu *vcpu);
@@ -176,6 +178,24 @@ static inline bool is_writable_pte(unsigned long pte)
 	return pte & PT_WRITABLE_MASK;
 }

+static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+						    struct kvm_mmu *mmu)
+{
+	/*
+	 * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e.
+	 * @mmu's snapshot of CR0.WP and thus all related paging metadata may
+	 * be stale.  Refresh CR0.WP and the metadata on-demand when checking
+	 * for permission faults.  Exempt nested MMUs, i.e. MMUs for shadowing
+	 * nEPT and nNPT, as CR0.WP is ignored in both cases.  Note, KVM does
+	 * need to refresh nested_mmu, a.k.a. the walker used to translate L2
+	 * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP.
+	 */
+	if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu)
+		return;
+
+	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+}
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
@@ -207,8 +227,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	unsigned long smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
 	int index = (pfec >> 1) +
 		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
+	bool fault;
+
+	kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+
+	fault = (mmu->permissions[index] >> pte_access) & 1;

 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
 	if (unlikely(mmu->pkru_mask)) {

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7c3b809f24b3..0e50b4dd01e5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4713,6 +4713,21 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
 	return role;
 }

+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
+					struct kvm_mmu *mmu)
+{
+	const bool cr0_wp = !!kvm_read_cr0_bits(vcpu, X86_CR0_WP);
+
+	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
+	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
+
+	if (is_cr0_wp(mmu) == cr0_wp)
+		return;
+
+	mmu->mmu_role.base.cr0_wp = cr0_wp;
+	reset_guest_paging_metadata(vcpu, mmu);
+}
+
 static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 	/* tdp_root_level is architecture forced level, use it if nonzero */