From patchwork Mon Nov 15 23:45:49 2021
X-Patchwork-Id: 12621213
Subject: [PATCH 01/15] KVM: x86/mmu: Remove redundant flushes when disabling dirty logging
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:49 -0800
Message-Id: <20211115234603.2908381-2-bgardon@google.com>

tdp_mmu_zap_spte_atomic flushes on every zap already, so no need to flush
again after it's done.

Reviewed-by: David Matlack
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c     |  4 +---
 arch/x86/kvm/mmu/tdp_mmu.c | 21 ++++++---------------
 arch/x86/kvm/mmu/tdp_mmu.h |  5 ++---
 3 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354d2ca92df4..baa94acab516 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5870,9 +5870,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
-		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
-		if (flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
 		read_unlock(&kvm->mmu_lock);
 	}
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7c5dd83e52de..b3c78568ae60 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1364,10 +1364,9 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
  * Clear leaf entries which could be replaced by large mappings, for
  * GFNs within the slot.
  */
-static bool zap_collapsible_spte_range(struct kvm *kvm,
+static void zap_collapsible_spte_range(struct kvm *kvm,
 				       struct kvm_mmu_page *root,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+				       const struct kvm_memory_slot *slot)
 {
 	gfn_t start = slot->base_gfn;
 	gfn_t end = start + slot->npages;
@@ -1378,10 +1377,8 @@ static bool zap_collapsible_spte_range(struct kvm *kvm,
 	tdp_root_for_each_pte(iter, root, start, end) {
 retry:
-		if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) {
-			flush = false;
+		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
-		}
 
 		if (!is_shadow_present_pte(iter.old_spte) ||
 		    !is_last_spte(iter.old_spte, iter.level))
@@ -1401,30 +1398,24 @@ static bool zap_collapsible_spte_range(struct kvm *kvm,
 			iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep));
 			goto retry;
 		}
-		flush = true;
 	}
 
 	rcu_read_unlock();
-
-	return flush;
 }
 
 /*
  * Clear non-leaf entries (and free associated page tables) which could
  * be replaced by large mappings, for GFNs within the slot.
  */
-bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot)
 {
 	struct kvm_mmu_page *root;
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
-		flush = zap_collapsible_spte_range(kvm, root, slot, flush);
-
-	return flush;
+		zap_collapsible_spte_range(kvm, root, slot);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 476b133544dd..3899004a5d91 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -64,9 +64,8 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 				       struct kvm_memory_slot *slot,
 				       gfn_t gfn, unsigned long mask,
 				       bool wrprot);
-bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       bool flush);
+void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot);
 
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
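The memslot-wide flush removed above is redundant because tdp_mmu_zap_spte_atomic()
issues a ranged remote TLB flush for every SPTE it zaps. A rough sketch of that
behaviour in this era of the TDP MMU (illustrative paraphrase of
arch/x86/kvm/mmu/tdp_mmu.c, not part of the patch):

/*
 * Sketch: every successful atomic zap already flushes the zapped range,
 * so callers need not flush the whole memslot afterwards.
 */
static bool tdp_mmu_zap_spte_atomic(struct kvm *kvm, struct tdp_iter *iter)
{
	if (!tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_SPTE))
		return false;

	/* Ranged flush covering the SPTE that was just zapped. */
	kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
					   KVM_PAGES_PER_HPAGE(iter->level));
	return true;
}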
From patchwork Mon Nov 15 23:45:50 2021
X-Patchwork-Id: 12621215
Subject: [PATCH 02/15] KVM: x86/mmu: Introduce vcpu_make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:50 -0800
Message-Id: <20211115234603.2908381-3-bgardon@google.com>

Add a wrapper around make_spte which conveys the vCPU-specific context of
the function. This will facilitate factoring out all uses of the vCPU
pointer from make_spte in subsequent commits.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c         |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h |  6 +++---
 arch/x86/kvm/mmu/spte.c        | 17 +++++++++++++----
 arch/x86/kvm/mmu/spte.h        | 12 ++++++++----
 arch/x86/kvm/mmu/tdp_mmu.c     |  7 ++++---
 5 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index baa94acab516..2ada6dee920a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2723,7 +2723,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		was_rmapped = 1;
 	}
 
-	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
+	wrprot = vcpu_make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
 			   true, host_writable, &spte);
 
 	if (*sptep == spte) {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f87d36898c44..edb8ebd1a775 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1129,9 +1129,9 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		spte = *sptep;
 		host_writable = spte & shadow_host_writable_mask;
 		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-		make_spte(vcpu, sp, slot, pte_access, gfn,
-			  spte_to_pfn(spte), spte, true, false,
-			  host_writable, &spte);
+		vcpu_make_spte(vcpu, sp, slot, pte_access, gfn,
+			       spte_to_pfn(spte), spte, true, false,
+			       host_writable, &spte);
 
 		flush |= mmu_spte_update(sptep, spte);
 	}
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 0c76c45fdb68..04d26e913941 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -90,10 +90,9 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 }
 
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-	       struct kvm_memory_slot *slot,
-	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte)
+	       struct kvm_memory_slot *slot, unsigned int pte_access,
+	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
+	       bool can_unsync, bool host_writable, u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -191,6 +190,16 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	return wrprot;
 }
 
+bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+		    struct kvm_memory_slot *slot, unsigned int pte_access,
+		    gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
+		    bool can_unsync, bool host_writable, u64 *new_spte)
+{
+	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
+			 prefetch, can_unsync, host_writable, new_spte);
+
+}
+
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
 {
 	u64 spte = SPTE_MMU_PRESENT_MASK;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index cc432f9a966b..14f18082d505 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -330,10 +330,14 @@ static inline u64 get_mmio_spte_generation(u64 spte)
 }
 
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-	       struct kvm_memory_slot *slot,
-	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte);
+	       struct kvm_memory_slot *slot, unsigned int pte_access,
+	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
+	       bool can_unsync, bool host_writable, u64 *new_spte);
+bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+		    struct kvm_memory_slot *slot,
+		    unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
+		    u64 old_spte, bool prefetch, bool can_unsync,
+		    bool host_writable, u64 *new_spte);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b3c78568ae60..43c7834b4f0a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -906,9 +906,10 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (unlikely(!fault->slot))
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
-		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+		wrprot = vcpu_make_spte(vcpu, sp, fault->slot, ACC_ALL,
+					iter->gfn, fault->pfn, iter->old_spte,
+					fault->prefetch, true,
+					fault->map_writable, &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
From patchwork Mon Nov 15 23:45:51 2021
X-Patchwork-Id: 12621217
Subject: [PATCH 03/15] KVM: x86/mmu: Factor wrprot for nested PML out of make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:51 -0800
Message-Id: <20211115234603.2908381-4-bgardon@google.com>

When running a nested VM, KVM write protects SPTEs in the EPT/NPT02 instead
of using PML for dirty tracking. This avoids expensive translation later,
when emptying the Page Modification Log.

In service of removing the vCPU pointer from make_spte, factor the check
for nested PML out of the function.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/spte.c | 10 +++++++---
 arch/x86/kvm/mmu/spte.h |  3 ++-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 04d26e913941..3cf08a534a16 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -92,7 +92,8 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
-	       bool can_unsync, bool host_writable, u64 *new_spte)
+	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
+	       u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -100,7 +101,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	if (sp->role.ad_disabled)
 		spte |= SPTE_TDP_AD_DISABLED_MASK;
-	else if (kvm_vcpu_ad_need_write_protect(vcpu))
+	else if (ad_need_write_protect)
 		spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK;
 
 	/*
@@ -195,8 +196,11 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 		    bool can_unsync, bool host_writable, u64 *new_spte)
 {
+	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
+
 	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
-			 prefetch, can_unsync, host_writable, new_spte);
+			 prefetch, can_unsync, host_writable,
+			 ad_need_write_protect, new_spte);
 
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 14f18082d505..bcf58602f224 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -332,7 +332,8 @@ static inline u64 get_mmio_spte_generation(u64 spte)
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
-	       bool can_unsync, bool host_writable, u64 *new_spte);
+	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
+	       u64 *new_spte);
 bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    struct kvm_memory_slot *slot,
 		    unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
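For reference, the helper whose result is now computed in the wrapper looks
roughly like the sketch below. The body is reconstructed from mmu_internal.h of
this kernel era (the cpu_dirty_log_size context line is visible in patch 06's
diff), so treat the exact wording as an assumption rather than a quote:

static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
{
	/*
	 * When the CPU dirty log (PML) is available but the vCPU is running
	 * on the shadowed EPT02 (guest_mmu), hardware would log L2 GFNs, so
	 * dirty tracking must fall back to write protection instead.
	 */
	return vcpu->arch.mmu == &vcpu->arch.guest_mmu &&
	       kvm_x86_ops.cpu_dirty_log_size;
}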
From patchwork Mon Nov 15 23:45:52 2021
X-Patchwork-Id: 12621219
Subject: [PATCH 04/15] KVM: x86/mmu: Factor mt_mask out of make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:52 -0800
Message-Id: <20211115234603.2908381-5-bgardon@google.com>

In service of removing the vCPU pointer from make_spte, factor the memory
type mask calculation out of make_spte.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/spte.c | 9 +++++----
 arch/x86/kvm/mmu/spte.h | 2 +-
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 3cf08a534a16..75c666d3e7f1 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -93,7 +93,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 *new_spte)
+	       u64 mt_mask, u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -130,8 +130,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
-		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
+		spte |= mt_mask;
 
 	if (host_writable)
 		spte |= shadow_host_writable_mask;
@@ -197,10 +196,12 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    bool can_unsync, bool host_writable, u64 *new_spte)
 {
 	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
+	u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
+						       kvm_is_mmio_pfn(pfn));
 
 	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
 			 prefetch, can_unsync, host_writable,
-			 ad_need_write_protect, new_spte);
+			 ad_need_write_protect, mt_mask, new_spte);
 
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index bcf58602f224..e739f2ebf844 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -333,7 +333,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 *new_spte);
+	       u64 mt_mask, u64 *new_spte);
 bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    struct kvm_memory_slot *slot,
 		    unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
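Assembled from the hunks in patches 03 and 04, the wrapper has now accumulated
both vCPU-derived inputs. This consolidated view is simply the two diffs applied
together, shown for readability:

bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
		    struct kvm_memory_slot *slot, unsigned int pte_access,
		    gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
		    bool can_unsync, bool host_writable, u64 *new_spte)
{
	/* vCPU-derived inputs hoisted out of make_spte() so far. */
	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
	u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
						       kvm_is_mmio_pfn(pfn));

	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
			 prefetch, can_unsync, host_writable,
			 ad_need_write_protect, mt_mask, new_spte);
}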
From patchwork Mon Nov 15 23:45:53 2021
X-Patchwork-Id: 12621221
Subject: [PATCH 05/15] KVM: x86/mmu: Remove need for a vcpu from kvm_slot_page_track_is_active
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:53 -0800
Message-Id: <20211115234603.2908381-6-bgardon@google.com>

kvm_slot_page_track_is_active only uses its vCPU argument to get a pointer
to the associated struct kvm, so just pass in the struct kvm to remove the
need for a vCPU pointer.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_page_track.h | 2 +-
 arch/x86/kvm/mmu/mmu.c                | 4 ++--
 arch/x86/kvm/mmu/page_track.c         | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 9d4a3b1b25b9..e99a30a4d38b 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -63,7 +63,7 @@ void kvm_slot_page_track_add_page(struct kvm *kvm,
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
 				     struct kvm_memory_slot *slot, gfn_t gfn,
 				     enum kvm_page_track_mode mode);
-bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu,
+bool kvm_slot_page_track_is_active(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
 				   enum kvm_page_track_mode mode);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2ada6dee920a..7d0da79668c0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2587,7 +2587,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	 * track machinery is used to write-protect upper-level shadow pages,
 	 * i.e. this guards the role.level == 4K assertion below!
 	 */
-	if (kvm_slot_page_track_is_active(vcpu, slot, gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
 		return -EPERM;
 
 	/*
@@ -3884,7 +3884,7 @@ static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
 	 * guest is writing the page which is write tracked which can
 	 * not be fixed by page fault handler.
 	 */
-	if (kvm_slot_page_track_is_active(vcpu, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE))
 		return true;
 
 	return false;
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index cc4eb5b7fb76..35c221d5f6ce 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -173,7 +173,7 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page);
 /*
  * check if the corresponding access on the specified guest page is tracked.
  */
-bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu,
+bool kvm_slot_page_track_is_active(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
 				   enum kvm_page_track_mode mode)
 {
@@ -186,7 +186,7 @@ bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu,
 		return false;
 
 	if (mode == KVM_PAGE_TRACK_WRITE &&
-	    !kvm_page_track_write_tracking_enabled(vcpu->kvm))
+	    !kvm_page_track_write_tracking_enabled(kvm))
 		return false;
 
 	index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K);
From patchwork Mon Nov 15 23:45:54 2021
X-Patchwork-Id: 12621227
Subject: [PATCH 06/15] KVM: x86/mmu: Remove need for a vcpu from mmu_try_to_unsync_pages
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:54 -0800
Message-Id: <20211115234603.2908381-7-bgardon@google.com>

The vCPU argument to mmu_try_to_unsync_pages is now only used to get a
pointer to the associated struct kvm, so pass in the kvm pointer from the
beginning to remove the need for a vCPU when calling the function.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 16 ++++++++--------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/spte.c         |  2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7d0da79668c0..1e890509b93f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2561,10 +2561,10 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 	return r;
 }
 
-static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	trace_kvm_mmu_unsync_page(sp);
-	++vcpu->kvm->stat.mmu_unsync;
+	++kvm->stat.mmu_unsync;
 	sp->unsync = 1;
 
 	kvm_mmu_mark_parents_unsync(sp);
@@ -2576,7 +2576,7 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
  * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
  * be write-protected.
  */
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    gfn_t gfn, bool can_unsync, bool prefetch)
 {
 	struct kvm_mmu_page *sp;
@@ -2587,7 +2587,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	 * track machinery is used to write-protect upper-level shadow pages,
 	 * i.e. this guards the role.level == 4K assertion below!
 	 */
-	if (kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
 		return -EPERM;
 
 	/*
@@ -2596,7 +2596,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	 * that case, KVM must complete emulation of the guest TLB flush before
 	 * allowing shadow pages to become unsync (writable by the guest).
 	 */
-	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
+	for_each_gfn_indirect_valid_sp(kvm, sp, gfn) {
 		if (!can_unsync)
 			return -EPERM;
 
@@ -2615,7 +2615,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		 */
 		if (!locked) {
 			locked = true;
-			spin_lock(&vcpu->kvm->arch.mmu_unsync_pages_lock);
+			spin_lock(&kvm->arch.mmu_unsync_pages_lock);
 
 			/*
 			 * Recheck after taking the spinlock, a different vCPU
@@ -2630,10 +2630,10 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		}
 
 		WARN_ON(sp->role.level != PG_LEVEL_4K);
-		kvm_unsync_page(vcpu, sp);
+		kvm_unsync_page(kvm, sp);
 	}
 	if (locked)
-		spin_unlock(&vcpu->kvm->arch.mmu_unsync_pages_lock);
+		spin_unlock(&kvm->arch.mmu_unsync_pages_lock);
 
 	/*
 	 * We need to ensure that the marking of unsync pages is visible
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 52c6527b1a06..1073d10cce91 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -118,7 +118,7 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 		kvm_x86_ops.cpu_dirty_log_size;
 }
 
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    gfn_t gfn, bool can_unsync, bool prefetch);
 
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 75c666d3e7f1..b7271daa06c5 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -160,7 +160,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * e.g. it's write-tracked (upper-level SPs) or has one or more
 	 * shadow pages and unsync'ing pages is not allowed.
 	 */
-	if (mmu_try_to_unsync_pages(vcpu, slot, gfn, can_unsync, prefetch)) {
+	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
 		pgprintk("%s: found shadow page for %llx, marking ro\n",
 			 __func__, gfn);
 		wrprot = true;
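Taken together, patches 05 and 06 leave everything below make_spte() on the
unsync path vCPU-free. The resulting call chain, read directly from the diffs
above, is:

/*
 * make_spte(vcpu, ...)                      (still takes a vCPU for now)
 *   -> mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, ...)
 *        -> kvm_slot_page_track_is_active(kvm, slot, gfn, ...)
 *        -> kvm_unsync_page(kvm, sp)
 */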
From patchwork Mon Nov 15 23:45:55 2021
X-Patchwork-Id: 12621233
Subject: [PATCH 07/15] KVM: x86/mmu: Factor shadow_zero_check out of make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:55 -0800
Message-Id: <20211115234603.2908381-8-bgardon@google.com>

In the interest of developing a version of make_spte that can function
without a vCPU pointer, factor out the shadow_zero_check to be an
additional argument to the function.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/spte.c | 11 +++++++----
 arch/x86/kvm/mmu/spte.h |  3 ++-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index b7271daa06c5..d3b059e96c6e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -93,7 +93,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 mt_mask, u64 *new_spte)
+	       u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check,
+	       u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -176,9 +177,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (prefetch)
 		spte = mark_spte_for_access_track(spte);
 
-	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
+	WARN_ONCE(is_rsvd_spte(shadow_zero_check, spte, level),
 		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
-		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
+		  get_rsvd_bits(shadow_zero_check, spte, level));
 
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
@@ -198,10 +199,12 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
 	u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
 						       kvm_is_mmio_pfn(pfn));
+	struct rsvd_bits_validate *shadow_zero_check = &vcpu->arch.mmu->shadow_zero_check;
 
 	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
 			 prefetch, can_unsync, host_writable,
-			 ad_need_write_protect, mt_mask, new_spte);
+			 ad_need_write_protect, mt_mask, shadow_zero_check,
+			 new_spte);
 
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index e739f2ebf844..6134a10487c4 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -333,7 +333,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 mt_mask, u64 *new_spte);
+	       u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check,
+	       u64 *new_spte);
 bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    struct kvm_memory_slot *slot,
 		    unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
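After this patch the only thing make_spte() still reaches through the vCPU is
vcpu->kvm, which patch 08 swaps for a plain struct kvm pointer. For readability,
the prototype at this point, assembled from the spte.h hunks so far, is:

bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
	       struct kvm_memory_slot *slot, unsigned int pte_access,
	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
	       u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check,
	       u64 *new_spte);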
From patchwork Mon Nov 15 23:45:56 2021
X-Patchwork-Id: 12621231
Subject: [PATCH 08/15] KVM: x86/mmu: Replace vcpu argument with kvm pointer in make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:56 -0800
Message-Id: <20211115234603.2908381-9-bgardon@google.com>

Now that nothing in make_spte actually needs the vCPU argument, just pass
in a pointer to the struct kvm. This allows the function to be used in
situations where there is no relevant struct vcpu.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/spte.c | 8 ++++----
 arch/x86/kvm/mmu/spte.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index d3b059e96c6e..d98723b14cec 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -89,7 +89,7 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 			     E820_TYPE_RAM);
 }
 
-bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
@@ -161,7 +161,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * e.g. it's write-tracked (upper-level SPs) or has one or more
 	 * shadow pages and unsync'ing pages is not allowed.
 	 */
-	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
+	if (mmu_try_to_unsync_pages(kvm, slot, gfn, can_unsync, prefetch)) {
 		pgprintk("%s: found shadow page for %llx, marking ro\n",
 			 __func__, gfn);
 		wrprot = true;
@@ -184,7 +184,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
 		WARN_ON(level > PG_LEVEL_4K);
-		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
+		mark_page_dirty_in_slot(kvm, slot, gfn);
 	}
 
 	*new_spte = spte;
@@ -201,7 +201,7 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 						       kvm_is_mmio_pfn(pfn));
 	struct rsvd_bits_validate *shadow_zero_check = &vcpu->arch.mmu->shadow_zero_check;
 
-	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
+	return make_spte(vcpu->kvm, sp, slot, pte_access, gfn, pfn, old_spte,
 			 prefetch, can_unsync, host_writable,
 			 ad_need_write_protect, mt_mask, shadow_zero_check,
 			 new_spte);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 6134a10487c4..5bb055688080 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -329,7 +329,7 @@ static inline u64 get_mmio_spte_generation(u64 spte)
 	return gen;
 }
 
-bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
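With make_spte() taking a struct kvm, a path that has no vCPU at hand can in
principle invoke it directly. A purely hypothetical sketch of such a call (not
code from this series; every formerly vCPU-derived input is now the caller's
responsibility, and the variable sources here are assumptions):

	/* kvm, sp, slot, gfn, pfn, mt_mask and shadow_zero_check all come
	 * from the hypothetical vCPU-less caller, e.g. a zap/promotion path. */
	wrprot = make_spte(kvm, sp, slot, ACC_ALL, gfn, pfn, old_spte,
			   false /* prefetch */, true /* can_unsync */,
			   true  /* host_writable */,
			   false /* ad_need_write_protect */,
			   mt_mask, shadow_zero_check, &new_spte);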
From patchwork Mon Nov 15 23:45:57 2021
X-Patchwork-Id: 12621223
Subject: [PATCH 09/15] KVM: x86/mmu: Factor out the meat of reset_tdp_shadow_zero_bits_mask
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack, Mingwei Zhang,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand, Ben Gardon
Date: Mon, 15 Nov 2021 15:45:57 -0800
Message-Id: <20211115234603.2908381-10-bgardon@google.com>

Factor out the implementation of reset_tdp_shadow_zero_bits_mask to a
helper function which does not require a vCPU pointer. The only element of
the struct kvm_mmu context used by the function is the shadow root level,
so pass that in too instead of the mmu context.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e890509b93f..fdf0f15ab19d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4450,17 +4450,14 @@ static inline bool boot_cpu_is_amd(void)
  * possible, however, kvm currently does not do execution-protection.
  */
 static void
-reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
-				struct kvm_mmu *context)
+build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,
+				int shadow_root_level)
 {
-	struct rsvd_bits_validate *shadow_zero_check;
 	int i;
 
-	shadow_zero_check = &context->shadow_zero_check;
-
 	if (boot_cpu_is_amd())
 		__reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
-					context->shadow_root_level, false,
+					shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					false, true);
 	else
@@ -4470,12 +4467,20 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 	if (!shadow_me_mask)
 		return;
 
-	for (i = context->shadow_root_level; --i >= 0;) {
+	for (i = shadow_root_level; --i >= 0;) {
 		shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask;
 		shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask;
 	}
 }
 
+static void
+reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
+				struct kvm_mmu *context)
+{
+	build_tdp_shadow_zero_bits_mask(&context->shadow_zero_check,
+					context->shadow_root_level);
+}
+
 /*
  * as the comments in reset_shadow_zero_bits_mask() except it
  * is the shadow page table for intel nested guest.
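A hypothetical use of the new helper by a vCPU-less path (illustrative only;
where the paging depth comes from is an assumption, not something this patch
provides): it can build the reserved-bits table that make_spte() now consumes
as shadow_zero_check without ever touching a struct kvm_mmu.

	struct rsvd_bits_validate shadow_zero_check;

	/* shadow_root_level would come from however the caller knows its TDP
	 * paging depth; that plumbing is left to later patches. */
	build_tdp_shadow_zero_bits_mask(&shadow_zero_check, shadow_root_level);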
From patchwork Mon Nov 15 23:45:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621229 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4B76C433F5 for ; Tue, 16 Nov 2021 03:16:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AB0E361C12 for ; Tue, 16 Nov 2021 03:16:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245661AbhKPDTV (ORCPT ); Mon, 15 Nov 2021 22:19:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239034AbhKPDSE (ORCPT ); Mon, 15 Nov 2021 22:18:04 -0500 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 00042C125D5E for ; Mon, 15 Nov 2021 15:46:25 -0800 (PST) Received: by mail-pl1-x649.google.com with SMTP id y6-20020a17090322c600b001428ab3f888so6875316plg.8 for ; Mon, 15 Nov 2021 15:46:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+OKOVpu+rqKFCqXdsvqkypiAF8JDNqlIsCpMpjb8VLA=; b=WxJ0KprsO1Mp/EhTFoXcCLjjKauA8lxq37bsHlwfjPMreMOtBfVZJ9OndfvBRtccbD 2CIcrXt8tppOWKQbEiNksJDQacp2mwxBRQtNrriyDsiL3IQDaJQ2yadv7ijKQWiFMLoz whGc+1COWgy/2nG/6R2dw2hsfWtdrLQ6EtseVeIKkB2e5ynMJPASYnIToIVWkylQB8/g aL1hNH2eNDFOR/pnvQqPm/j7mN3USgEp/wA2Hj4LGr3qh+4Cw+az0QLgM6wvDeUQIQJZ c9Ov6oY7TLd7InF6MDRR8GIr16P3STwdRWxtfdx53bHZbSZ9UtllyqtE/d78BYs9DC6j gSSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=+OKOVpu+rqKFCqXdsvqkypiAF8JDNqlIsCpMpjb8VLA=; b=2xA1+JkvXJ6koRIoLpjZHyk2qznTMQlUJ4sYw9ZEJfUkHYdwXFtOSaQnNYfi9WuAaA nHog0oT0Q+XklR5mqYAjwNN+UsmVHhFO7voQaRXcTsuuHtYPJ4D98suzYqkOT4ZoUgIX 2EmYqO7A4MffsS1WSbaG/VQ/7ih39hz7E9PcP/aq3bCtHQyMweSAqnWEMjAr9nfwSaI5 EdLiG24epSPYtNRyBzdmiHRGU/amzvPl8zV4TAFX3mTlc5GxlteQe1T9jScA2RYnaNwd d0TQzYp4HhUwpPKP1SusWK6UBn5Gq1OxHkKEmUIgoGCSb4Ku1IUJbhqP+qxawdymPjJp S/YQ== X-Gm-Message-State: AOAM531T10BzJuL7RnZbhOyHrE/OXQIlypSfH3jGbmn92IMFtNH4VGRr e7UkZPm2IZwq3nZ2TZ0mkn5vpn84IzmY X-Google-Smtp-Source: ABdhPJxOhmPP2Kns1hVgSK4SgG30TWor9gDWzgUZRJolzYH+LUA73qqn+Xsj8rF2l7Wc2HeZdTRZ7ZddbCLV X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90a:3b02:: with SMTP id d2mr3018851pjc.159.1637019985502; Mon, 15 Nov 2021 15:46:25 -0800 (PST) Date: Mon, 15 Nov 2021 15:45:58 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-11-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 10/15] KVM: x86/mmu: Propagate memslot const qualifier From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon 
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In preparation for implementing in-place hugepage promotion, various functions will need to be called from zap_collapsible_spte_range, which has the const qualifier on its memslot argument. Propagate the const qualifier to the various functions which will be needed. This just serves to simplify the following patch. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_page_track.h | 4 ++-- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/mmu/page_track.c | 4 ++-- arch/x86/kvm/mmu/spte.c | 2 +- arch/x86/kvm/mmu/spte.h | 2 +- include/linux/kvm_host.h | 10 +++++----- virt/kvm/kvm_main.c | 12 ++++++------ 8 files changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index e99a30a4d38b..eb186bc57f6a 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -64,8 +64,8 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode); bool kvm_slot_page_track_is_active(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode); + const struct kvm_memory_slot *slot, + gfn_t gfn, enum kvm_page_track_mode mode); void kvm_page_track_register_notifier(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index fdf0f15ab19d..ef7a84422463 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2576,7 +2576,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must * be write-protected. */ -int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch) { struct kvm_mmu_page *sp; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1073d10cce91..6563cce9c438 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -118,7 +118,7 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu) kvm_x86_ops.cpu_dirty_log_size; } -int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch); void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index 35c221d5f6ce..68eb1fb548b6 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -174,8 +174,8 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page); * check if the corresponding access on the specified guest page is tracked. 
*/ bool kvm_slot_page_track_is_active(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode) + const struct kvm_memory_slot *slot, + gfn_t gfn, enum kvm_page_track_mode mode) { int index; diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index d98723b14cec..7be41d2dbb02 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -90,7 +90,7 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) } bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, unsigned int pte_access, + const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 5bb055688080..d7598506fbad 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -330,7 +330,7 @@ static inline u64 get_mmio_spte_generation(u64 spte) } bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, unsigned int pte_access, + const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 60a35d9fe259..675da38fac7f 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -435,7 +435,7 @@ struct kvm_memory_slot { u16 as_id; }; -static inline bool kvm_slot_dirty_track_enabled(struct kvm_memory_slot *slot) +static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; } @@ -855,9 +855,9 @@ void kvm_set_page_accessed(struct page *page); kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable); -kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn); -kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn); -kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, +kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn); +kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn); +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, bool *writable, hva_t *hva); @@ -934,7 +934,7 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn); bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn); bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn); -void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot, gfn_t gfn); +void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot, gfn_t gfn); void mark_page_dirty(struct kvm *kvm, gfn_t gfn); struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 3f6d450355f0..6dbf8cba1900 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2138,12 +2138,12 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn) return size; } -static bool memslot_is_readonly(struct kvm_memory_slot *slot) +static bool memslot_is_readonly(const struct kvm_memory_slot *slot) { 
return slot->flags & KVM_MEM_READONLY; } -static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn, +static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn, gfn_t *nr_pages, bool write) { if (!slot || slot->flags & KVM_MEMSLOT_INVALID) @@ -2438,7 +2438,7 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, return pfn; } -kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, bool *writable, hva_t *hva) { @@ -2478,13 +2478,13 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); -kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn) +kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); -kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn) +kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn) { return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL); } @@ -3079,7 +3079,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len) EXPORT_SYMBOL_GPL(kvm_clear_guest); void mark_page_dirty_in_slot(struct kvm *kvm, - struct kvm_memory_slot *memslot, + const struct kvm_memory_slot *memslot, gfn_t gfn) { if (memslot && kvm_slot_dirty_track_enabled(memslot)) { From patchwork Mon Nov 15 23:45:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621225 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07D17C433F5 for ; Tue, 16 Nov 2021 03:16:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DF40361C12 for ; Tue, 16 Nov 2021 03:16:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245471AbhKPDTQ (ORCPT ); Mon, 15 Nov 2021 22:19:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56010 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238938AbhKPDSE (ORCPT ); Mon, 15 Nov 2021 22:18:04 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BE8DAC125D60 for ; Mon, 15 Nov 2021 15:46:27 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id a12-20020a17090aa50cb0290178fef5c227so306030pjq.1 for ; Mon, 15 Nov 2021 15:46:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=0zwyK1+5SQSFi33zyn3A2qd649otgAUCjnWlf13vnt4=; b=eLYV4+JIwHSuMcMpNKKlGM6iy8jUFnvLlNL8Aq95E7jZBtaCwbmdVfVVHCF0LY6qeS 3WfargDXoaDwa5wtyd3HN/VegDEqqGFbzcS/kLiu3JhWuUFMG6L0WyaXb3yNzpF1UDxT Db2GTF8u0GsI7+1GaPha6eX1CuiWHO0L2WKt6Y4uN9V+ZNgXMYhy/MHgOF4zHqz2Bjfq sIV146dWVag2Y+XUfHbjrCHJxZYXvRlqp0QspMblFxm8D8oQtbRuLUlNuqmZFtee2cT8 Jz/BRbS5ImG9Dx2qndjtt2CeQ/C0ULPdJZdB0vP4AHrrZRioItj9bcxJbirYqucQBIxJ jOXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=0zwyK1+5SQSFi33zyn3A2qd649otgAUCjnWlf13vnt4=; b=XHE0tqjNvDbmVF/+iNR+LlQwJw37oQU19IJR/iOebB4YT4zCimFOXShhYZY5F8BoFF CZb48UfZuxPdeNWS9xHWD5uMidz1JQOZdahlk0ji5TBxnz+4go/dWifV23DXrAw7JjF0 eNEJ3jgm6RWwzV1iJrojmAB69JZO0VAKGZkSrfU3TPH2ODu43JyxqXFqvinm+QnuihNp LKSxSZ1acW1QMOLNhpjvcia3/d39h2jUHG2iaetwEqD8V7/RuwJYztyq6zfKx4cqvo/l P3bXHerSQ9tHHCPew4SbT/fp2GqFn2TTPoEOmc8EoU8MzqnolWc3O5mGHfR+f3A9Axds aGbQ== X-Gm-Message-State: AOAM531aHUz5DnJIPVtRAhVzbkayL9l7I94j+JeyKLFGAJmcQk8tcPVb NNSjy7tw/FeZe4uPV+r16SQFvUCljde1 X-Google-Smtp-Source: ABdhPJy0ArkW29tRlngLJfDrzxQXp4ZPpLuHRsLyZfvuNRCXPgtMO6AdXTagSFmsnZPpyQZO6mAqrpn/wzPb X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90a:800a:: with SMTP id b10mr70386185pjn.162.1637019987257; Mon, 15 Nov 2021 15:46:27 -0800 (PST) Date: Mon, 15 Nov 2021 15:45:59 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-12-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 11/15] KVM: x86/MMU: Refactor vmx_get_mt_mask From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Remove the gotos from vmx_get_mt_mask to make it easier to separate out the parts which do not depend on vcpu state. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/vmx/vmx.c | 23 +++++++---------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 71f54d85f104..77f45c005f28 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6987,7 +6987,6 @@ static int __init vmx_check_processor_compat(void) static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { u8 cache; - u64 ipat = 0; /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in * memory aliases with conflicting memory types and sometimes MCEs. @@ -7007,30 +7006,22 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) * EPT memory type is used to emulate guest CD/MTRR. 
*/ - if (is_mmio) { - cache = MTRR_TYPE_UNCACHABLE; - goto exit; - } + if (is_mmio) + return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; - if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) { - ipat = VMX_EPT_IPAT_BIT; - cache = MTRR_TYPE_WRBACK; - goto exit; - } + if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) + return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; if (kvm_read_cr0(vcpu) & X86_CR0_CD) { - ipat = VMX_EPT_IPAT_BIT; if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) cache = MTRR_TYPE_WRBACK; else cache = MTRR_TYPE_UNCACHABLE; - goto exit; - } - cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn); + return (cache << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + } -exit: - return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat; + return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT; } static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx, u32 new_ctl) From patchwork Mon Nov 15 23:46:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A48DC433EF for ; Tue, 16 Nov 2021 03:16:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A54761A4E for ; Tue, 16 Nov 2021 03:16:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238297AbhKPDTv (ORCPT ); Mon, 15 Nov 2021 22:19:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237713AbhKPDSE (ORCPT ); Mon, 15 Nov 2021 22:18:04 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73A78C125D62 for ; Mon, 15 Nov 2021 15:46:29 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id f206-20020a6238d7000000b004a02dd7156bso9198085pfa.5 for ; Mon, 15 Nov 2021 15:46:29 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=oOs80LZVdiQ4d3q5aPfHR+bsPyznZ55Wbv8FEL1qlus=; b=mF0hAH9QLpjpmeGlCCFxRTBcxrivS+KQjuYb3LUyFj60om6rSZzrwpaKaBVGYAQUVo G/jC+HFg+lm6Eh2AC0N5nsncCcaS+7aJjaHsxkhKySmn7+tC3Ddrr8oSwGr3v++mQeB5 lFsvnM9HWxS0dljDtQkQQ0bwGg/Sgo5aQO4sfF6FaxwPMUxaIYWpBW1GYeIUbuzqjJK8 qS58pJD5z3G1SgA7GNRk6j4QwQGHpxgKWdH8SWYqdN5PuT0UuvA0VmwE2uTRHQj0vqGE N2WKHWraNlXVzUWPsNL7fO4Kk8yIac9Peis4ZIoo14kBEVt7cfgkr724txZS75XvQKTc qupQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=oOs80LZVdiQ4d3q5aPfHR+bsPyznZ55Wbv8FEL1qlus=; b=rTTZxEblBMzb0hMBIne2PstTDZsDs6RCS5L8RZ15QgmzyuyrGoIFGk1scqkQNQGmlU Q/dcSe8oYHZunsQp3Td1cUG7IrI9gYmHMcCteZP/UhsOmg4deyDK9a+plJoix6avTSzg b3GMqNdaSFk0cb2XMZsD3GPu7mYoZrsVoILAQFyP6OE4oCpw6dtM1f1VvcvlZCJZYHBa diQg/3KPUp1yg03qoeuaNXWPrjrZdyQf6sRX4VWLmoSvuDpN0OXD76cagLf2U/FX4YT5 FDiBcb9dqKoL5yq+tC7nZbOkdMUvr4YKWakO/k5qOVV22xBAuVdTSq8WKjWBAyCxgMdE hz7g== X-Gm-Message-State: AOAM533bWUI9+XJHJlSlzkfOFlqZFpAVfiy6Wfy5JsNTs1+Qs40e/X3a OrNR2YhO+mL40LP24CYh5+QJ3XpwvOqy X-Google-Smtp-Source: 
ABdhPJzRGCVAwdaAydunYFkfEvbpn17w5k9kC3M5j5eufSSBuuTXWSLhdlcmuG9m8skndRSZpj8BmuH1DPQP X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90a:6e0c:: with SMTP id b12mr15821912pjk.41.1637019988883; Mon, 15 Nov 2021 15:46:28 -0800 (PST) Date: Mon, 15 Nov 2021 15:46:00 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-13-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 12/15] KVM: x86/mmu: Factor out part of vmx_get_mt_mask which does not depend on vcpu From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Factor out the parts of vmx_get_mt_mask which do not depend on the vCPU argument. This also requires adding some error reporting to the helper function to say whether it was possible to generate the MT mask without a vCPU argument. This refactoring will allow the MT mask to be computed when noncoherent DMA is not enabled on a VM. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/vmx/vmx.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 77f45c005f28..4129614262e8 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6984,9 +6984,26 @@ static int __init vmx_check_processor_compat(void) return 0; } +static bool vmx_try_get_mt_mask(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask) +{ + if (is_mmio) { + *mask = MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; + return true; + } + + if (!kvm_arch_has_noncoherent_dma(kvm)) { + *mask = (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + return true; + } + + return false; +} + static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { u8 cache; + u64 mask; /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in * memory aliases with conflicting memory types and sometimes MCEs. @@ -7006,11 +7023,8 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) * EPT memory type is used to emulate guest CD/MTRR. 
*/ - if (is_mmio) - return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; - - if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) - return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + if (vmx_try_get_mt_mask(vcpu->kvm, gfn, is_mmio, &mask)) + return mask; if (kvm_read_cr0(vcpu) & X86_CR0_CD) { if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) From patchwork Mon Nov 15 23:46:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 954E1C433EF for ; Tue, 16 Nov 2021 03:17:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72B5761C12 for ; Tue, 16 Nov 2021 03:17:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244648AbhKPDTy (ORCPT ); Mon, 15 Nov 2021 22:19:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55536 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238823AbhKPDSH (ORCPT ); Mon, 15 Nov 2021 22:18:07 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80BF9C125D64 for ; Mon, 15 Nov 2021 15:46:31 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id hg9-20020a17090b300900b001a6aa0b7d8cso685271pjb.2 for ; Mon, 15 Nov 2021 15:46:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Dlsx6G0Aa6YAW2eF9MRy8C2GdRBAN+E2yIu8b7+b9gE=; b=AEPSb9gB9X+mcoeEkrhUu6b+vkvNQpe19neoqdvuZtdk8rgBcyqmFEQqDZoShMgBCu HREY/EiiIwsModnlEeROFHIQBCUq8L9TIJmwLikrEdzt361/YGsezjGOKqfuowRZtoK5 SS4gsvFbD606g0JsRcmIj2deIeblTjqimUdvAxUhiZ9Dk6MoEao7lPnuQ0qj+C07aQVt CiBZCpRJdTH1X0Ybb6PtPVmtrS5v0+60kQZ6vpM6x5Xa6ybC/I8Iw940CH7vD2nA4RTV 4XaSQzbfht6xxSyB9CLzmuSLQBZSFmvGE/1Cjtv4I+zt6A+TElEvfa9ivyC49Iiy2zEX cORg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Dlsx6G0Aa6YAW2eF9MRy8C2GdRBAN+E2yIu8b7+b9gE=; b=MrW6tCN4yfYWhba2HNNqh1DefVRjokMdsqNC10x/Ok1yhzJuV1HVykcDDm9LK6PE8j DtMsryC0ecn2iwMH3dSVNwJB5qhM4LBTCP3hEIbo06uPxmz5Se8Gjojkt+Kx9CPsPgq2 w330aWyi5uD3vZcJ+GJ7S4kzruUWdzDpEhGS5WvSEQOFdTdzAu6usm05jicxq5BgDaka YKD8CLL7Svac+nEe8w0YCfPcwk0REZWAl1+V6e6HSWf6ljWn7ZcDYGIRD2l4W6LeNkR2 elvzcEoKLqnpjtojwi85CftfAxcwEfuG2T5Wa9ZXFWv3eV/6QIDSkCBzX8/vsHpzAzro egYA== X-Gm-Message-State: AOAM530/0s47hrzOabeLnPWmR1+gKXX437aNa9FzfX3QUGlOOPUEGJfb s2mbKzPrOsqYVIWm6J2qbSZBFkI2NqFR X-Google-Smtp-Source: ABdhPJxR7BgjoBwQ9B89WxUa2H67v65qgcqcKHTSHeJIPSHgSFYipbBYjqUIToRFiR6eHE45WHICziu1KB4s X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90a:284f:: with SMTP id p15mr5719pjf.1.1637019990603; Mon, 15 Nov 2021 15:46:30 -0800 (PST) Date: Mon, 15 Nov 2021 15:46:01 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-14-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 
2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 13/15] KVM: x86/mmu: Add try_get_mt_mask to x86_ops From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add another function for getting the memory type mask to x86_ops. This version of the function can fail, but it does not require a vCPU pointer. It will be used in a subsequent commit for in-place large page promotion when disabling dirty logging. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm-x86-ops.h | 1 + arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/svm/svm.c | 8 ++++++++ arch/x86/kvm/vmx/vmx.c | 1 + 4 files changed, 12 insertions(+) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index cefe1d81e2e8..c86e9629ff1a 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -84,6 +84,7 @@ KVM_X86_OP_NULL(sync_pir_to_irr) KVM_X86_OP(set_tss_addr) KVM_X86_OP(set_identity_map_addr) KVM_X86_OP(get_mt_mask) +KVM_X86_OP(try_get_mt_mask) KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_NULL(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 88fce6ab4bbd..ae13075f4d4c 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1400,6 +1400,8 @@ struct kvm_x86_ops { int (*set_tss_addr)(struct kvm *kvm, unsigned int addr); int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr); u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio); + bool (*try_get_mt_mask)(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask); void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 21bb81710e0f..d073cc3985e6 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4067,6 +4067,13 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index) return true; } +static bool svm_try_get_mt_mask(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask) +{ + *mask = 0; + return true; +} + static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; @@ -4660,6 +4667,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .set_tss_addr = svm_set_tss_addr, .set_identity_map_addr = svm_set_identity_map_addr, .get_mt_mask = svm_get_mt_mask, + .try_get_mt_mask = svm_try_get_mt_mask, .get_exit_info = svm_get_exit_info, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 4129614262e8..8cd6c1f50d3e 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7658,6 +7658,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .set_tss_addr = vmx_set_tss_addr, .set_identity_map_addr = vmx_set_identity_map_addr, .get_mt_mask = vmx_get_mt_mask, + .try_get_mt_mask = vmx_try_get_mt_mask, .get_exit_info = vmx_get_exit_info, From patchwork Mon Nov 15 23:46:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
B389CC4332F for ; Tue, 16 Nov 2021 03:17:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 96EA661A4E for ; Tue, 16 Nov 2021 03:17:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346295AbhKPDT5 (ORCPT ); Mon, 15 Nov 2021 22:19:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56042 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238944AbhKPDSI (ORCPT ); Mon, 15 Nov 2021 22:18:08 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E43F9C125D66 for ; Mon, 15 Nov 2021 15:46:32 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id p12-20020a17090b010c00b001a65bfe8054so677619pjz.8 for ; Mon, 15 Nov 2021 15:46:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=z5GMv6NbiMRoCg4DiG3eqEEOfHTG8i5CBe13GQYa+1k=; b=A2uLziMDfGCU+rrQ2OKXdWg6gzg6DIqlGF5SKsaSoBg7uWPPxDJY0s748vK6+QQSOT xVg/6WqCuaWnBf1aaRCxe9xf7+yvVjEDPObAjPhySWzzmEz3J9u6ElblqUWlvjVY7gwQ 9vmqP0gILIWkpqZSxKMwhFsYVRsxmzG1kGVsOQ9RkVhdZp5YZlMfLwDa9/owMnejuIjF Z/lLZlFfojUzhd72kFpUzI+tuPxreNL6xJRK2UkPOweqdhf/ZuW4ZbOZV0w9qjWt/Iav 94QkxePOPuEsMnQQXmsxrwGip68cpaX8/znoVlNLgXNzuZrx410YEZT+0NYZ0TcODjgj S1PA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=z5GMv6NbiMRoCg4DiG3eqEEOfHTG8i5CBe13GQYa+1k=; b=HibSbj7L+EXtO5ZPynPtb6Eik2wwrkd/yPaelbBIIHXmO1bt+qCK4ikCicKb3IfQAB jXahyn2tDI4rETLyhXhzyWTpGQDPHU9NVGWEhkR1R/VL6+WbG5MJ7u4Jy1anrLVg0Ze7 RHG3PNqumr/bHCHBNlfbKGDUfFZGDzLuQAqfsO2VFqM2wAHmUGw5NHdcjKX4J9wbkuLW JeGHQv69PQTCdRySvkMPQeDU12EthrgDrTsiXke5yl+5zYcmMb9QCQHAAGD3C9s9BEzM 58Btyg/5QrZD9NCmO6Ng9x4w8abDkRl25hBOCUvHQTTVvxIj2327KuQktrIwA/BC3kB5 yaMw== X-Gm-Message-State: AOAM533DuEwIlWANQ+s1BHPIqRRV9HBTxPtJ+sYIOO8PQ13Qbj6dd7cs Q6WAJImWJooB8CkTAomhHoXJf5JWizTM X-Google-Smtp-Source: ABdhPJxHaYh7TLQhMk8Ll5lZtJAQgEuc1s0oBzh/SNTUN5Q0ypeYOJUqaOndyUFkEtIVwn11rQrJFTghwIqK X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90b:1b4a:: with SMTP id nv10mr3225743pjb.118.1637019992439; Mon, 15 Nov 2021 15:46:32 -0800 (PST) Date: Mon, 15 Nov 2021 15:46:02 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-15-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 14/15] KVM: x86/mmu: Make kvm_is_mmio_pfn usable outside of spte.c From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Export kvm_is_mmio_pfn from spte.c. It will be used in a subsequent commit for in-place lpage promotion when disabling dirty logging. 
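Tying this back to the try_get_mt_mask patches above: the exported kvm_is_mmio_pfn is what will supply the is_mmio argument when the promotion path asks for a memory type without a vCPU. The sketch below is a userspace-only model of that vCPU-free decision; the model_* names are invented for illustration, and only the bit layout it assumes (memory type in EPT bits 5:3, ignore-PAT in bit 6) reflects the real encoding.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative constants following the EPT leaf layout; not kernel headers. */
#define MODEL_MT_UC	0ULL		/* MTRR_TYPE_UNCACHABLE */
#define MODEL_MT_WB	6ULL		/* MTRR_TYPE_WRBACK */
#define MODEL_MT_SHIFT	3
#define MODEL_IPAT_BIT	(1ULL << 6)

static bool model_try_get_mt_mask(bool is_mmio, bool has_noncoherent_dma,
				  uint64_t *mask)
{
	if (is_mmio) {
		*mask = MODEL_MT_UC << MODEL_MT_SHIFT;
		return true;
	}
	if (!has_noncoherent_dma) {
		*mask = (MODEL_MT_WB << MODEL_MT_SHIFT) | MODEL_IPAT_BIT;
		return true;
	}
	/* Guest CD/MTRR state would be needed here, i.e. a vCPU. */
	return false;
}

int main(void)
{
	uint64_t mask;

	/* RAM on a VM without noncoherent DMA: write-back, ignore PAT. */
	if (model_try_get_mt_mask(false, false, &mask))
		printf("ram:  0x%llx\n", (unsigned long long)mask);

	/* MMIO is always uncacheable, vCPU or not. */
	if (model_try_get_mt_mask(true, true, &mask))
		printf("mmio: 0x%llx\n", (unsigned long long)mask);

	/* RAM with noncoherent DMA attached: bail, a vCPU would be required. */
	printf("dma needs vcpu: %d\n",
	       !model_try_get_mt_mask(false, true, &mask));
	return 0;
}

When noncoherent DMA is attached, the guest's CD/MTRR state matters, so the helper refuses rather than guessing, which is exactly why the new hook is allowed to fail.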
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 2 +- arch/x86/kvm/mmu/spte.h | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 7be41d2dbb02..13b6143f6333 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -68,7 +68,7 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access) return spte; } -static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) +bool kvm_is_mmio_pfn(kvm_pfn_t pfn) { if (pfn_valid(pfn)) return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) && diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index d7598506fbad..909c24c733c4 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -347,4 +347,5 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn); void kvm_mmu_reset_all_pte_masks(void); +bool kvm_is_mmio_pfn(kvm_pfn_t pfn); #endif From patchwork Mon Nov 15 23:46:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12621241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A457FC433F5 for ; Tue, 16 Nov 2021 03:17:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8474761C12 for ; Tue, 16 Nov 2021 03:17:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345867AbhKPDUB (ORCPT ); Mon, 15 Nov 2021 22:20:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238962AbhKPDSJ (ORCPT ); Mon, 15 Nov 2021 22:18:09 -0500 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0995CC125D68 for ; Mon, 15 Nov 2021 15:46:35 -0800 (PST) Received: by mail-pg1-x549.google.com with SMTP id i25-20020a631319000000b002cce0a43e94so9945074pgl.0 for ; Mon, 15 Nov 2021 15:46:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bYB0Frpqgq4a9NUDlxvo/lp43pZWFoSQ8nRPnBtD0cE=; b=Y3uEfQ4qdN/MW6wyOA1SBXTNb7fxPkPA51UQ0JGqP8UKNnikesEn8bD2t1yaalrc8r 5Rbc8Ih5HzvQsN34ap2ytRDWucDss1XeVYmG1C1MEFaqp5w6FxmN5SBDOhFjBE8FTHvG nJPcEH+Ukd6BUOwF3vPs4plTdvREalHUafnhHfSwVsjmMuaGEis++HMDkUYE5yQ7gst3 x1XthBkH8lcDvCiFAu9c9QI+FLWDxAgq+EU+MV70hr+ODDtVeq8suQurcyPXaNHO7JPH dTIS44ctV3Fu5EUiwS/4qa0g45UgzDA6uoPBI2mxE6DUBTEnhyfxXhibamdUOTYs/1mr AUdg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bYB0Frpqgq4a9NUDlxvo/lp43pZWFoSQ8nRPnBtD0cE=; b=w1VYuKATy5eVhTaa+ozkXUf8Rdl9mqDnUqNpjyQ3xk0LkpOMpDMrnaPErHh9W9eRRI mUZRiHtyImqb7Er2EggySCaAQyklCVDjBny9VdQch4CV3aVkFfqCnADybZIKV24nr7xQ wzJUJvDVMVWCWRRXVxYEasP/ivxEeqSo5LyBWS06zKg/C8mGOdq+HfOkwX9Iof/Imh4r 2W60ulsLnTFDNH3nZHTlZpTBmDBHif/D2to+2cN8e1+u5MPdO7HUe9fx9dnkvEF+eAQI EfOzDoFdEZ3C8M5PaM58XikE9TkCnKOblCSfH4m0EgWb8Fi9Ljdr/u1W358LRHC12M7y 5XnA== X-Gm-Message-State: AOAM5335MOBrfTf/qnyghevxQ9yM2Hhrb6BCbaRzTIMQObjn5CAVhR2i FmwGmeiqT3McqilW3kIrsK+9Vdpi0v9Z X-Google-Smtp-Source: 
ABdhPJy1jTOodcUWrB2s9eN8S9v5ypDiz6UXWEZRvh6J0jgZ4nSz4s2XF54h5kwMFoX8A5QHcgUANCgCLZ/b X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:916d:2253:5849:9965]) (user=bgardon job=sendgmr) by 2002:a17:90a:909:: with SMTP id n9mr3107782pjn.1.1637019994545; Mon, 15 Nov 2021 15:46:34 -0800 (PST) Date: Mon, 15 Nov 2021 15:46:03 -0800 In-Reply-To: <20211115234603.2908381-1-bgardon@google.com> Message-Id: <20211115234603.2908381-16-bgardon@google.com> Mime-Version: 1.0 References: <20211115234603.2908381-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [PATCH 15/15] KVM: x86/mmu: Promote pages in-place when disabling dirty logging From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When disabling dirty logging, the TDP MMU currently zaps each leaf entry mapping memory in the relevant memslot. This is very slow. Doing the zaps under the mmu read lock requires a TLB flush for every zap and the zapping causes a storm of EPT/NPT violations. Instead of zapping, replace the split large pages with large page mappings directly. While this sort of operation has historically only been done in the vCPU page fault handler context, refactorings earlier in this series and the relative simplicity of the TDP MMU make it possible here as well. Running the dirty_log_perf_test on an Intel Skylake with 96 vCPUs and 1G of memory per vCPU, this reduces the time required to disable dirty logging from over 45 seconds to just over 1 second. It also avoids provoking page faults, improving vCPU performance while disabling dirty logging. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmu_internal.h | 4 ++ arch/x86/kvm/mmu/tdp_mmu.c | 69 ++++++++++++++++++++++++++++++++- 3 files changed, 72 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ef7a84422463..add724aa9e8c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4449,7 +4449,7 @@ static inline bool boot_cpu_is_amd(void) * the direct page table on host, use as much mmu features as * possible, however, kvm currently does not do execution-protection.
*/ -static void +void build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check, int shadow_root_level) { diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 6563cce9c438..84d439432acf 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -161,4 +161,8 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +void +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check, + int shadow_root_level); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 43c7834b4f0a..b15c8cd11cf9 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1361,6 +1361,66 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot); } +static void try_promote_lpage(struct kvm *kvm, + const struct kvm_memory_slot *slot, + struct tdp_iter *iter) +{ + struct kvm_mmu_page *sp = sptep_to_sp(iter->sptep); + struct rsvd_bits_validate shadow_zero_check; + /* + * Since the TDP MMU doesn't manage nested PTs, there's no need to + * write protect for a nested VM when PML is in use. + */ + bool ad_need_write_protect = false; + bool map_writable; + kvm_pfn_t pfn; + u64 new_spte; + u64 mt_mask; + + /* + * If addresses are being invalidated, don't do in-place promotion to + * avoid accidentally mapping an invalidated address. + */ + if (unlikely(kvm->mmu_notifier_count)) + return; + + pfn = __gfn_to_pfn_memslot(slot, iter->gfn, true, NULL, true, + &map_writable, NULL); + + /* + * Can't reconstitute an lpage if the constituent pages can't be + * mapped higher. + */ + if (iter->level > kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, + pfn, PG_LEVEL_NUM)) + return; + + build_tdp_shadow_zero_bits_mask(&shadow_zero_check, iter->root_level); + + /* + * In some cases, a vCPU pointer is required to get the MT mask, + * however in most cases it can be generated without one. If a + * vCPU pointer is needed kvm_x86_try_get_mt_mask will fail. + * In that case, bail on in-place promotion. + */ + if (unlikely(!static_call(kvm_x86_try_get_mt_mask)(kvm, iter->gfn, + kvm_is_mmio_pfn(pfn), + &mt_mask))) + return; + + make_spte(kvm, sp, slot, ACC_ALL, iter->gfn, pfn, 0, false, true, + map_writable, ad_need_write_protect, mt_mask, + &shadow_zero_check, &new_spte); + + tdp_mmu_set_spte_atomic(kvm, iter, new_spte); + + /* + * Re-read the SPTE to avoid recursing into one of the removed child + * page tables. + */ + iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep)); +} + /* * Clear leaf entries which could be replaced by large mappings, for * GFNs within the slot. @@ -1381,9 +1441,14 @@ static void zap_collapsible_spte_range(struct kvm *kvm, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!is_shadow_present_pte(iter.old_spte)) + continue; + + /* Try to promote the constituent pages to an lpage. */ + if (!is_last_spte(iter.old_spte, iter.level)) { + try_promote_lpage(kvm, slot, &iter); continue; + } pfn = spte_to_pfn(iter.old_spte); if (kvm_is_reserved_pfn(pfn) ||
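To close out the series, here is a compilable, userspace-only restatement of the order of bail-out checks in try_promote_lpage; every model_* type and helper is a stand-in invented for illustration, and the SPTE it builds is a toy value rather than the real make_spte encoding.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model of the promotion flow: bail while invalidation is in
 * flight, bail if the backing memory cannot be mapped at this level, bail
 * if the memory type would need a vCPU, and only then install a huge SPTE.
 */
struct model_vm {
	unsigned long mmu_notifier_count;
	bool has_noncoherent_dma;
};

struct model_iter {
	int level;	/* level of the non-leaf SPTE being replaced */
	uint64_t gfn;
};

static int model_max_mapping_level(const struct model_vm *vm, uint64_t gfn)
{
	/* Pretend the host backs this gfn with a 2M page (level 2). */
	return 2;
}

static bool model_try_get_mt_mask(const struct model_vm *vm, bool is_mmio,
				  uint64_t *mask)
{
	if (is_mmio || !vm->has_noncoherent_dma) {
		*mask = 0;	/* encoding elided in this model */
		return true;
	}
	return false;		/* would need a vCPU */
}

static bool model_try_promote(const struct model_vm *vm,
			      const struct model_iter *it, uint64_t pfn,
			      bool is_mmio, uint64_t *new_spte)
{
	uint64_t mt_mask;

	if (vm->mmu_notifier_count)		/* invalidation in progress */
		return false;
	if (it->level > model_max_mapping_level(vm, it->gfn))
		return false;			/* can't map this large */
	if (!model_try_get_mt_mask(vm, is_mmio, &mt_mask))
		return false;			/* memory type needs a vCPU */

	/* Stand-in for make_spte() + tdp_mmu_set_spte_atomic(). */
	*new_spte = (pfn << 12) | (1ULL << 7) | mt_mask | 0x7;
	return true;
}

int main(void)
{
	struct model_vm vm = { .mmu_notifier_count = 0,
			       .has_noncoherent_dma = false };
	struct model_iter it = { .level = 2, .gfn = 0x40000 };
	uint64_t spte;

	if (model_try_promote(&vm, &it, 0x40000, false, &spte))
		printf("promoted to huge spte 0x%llx\n",
		       (unsigned long long)spte);
	return 0;
}

The design point worth noting is that every input normally gathered in the vCPU fault path either has a vCPU-free equivalent (the reserved-bits table, the memory-type mask) or simply causes the promotion to be skipped, falling back to the pre-existing zap-and-refault behavior for that range.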