From patchwork Fri Nov 19 23:57:45 2021
Subject: [RFC PATCH 01/15] KVM: x86/mmu: Rename rmap_write_protect to kvm_vcpu_write_protect_gfn
From: David Matlack
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
 Vitaly Kuznetsov, Sean Christopherson, Janis Schoetterl-Glausch,
 Junaid Shahid, Oliver Upton, Harish Barathvajasankar, Peter Xu,
 Peter Shier, David Matlack
Date: Fri, 19 Nov 2021 23:57:45 +0000
Message-Id: <20211119235759.1304274-2-dmatlack@google.com>
In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com>

rmap_write_protect is a poor name because we may not even
touch the rmap if the TDP MMU is in use. It is also confusing that
rmap_write_protect is not a simple wrapper around __rmap_write_protect,
since that is the typical flow for functions with double-underscore names.
Rename it to kvm_vcpu_write_protect_gfn to convey that we are
write-protecting a specific gfn in the context of a vCPU.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Ben Gardon
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8f0035517450..16ffb571bc75 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1427,7 +1427,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	return write_protected;
 }
 
-static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn)
+static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
 {
 	struct kvm_memory_slot *slot;
 
@@ -2026,7 +2026,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 		bool protected = false;
 
 		for_each_sp(pages, sp, parents, i)
-			protected |= rmap_write_protect(vcpu, sp->gfn);
+			protected |= kvm_vcpu_write_protect_gfn(vcpu, sp->gfn);
 
 		if (protected) {
 			kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, true);
@@ -2153,7 +2153,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	hlist_add_head(&sp->hash_link, sp_list);
 	if (!direct) {
 		account_shadowed(vcpu->kvm, sp);
-		if (level == PG_LEVEL_4K && rmap_write_protect(vcpu, gfn))
+		if (level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
 	trace_kvm_mmu_get_page(sp, true);

From patchwork Fri Nov 19 23:57:46 2021
Subject: [RFC PATCH 02/15] KVM: x86/mmu: Rename __rmap_write_protect to rmap_write_protect
From: David Matlack
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
 Vitaly Kuznetsov, Sean Christopherson, Janis Schoetterl-Glausch,
 Junaid Shahid, Oliver Upton, Harish Barathvajasankar, Peter Xu,
 Peter Shier, David Matlack
Date: Fri, 19 Nov 2021 23:57:46 +0000
Message-Id: <20211119235759.1304274-3-dmatlack@google.com>
In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com>

Now that rmap_write_protect has been renamed, there is no need for the
double underscores in front of __rmap_write_protect.

No functional change intended.
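
For example, after this patch and the previous one the write-protection
helpers relate roughly as follows. This is an illustrative sketch only;
the body below paraphrases the existing helper rather than quoting the
diff:

	/*
	 * rmap_write_protect()             - walks one rmap list
	 *                                    (was __rmap_write_protect)
	 * kvm_mmu_slot_gfn_write_protect() - write-protects a gfn in a memslot,
	 *                                    via the rmaps and/or the TDP MMU
	 * kvm_vcpu_write_protect_gfn()     - resolves the vCPU's memslot, then
	 *                                    delegates (was rmap_write_protect)
	 */
	static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
	{
		struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);

		return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn,
						      PG_LEVEL_4K);
	}

The net effect is that the double-underscore name disappears and each
helper's scope (one rmap list, a gfn in a memslot, a gfn as seen by a
vCPU) is visible in its name.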
Signed-off-by: David Matlack Reviewed-by: Ben Gardon Reviewed-by: Peter Xu --- arch/x86/kvm/mmu/mmu.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 16ffb571bc75..1146f87044a6 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1235,9 +1235,9 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect) return mmu_spte_update(sptep, spte); } -static bool __rmap_write_protect(struct kvm *kvm, - struct kvm_rmap_head *rmap_head, - bool pt_protect) +static bool rmap_write_protect(struct kvm *kvm, + struct kvm_rmap_head *rmap_head, + bool pt_protect) { u64 *sptep; struct rmap_iterator iter; @@ -1317,7 +1317,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, while (mask) { rmap_head = gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask), PG_LEVEL_4K, slot); - __rmap_write_protect(kvm, rmap_head, false); + rmap_write_protect(kvm, rmap_head, false); /* clear the first set bit */ mask &= mask - 1; @@ -1416,7 +1416,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, if (kvm_memslots_have_rmaps(kvm)) { for (i = min_level; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { rmap_head = gfn_to_rmap(gfn, i, slot); - write_protected |= __rmap_write_protect(kvm, rmap_head, true); + write_protected |= rmap_write_protect(kvm, rmap_head, true); } } @@ -5780,7 +5780,7 @@ static bool slot_rmap_write_protect(struct kvm *kvm, struct kvm_rmap_head *rmap_head, const struct kvm_memory_slot *slot) { - return __rmap_write_protect(kvm, rmap_head, false); + return rmap_write_protect(kvm, rmap_head, false); } void kvm_mmu_slot_remove_write_access(struct kvm *kvm, From patchwork Fri Nov 19 23:57:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629869 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52092C433EF for ; Fri, 19 Nov 2021 23:58:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236382AbhKTABd (ORCPT ); Fri, 19 Nov 2021 19:01:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47300 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236209AbhKTABR (ORCPT ); Fri, 19 Nov 2021 19:01:17 -0500 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65170C06173E for ; Fri, 19 Nov 2021 15:58:15 -0800 (PST) Received: by mail-pl1-x649.google.com with SMTP id m15-20020a170902bb8f00b0014382b67873so5338379pls.19 for ; Fri, 19 Nov 2021 15:58:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+5FkbEhx1+9GLVuvk8L93fKbVlYearXqDKwYBbqQGyM=; b=J47SqzCVE674c+S+p5XhPXNPtiUpiobr5ZLjHVmBMWqoDbo7Lo/B8LgRidDs4r/Lf8 3VbgHm3sqq3B6iQYOcnOGYLdQQkS0mvaGY18DUt5nFkSClyk+dvy3ZyuPtVyUSWeDyWC f+KP84lqd29GRxChGs0+5a5vG4aEyBHn+mFBf1G3zVB9O2Fr9Lkwz5nfjRmi1Ejbv9k9 dAtmDiTGvTJf7rpXpkS0qpGbMPMrMGe3W3hg+RGszAtpZaUjTOuSX+k+nZ382eYwNcTW kUUSSVkaOknfJZY9mD1DQWWQN2+PrRR9/b75tTcdvZ7Sgy6X3j5Amf9chVez/hruJxte UM7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version 
:references:subject:from:to:cc; bh=+5FkbEhx1+9GLVuvk8L93fKbVlYearXqDKwYBbqQGyM=; b=RCLjszOOn9S2gp4DwkCb2YRBIhCJEZufJjtjjLcMpLne687brqdkDZYJCfn/+ggZ0U Eg4QowP6+vr60dx8YY3tFzCBaJ5C0WlAhI/H8TfWLwqHg0KEenJaHwujIN71uRAd9Cpb 1ZH+aNT6iOs7EvkQ8/m0DRGO4j/8sDe1mfeMrwRiGkTEanfuwZgyrdfmc4uPtVlrACPn 1pYiApCKHWQXUuRwxiT2QDLshokh30xFvpC4sOc4TRY+uIqofZvwmYPYWHWEOFOR3mb2 Tg/EcUknTLTB6A2eYjudop1zdKpRFxoz7k6G+wNwECVUCpahLrfSxhfRN/3uMKGIdz+1 i6xw== X-Gm-Message-State: AOAM531lATvV2iXgTevApOrSvP39ZujB9UCvzTNAgnuyA84BFyF97hEl Afz625VokTpTEPMiv1L1EQBkV/Ts/jSoMg== X-Google-Smtp-Source: ABdhPJxxYB6OpokHiTIXOsuahJdGdcZ/cwDyR5jXqj8GkgVPFTmZ6RbpEcJdgasomqfuV7TJM5DMuIunJ0wKIw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:714f:b0:142:892d:a46 with SMTP id u15-20020a170902714f00b00142892d0a46mr80956847plm.39.1637366294873; Fri, 19 Nov 2021 15:58:14 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:47 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-4-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 03/15] KVM: x86/mmu: Automatically update iter->old_spte if cmpxchg fails From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Consolidate a bunch of code that was manually re-reading the spte if the cmpxchg fails. There is no extra cost of doing this because we already have the spte value as a result of the cmpxchg (and in fact this eliminates re-reading the spte), and none of the call sites depend on iter->old_spte retaining the stale spte value. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 56 ++++++++++++-------------------------- 1 file changed, 18 insertions(+), 38 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 377a96718a2e..cc9fe33c9b36 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -492,16 +492,22 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, * and handle the associated bookkeeping. Do not mark the page dirty * in KVM's dirty bitmaps. * + * If setting the SPTE fails because it has changed, iter->old_spte will be + * updated with the updated value of the spte. + * * @kvm: kvm instance * @iter: a tdp_iter instance currently on the SPTE that should be set * @new_spte: The value the SPTE should be set to * Returns: true if the SPTE was set, false if it was not. If false is returned, - * this function will have no side-effects. + * this function will have no side-effects other than updating + * iter->old_spte to the latest value of spte. */ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { + u64 old_spte; + lockdep_assert_held_read(&kvm->mmu_lock); /* @@ -515,9 +521,11 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and * does not hold the mmu_lock. 
*/ - if (cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte, - new_spte) != iter->old_spte) + old_spte = cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte, new_spte); + if (old_spte != iter->old_spte) { + iter->old_spte = old_spte; return false; + } __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, new_spte, iter->level, true); @@ -747,14 +755,8 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if (!shared) { tdp_mmu_set_spte(kvm, &iter, 0); flush = true; - } else if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { - /* - * The iter must explicitly re-read the SPTE because - * the atomic cmpxchg failed. - */ - iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + } else if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) goto retry; - } } rcu_read_unlock(); @@ -978,13 +980,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) is_large_pte(iter.old_spte)) { if (!tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter)) break; - - /* - * The iter must explicitly re-read the spte here - * because the new value informs the !present - * path below. - */ - iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); } if (!is_shadow_present_pte(iter.old_spte)) { @@ -1190,14 +1185,9 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) { - /* - * The iter must explicitly re-read the SPTE because - * the atomic cmpxchg failed. - */ - iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; - } + spte_set = true; } @@ -1258,14 +1248,9 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, continue; } - if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) { - /* - * The iter must explicitly re-read the SPTE because - * the atomic cmpxchg failed. - */ - iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; - } + spte_set = true; } @@ -1391,14 +1376,9 @@ static bool zap_collapsible_spte_range(struct kvm *kvm, pfn, PG_LEVEL_NUM)) continue; - if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { - /* - * The iter must explicitly re-read the SPTE because - * the atomic cmpxchg failed. 
- */ - iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) goto retry; - } + flush = true; } From patchwork Fri Nov 19 23:57:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629871 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 227A7C433EF for ; Fri, 19 Nov 2021 23:58:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236442AbhKTABk (ORCPT ); Fri, 19 Nov 2021 19:01:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236227AbhKTABU (ORCPT ); Fri, 19 Nov 2021 19:01:20 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 26E64C061574 for ; Fri, 19 Nov 2021 15:58:17 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id v23-20020a170902bf9700b001421d86afc4so5356893pls.9 for ; Fri, 19 Nov 2021 15:58:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=IFiCOA44RFYNwaRr/Yb1BDlw+EFlbcyt6Uo3IPFhDZo=; b=kuGzEGQ65pTKEh6gsEmMPWzBQLJ3tfPfO2YP4JqYZllDteIFsyo3EJiTaJQeP2RLyL jHZumiyvBi/3tq0Y482P0/oT6o0tMb3p7NeqUaUrF1hDRHL1dmfk70JhvGACb3SMAb8w MG73XS+erymKTKp9F3LikG9NdwRIrYjtpd01twH1ExPmcguk9mpSPaTehYDzYd/XNN4A yhnF1AulmXv1wkB8MPhU/t6HJymMNks9QyEQn1sXd8tf9Vqdb4L7rGYmpxaOwQ+trQwE i6ExWO28eLjOedEqr3JM9IuzXccsSFUu4rKGvp7/E/oXnH4uIWD6ERILyFhIqn+bFHFa +CgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=IFiCOA44RFYNwaRr/Yb1BDlw+EFlbcyt6Uo3IPFhDZo=; b=GMheO/agsugWp9zMsFiuLbiI2DJ+7sbhwT2OZWJ3RNb6g42AyiS5TjlR2mRZnGOi06 SGcUmsrMrTMFhcUpx+nVA8AWiXx63FJU5CfNgaK3m3m/YcNLwyOoum4szv+YqFK1VOr0 OO+5NBMviDYylbWFY8/yeNuM0XhZPVL1p6nwJzfU4kh2mYDd623EJ7rnOHP9Z4PH4nt9 Dh3hccz1H1zNf2IN1jbrJjvBF+bLk2FWxbQ1LzN3gzFRLsubDXc7JmlqSjZ4zB4HdDqC 7gjOJ/nBOZRxukLcVLdXHvc+xDJjDeRCL4nPzND42kFTUnjMn5nRWaiaQZoeTaIwtNk8 zOBQ== X-Gm-Message-State: AOAM531M7mJk8AJEKOIdnqVsX2pK84vlaHBOZnS5k/KvGW89kZyQzj2X via5N/G8d8rMI7dnlzZ5EUgP5Yy1aiG7PQ== X-Google-Smtp-Source: ABdhPJydB1K+/B/2hmZVYyzVCOl2RrQKg/VTodVN+yYJAtI40Y49nXiXylnIcWmFVSJtOTp5jQ2hu7NnRLXGVA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:8d85:b0:142:892d:bfa with SMTP id v5-20020a1709028d8500b00142892d0bfamr82529714plo.76.1637366296582; Fri, 19 Nov 2021 15:58:16 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:48 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-5-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 04/15] KVM: x86/mmu: Factor out logic to atomically install a new page table From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver 
Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Factor out the logic to atomically replace an SPTE with an SPTE that points to a new page table. This will be used in a follow-up commit to split a large page SPTE into one level lower. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 53 ++++++++++++++++++++++++++------------ 1 file changed, 37 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index cc9fe33c9b36..9ee3f4f7fdf5 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -945,6 +945,39 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, return ret; } +/* + * tdp_mmu_install_sp_atomic - Atomically replace the given spte with an + * spte pointing to the provided page table. + * + * @kvm: kvm instance + * @iter: a tdp_iter instance currently on the SPTE that should be set + * @sp: The new TDP page table to install. + * @account_nx: True if this page table is being installed to split a + * non-executable huge page. + * + * Returns: True if the new page table was installed. False if spte being + * replaced changed, causing the atomic compare-exchange to fail. + * If this function returns false the sp will be freed before + * returning. + */ +static bool tdp_mmu_install_sp_atomic(struct kvm *kvm, + struct tdp_iter *iter, + struct kvm_mmu_page *sp, + bool account_nx) +{ + u64 spte; + + spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask); + + if (tdp_mmu_set_spte_atomic(kvm, iter, spte)) { + tdp_mmu_link_page(kvm, sp, account_nx); + return true; + } else { + tdp_mmu_free_sp(sp); + return false; + } +} + /* * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing * page tables and SPTEs to translate the faulting guest physical address. 
@@ -954,8 +987,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) struct kvm_mmu *mmu = vcpu->arch.mmu; struct tdp_iter iter; struct kvm_mmu_page *sp; - u64 *child_pt; - u64 new_spte; int ret; kvm_mmu_hugepage_adjust(vcpu, fault); @@ -983,6 +1014,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } if (!is_shadow_present_pte(iter.old_spte)) { + bool account_nx = fault->huge_page_disallowed && + fault->req_level >= iter.level; + /* * If SPTE has been frozen by another thread, just * give up and retry, avoiding unnecessary page table @@ -992,21 +1026,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) break; sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1); - child_pt = sp->spt; - - new_spte = make_nonleaf_spte(child_pt, - !shadow_accessed_mask); - - if (tdp_mmu_set_spte_atomic(vcpu->kvm, &iter, new_spte)) { - tdp_mmu_link_page(vcpu->kvm, sp, - fault->huge_page_disallowed && - fault->req_level >= iter.level); - - trace_kvm_mmu_get_page(sp, true); - } else { - tdp_mmu_free_sp(sp); + if (!tdp_mmu_install_sp_atomic(vcpu->kvm, &iter, sp, account_nx)) break; - } } } From patchwork Fri Nov 19 23:57:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629873 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B93DC433FE for ; Fri, 19 Nov 2021 23:58:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236501AbhKTABt (ORCPT ); Fri, 19 Nov 2021 19:01:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47318 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236152AbhKTABV (ORCPT ); Fri, 19 Nov 2021 19:01:21 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5AC8C06173E for ; Fri, 19 Nov 2021 15:58:18 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id pg9-20020a17090b1e0900b001a689204b52so7484114pjb.0 for ; Fri, 19 Nov 2021 15:58:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bEq6kwsGlJixZVgAv3CUce1H9nK4ADw+62+7N53XAYs=; b=GKm3oFtEl/vKHCcbQsCpBhn9CrwXLXZPY7uYiR+Le8/eDyIr1HsVjvi/00exiiwOhF F1RhN25/KJRp2zP/+edLUlvK7i5hSlIWdtLGq8beOwPbLlsAc8+4UJeZ5BP6Ei7B8CFZ JUx2LZ/TwYYvWqjEvGTYHR87J3g3ihDTGMUa2nr4MO2FllPVgiIJeXhIBXB6t3yBphLT cwSFEHvv0Glh5Cbm5HflMIslN9iI8zAm66pJX4BHGVfbmvU0GDdPL14kAlcB7bJPTPfR Ew9kMW8ytp14yTU4eqizaGooQf2urkC+JBRmCAyMG+w7DqyiL8vzHVPQRoL2cpis+JD5 mcog== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bEq6kwsGlJixZVgAv3CUce1H9nK4ADw+62+7N53XAYs=; b=PsFYWi8X5bfAObifG3Ulq1peQ29laitSIgZ/U7HFAiOho1/g5I4FbqBaEe9CqjoKCy C9eTrcTSUMA/fHINGah8KGbrkrRO0ahD73FrOHDwLeawus7JL+foDO0rpaVuXg3pxMU+ eJYRsIGDRUVnhgOGo602JvpWhO8H1AVjNMrOhi3bWkdf8R6Y884PiaIp737Wqy7ftYmR lD2Q3+DhWM58MkOKP6NrmboSyLldxJuDfwEOSNGlM3L1e5PVEb5yxcD5QY1wwWhK1qgv +UTqnuAXX9XZjPu8QxkbzbWKIgMtg6Q1cMmNIVBSaGFxUaPWXo9AKfiKziYB6ZqNyyzH pzmA== X-Gm-Message-State: 
AOAM532hlxcQ4rYzowPkJaWgfIeYEvrHKV9Mmfu4jWiQaSWeA35cRFc+ DtlftBZr7xeZFRj5OQF9n9vIR2BeVQprUA== X-Google-Smtp-Source: ABdhPJx6WLjwzwkSIJDC2kZ6nBkzcxNQZJcUrSVxixUY4nlSMrmb42RaIGHplK/00Q5IzF7tCHhrRIAojt4p6A== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90b:615:: with SMTP id gb21mr4827648pjb.10.1637366298243; Fri, 19 Nov 2021 15:58:18 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:49 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-6-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 05/15] KVM: x86/mmu: Abstract mmu caches out to a separate struct From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the kvm_mmu_memory_cache structs into a separate wrapper struct. This is in preparation for eagerly splitting all large pages during VM-ioctls (i.e. not in the vCPU fault path) which will require adding kvm_mmu_memory_cache structs to struct kvm_arch. Signed-off-by: David Matlack Reviewed-by: Ben Gardon Reviewed-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 12 ++++--- arch/x86/kvm/mmu/mmu.c | 59 ++++++++++++++++++++++----------- arch/x86/kvm/mmu/tdp_mmu.c | 7 ++-- 3 files changed, 52 insertions(+), 26 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 1fcb345bc107..2a7564703ea6 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -612,6 +612,13 @@ struct kvm_vcpu_xen { u64 runstate_times[4]; }; +struct kvm_mmu_memory_caches { + struct kvm_mmu_memory_cache pte_list_desc_cache; + struct kvm_mmu_memory_cache shadow_page_cache; + struct kvm_mmu_memory_cache gfn_array_cache; + struct kvm_mmu_memory_cache page_header_cache; +}; + struct kvm_vcpu_arch { /* * rip and regs accesses must go through @@ -681,10 +688,7 @@ struct kvm_vcpu_arch { */ struct kvm_mmu *walk_mmu; - struct kvm_mmu_memory_cache mmu_pte_list_desc_cache; - struct kvm_mmu_memory_cache mmu_shadow_page_cache; - struct kvm_mmu_memory_cache mmu_gfn_array_cache; - struct kvm_mmu_memory_cache mmu_page_header_cache; + struct kvm_mmu_memory_caches mmu_caches; /* * QEMU userspace and the guest each have their own FPU state. diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1146f87044a6..537952574211 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -732,38 +732,60 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu) static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) { + struct kvm_mmu_memory_caches *mmu_caches; int r; + mmu_caches = &vcpu->arch.mmu_caches; + /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. 
*/ - r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache, + r = kvm_mmu_topup_memory_cache(&mmu_caches->pte_list_desc_cache, 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM); if (r) return r; - r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache, + r = kvm_mmu_topup_memory_cache(&mmu_caches->shadow_page_cache, PT64_ROOT_MAX_LEVEL); if (r) return r; if (maybe_indirect) { - r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache, + r = kvm_mmu_topup_memory_cache(&mmu_caches->gfn_array_cache, PT64_ROOT_MAX_LEVEL); if (r) return r; } - return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, + return kvm_mmu_topup_memory_cache(&mmu_caches->page_header_cache, PT64_ROOT_MAX_LEVEL); } static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) { - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); + struct kvm_mmu_memory_caches *mmu_caches; + + mmu_caches = &vcpu->arch.mmu_caches; + + kvm_mmu_free_memory_cache(&mmu_caches->pte_list_desc_cache); + kvm_mmu_free_memory_cache(&mmu_caches->shadow_page_cache); + kvm_mmu_free_memory_cache(&mmu_caches->gfn_array_cache); + kvm_mmu_free_memory_cache(&mmu_caches->page_header_cache); +} + +static void mmu_init_memory_caches(struct kvm_mmu_memory_caches *caches) +{ + caches->pte_list_desc_cache.kmem_cache = pte_list_desc_cache; + caches->pte_list_desc_cache.gfp_zero = __GFP_ZERO; + + caches->page_header_cache.kmem_cache = mmu_page_header_cache; + caches->page_header_cache.gfp_zero = __GFP_ZERO; + + caches->shadow_page_cache.gfp_zero = __GFP_ZERO; } static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu) { - return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache); + struct kvm_mmu_memory_caches *mmu_caches; + + mmu_caches = &vcpu->arch.mmu_caches; + + return kvm_mmu_memory_cache_alloc(&mmu_caches->pte_list_desc_cache); } static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc) @@ -1071,7 +1093,7 @@ static bool rmap_can_add(struct kvm_vcpu *vcpu) { struct kvm_mmu_memory_cache *mc; - mc = &vcpu->arch.mmu_pte_list_desc_cache; + mc = &vcpu->arch.mmu_caches.pte_list_desc_cache; return kvm_mmu_memory_cache_nr_free_objects(mc); } @@ -1742,12 +1764,15 @@ static void drop_parent_pte(struct kvm_mmu_page *sp, static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct) { + struct kvm_mmu_memory_caches *mmu_caches; struct kvm_mmu_page *sp; - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + mmu_caches = &vcpu->arch.mmu_caches; + + sp = kvm_mmu_memory_cache_alloc(&mmu_caches->page_header_cache); + sp->spt = kvm_mmu_memory_cache_alloc(&mmu_caches->shadow_page_cache); if (!direct) - sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache); + sp->gfns = kvm_mmu_memory_cache_alloc(&mmu_caches->gfn_array_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); /* @@ -5544,13 +5569,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) { int ret; - vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache; - vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO; - - vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache; - vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO; - - vcpu->arch.mmu_shadow_page_cache.gfp_zero = 
__GFP_ZERO; + mmu_init_memory_caches(&vcpu->arch.mmu_caches); vcpu->arch.mmu = &vcpu->arch.root_mmu; vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 9ee3f4f7fdf5..b70707a7fe87 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -175,10 +175,13 @@ static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu, static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, int level) { + struct kvm_mmu_memory_caches *mmu_caches; struct kvm_mmu_page *sp; - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + mmu_caches = &vcpu->arch.mmu_caches; + + sp = kvm_mmu_memory_cache_alloc(&mmu_caches->page_header_cache); + sp->spt = kvm_mmu_memory_cache_alloc(&mmu_caches->shadow_page_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); sp->role.word = page_role_for_level(vcpu, level).word; From patchwork Fri Nov 19 23:57:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629875 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE6C3C433EF for ; Fri, 19 Nov 2021 23:58:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236280AbhKTABu (ORCPT ); Fri, 19 Nov 2021 19:01:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47328 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236232AbhKTABW (ORCPT ); Fri, 19 Nov 2021 19:01:22 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F5D6C061748 for ; Fri, 19 Nov 2021 15:58:20 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id n13-20020a170902d2cd00b0014228ffc40dso5361561plc.4 for ; Fri, 19 Nov 2021 15:58:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Wkh89fBmd980/DQWmtPUYCadBdan6p83cu7YV29qV8k=; b=ldY/d0mXvBJvYMeDK9yCu4ktNBNtCrtdEfhdwO9kKn2UaQOwPlTeNKIPzENhdWDDaV XJMXzeLRc3dO+wT7e5bXMZmayNYt23r/Ogdip4meQ9vPQQNcW/VNp07LGN7VUJJwyR8g ofWeE4b3IIp451A6W10yKqZ4rDxjfJGVSloYlu/prw86q7L/tlDBN5+VR+8owR5VkEWB BpC9QOPuNqIWz79X7ztv0Z28B9aeSN02nr8YoQoQRjRqI/VPw5a5gmaH87wSuisUZn5m hVw2Lrqf9O0UYuIwkdIEVaOnOOsQ9vjt4m3mFfogdpP2iUC3V4LUjYFz+SGdq20QUj5x QeVQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Wkh89fBmd980/DQWmtPUYCadBdan6p83cu7YV29qV8k=; b=nKlFhoxzPl7Xt6m/PkowL+4QyCa+ocQ0ZNa9urYrSfI2QLveXJnyWHd3iYPm9lV4dD RJggaNiBSwvJHLPt7MVkJYnyM720YoHRw8sM1J93Xyw/CBFLUCmIZbAgXyFDqXqloLuv tr9lpG/0jauyJQANjNV6BAe8oCffGLQ3a2imRBfyMZBsDdyTzppMS4LvA8p7OhxIuLEI aq7wY26BHLdI0EHeJ4+BC4ahHvX1AQv2kMiTRYyVNWRRD32TAkOT9RxalKhJcdNLwyOV fGt5aFTpuD9LbYCujnU2cBiAMzx6Sn/0pmC9WE8EaAGll6OX9abUuY5UylIz0cbLmCgD JaHg== X-Gm-Message-State: AOAM531PiTqhmezc87D1mBsPZDIaX7fFt0lkExCPEZg4El6LmTWeDC2J 9yRQKcFtZ+n2RE+n00Ycp7CfWB/Fh9Ic9w== X-Google-Smtp-Source: 
ABdhPJxAiDhSmMRQvNP4bwzJil10537qF0v5lVx1zfQp6exZ6z/iBFJmLyKBgLkVV9W6GLU1kmcYWRkSF8Jxrw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:aa7:8b14:0:b0:4a3:a714:30ed with SMTP id f20-20020aa78b14000000b004a3a71430edmr10453181pfd.2.1637366299898; Fri, 19 Nov 2021 15:58:19 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:50 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-7-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 06/15] KVM: x86/mmu: Derive page role from parent From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Derive the page role from the parent shadow page, since the only thing that changes is the level. This is in preparation for eagerly splitting large pages during VM-ioctls which does not have access to the vCPU MMU context. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 43 ++++++++++++++++++++------------------ 1 file changed, 23 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index b70707a7fe87..1a409992a57f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -157,23 +157,8 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, if (kvm_mmu_page_as_id(_root) != _as_id) { \ } else -static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu, - int level) -{ - union kvm_mmu_page_role role; - - role = vcpu->arch.mmu->mmu_role.base; - role.level = level; - role.direct = true; - role.gpte_is_8_bytes = true; - role.access = ACC_ALL; - role.ad_disabled = !shadow_accessed_mask; - - return role; -} - static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, - int level) + union kvm_mmu_page_role role) { struct kvm_mmu_memory_caches *mmu_caches; struct kvm_mmu_page *sp; @@ -184,7 +169,7 @@ static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, sp->spt = kvm_mmu_memory_cache_alloc(&mmu_caches->shadow_page_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - sp->role.word = page_role_for_level(vcpu, level).word; + sp->role = role; sp->gfn = gfn; sp->tdp_mmu_page = true; @@ -193,6 +178,19 @@ static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, return sp; } +static struct kvm_mmu_page *alloc_child_tdp_mmu_page(struct kvm_vcpu *vcpu, struct tdp_iter *iter) +{ + struct kvm_mmu_page *parent_sp; + union kvm_mmu_page_role role; + + parent_sp = sptep_to_sp(rcu_dereference(iter->sptep)); + + role = parent_sp->role; + role.level--; + + return alloc_tdp_mmu_page(vcpu, iter->gfn, role); +} + hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) { union kvm_mmu_page_role role; @@ -201,7 +199,12 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) lockdep_assert_held_write(&kvm->mmu_lock); - role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level); + role = vcpu->arch.mmu->mmu_role.base; + role.level = vcpu->arch.mmu->shadow_root_level; + role.direct = true; + role.gpte_is_8_bytes = true; + role.access = 
ACC_ALL; + role.ad_disabled = !shadow_accessed_mask; /* Check for an existing root before allocating a new one. */ for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) { @@ -210,7 +213,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) goto out; } - root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level); + root = alloc_tdp_mmu_page(vcpu, 0, role); refcount_set(&root->tdp_mmu_root_count, 1); spin_lock(&kvm->arch.tdp_mmu_pages_lock); @@ -1028,7 +1031,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (is_removed_spte(iter.old_spte)) break; - sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1); + sp = alloc_child_tdp_mmu_page(vcpu, &iter); if (!tdp_mmu_install_sp_atomic(vcpu->kvm, &iter, sp, account_nx)) break; } From patchwork Fri Nov 19 23:57:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629877 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F08FFC433EF for ; Fri, 19 Nov 2021 23:58:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236314AbhKTABw (ORCPT ); Fri, 19 Nov 2021 19:01:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236212AbhKTABY (ORCPT ); Fri, 19 Nov 2021 19:01:24 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 15C07C06174A for ; Fri, 19 Nov 2021 15:58:22 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id x18-20020a170902ec9200b00143c6409dbcso5371500plg.5 for ; Fri, 19 Nov 2021 15:58:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=THyCGcJlJDIteh93mmk5MXegs42Ad6o+RQjSwoy4UKE=; b=HzfOzdGrIvRs5CF2OA0gibAVw6lsOEdz4I9Y3lvNVhRphfkOGsmN2IdLnrQOgDe7Wd gUAcFcFlNnBVY9rG1gXm9D0WUfqtNbo2+vhixc3srxN+Sw81tuSyHPFqSmsB+i8tscLn SbOrfwigCEi+pCyreDY377FZMcYVvZl1RD88P2bv9p8JWHBXs2j43mlJIjVDFfLX6Qi/ 1IxX7BsjaFXchGBtICPLuxxPFB9xDf6KBMSyk+sEyidXAsW+O7IIMw/Er4vbM1Ek6h2r 1UR2R80rT0W1hmP3Urp+8M1HRoNKPg6X6VbqSjSgmMYkfEcW8HSC8riD9Szou81VLxuX xMuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=THyCGcJlJDIteh93mmk5MXegs42Ad6o+RQjSwoy4UKE=; b=pCtUPkkcccQJ6rf6cxabO6iU5aEl3k9yw96ykCEiV5Yqfc+QV06iEjiBNt9R0Uubmw gZGjO51J1ZjLL2Y245QXx+DaFonXzrqsPn+XQprgZBUWCGRQh5D20T9MThjnToHRjM3V feQ/LU6qunnOrkb3CNJBYN8fXl+oRtjoP/ijbSNv2e1+ItXNiEnBoaSKjYL+DrpdfYtE lNoXpzPo3mI/GrUnkESivY0RIVCE8SCc3uBbbTDpGKcy/eaULvRS0LcJFZQMhsYO7dTB fqcnGcd1m47fj9GSCUNxZB0qktROEYMF+8P5YOasc/uMA36VzlVi2+EmAClVqeXAALeg KAcA== X-Gm-Message-State: AOAM5311/SQRv/tZAuJJWoByVK4VtLCHwQZhJjZb0tsRubsx9MWcENtH P+UFB1hA6yLkF19xLk1kEwHMxKRSd2BwOg== X-Google-Smtp-Source: ABdhPJxt7MI51GZajwUnfC3GYlRa9MHQ8weuL4DIzGepaXz2g9XCcaSYQOuxMHK8ExOZQp1keUHWHsbpgXa43w== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:f68e:b0:142:c60:475 with SMTP id 
l14-20020a170902f68e00b001420c600475mr83161881plg.8.1637366301600; Fri, 19 Nov 2021 15:58:21 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:51 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-8-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 07/15] KVM: x86/mmu: Pass in vcpu->arch.mmu_caches instead of vcpu From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Pass in vcpu->arch.mmu_caches to alloc_{,_child}_tdp_mmu_page() instead of the vcpu. This is in preparation for eagerly splitting large pages during VM-ioctls which does not have access to the vCPU mmu_caches. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 1a409992a57f..ff4d83ad7580 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -157,14 +157,11 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, if (kvm_mmu_page_as_id(_root) != _as_id) { \ } else -static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, - union kvm_mmu_page_role role) +static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_mmu_memory_caches *mmu_caches, + gfn_t gfn, union kvm_mmu_page_role role) { - struct kvm_mmu_memory_caches *mmu_caches; struct kvm_mmu_page *sp; - mmu_caches = &vcpu->arch.mmu_caches; - sp = kvm_mmu_memory_cache_alloc(&mmu_caches->page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(&mmu_caches->shadow_page_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -178,7 +175,8 @@ static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn, return sp; } -static struct kvm_mmu_page *alloc_child_tdp_mmu_page(struct kvm_vcpu *vcpu, struct tdp_iter *iter) +static struct kvm_mmu_page *alloc_child_tdp_mmu_page(struct kvm_mmu_memory_caches *mmu_caches, + struct tdp_iter *iter) { struct kvm_mmu_page *parent_sp; union kvm_mmu_page_role role; @@ -188,7 +186,7 @@ static struct kvm_mmu_page *alloc_child_tdp_mmu_page(struct kvm_vcpu *vcpu, stru role = parent_sp->role; role.level--; - return alloc_tdp_mmu_page(vcpu, iter->gfn, role); + return alloc_tdp_mmu_page(mmu_caches, iter->gfn, role); } hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) @@ -213,7 +211,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) goto out; } - root = alloc_tdp_mmu_page(vcpu, 0, role); + root = alloc_tdp_mmu_page(&vcpu->arch.mmu_caches, 0, role); refcount_set(&root->tdp_mmu_root_count, 1); spin_lock(&kvm->arch.tdp_mmu_pages_lock); @@ -1031,7 +1029,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (is_removed_spte(iter.old_spte)) break; - sp = alloc_child_tdp_mmu_page(vcpu, &iter); + sp = alloc_child_tdp_mmu_page(&vcpu->arch.mmu_caches, &iter); if (!tdp_mmu_install_sp_atomic(vcpu->kvm, &iter, sp, account_nx)) break; } From patchwork Fri Nov 19 23:57:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
Subject: [RFC PATCH 08/15] KVM: x86/mmu: Helper method to check for large and present sptes
From: David Matlack
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
 Vitaly Kuznetsov, Sean Christopherson, Janis Schoetterl-Glausch,
 Junaid Shahid, Oliver Upton, Harish Barathvajasankar, Peter Xu,
 Peter Shier, David Matlack
Date: Fri, 19 Nov 2021 23:57:52 +0000
Message-Id: <20211119235759.1304274-9-dmatlack@google.com>
In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com>

Consolidate is_large_pte and is_present_pte into a single helper.
This will be used in a follow-up commit to check for present large-pages during Eager Page Splitting. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/spte.h | 5 +++++ arch/x86/kvm/mmu/tdp_mmu.c | 3 +-- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index cc432f9a966b..e73c41d31816 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -257,6 +257,11 @@ static inline bool is_large_pte(u64 pte) return pte & PT_PAGE_SIZE_MASK; } +static inline bool is_large_present_pte(u64 pte) +{ + return is_shadow_present_pte(pte) && is_large_pte(pte); +} + static inline bool is_last_spte(u64 pte, int level) { return (level == PG_LEVEL_4K) || is_large_pte(pte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index ff4d83ad7580..f8c4337f1fcf 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1011,8 +1011,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) * than the target, that SPTE must be cleared and replaced * with a non-leaf SPTE. */ - if (is_shadow_present_pte(iter.old_spte) && - is_large_pte(iter.old_spte)) { + if (is_large_present_pte(iter.old_spte)) { if (!tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter)) break; } From patchwork Fri Nov 19 23:57:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629881 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18A9BC433EF for ; Fri, 19 Nov 2021 23:58:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236285AbhKTAB6 (ORCPT ); Fri, 19 Nov 2021 19:01:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47352 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236262AbhKTAB2 (ORCPT ); Fri, 19 Nov 2021 19:01:28 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 731FAC061757 for ; Fri, 19 Nov 2021 15:58:25 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id l7-20020a622507000000b00494608c84a4so6468808pfl.6 for ; Fri, 19 Nov 2021 15:58:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=iwzx2oWOtBNJIUC+MupnoX64lt1LDWHXN4CjIPxQ/kI=; b=pQB6J9GR2SnGMs3nv0wKQPq4/1Io7NUDcPtwhakBHFqLv7JqJnP07ad39B8iWMdYih l7ZWSGjm2PzZdJYK9YVNdxrn6v/SW7ZcfY1fW62jKkHay6qf5NnE8F0nl/JFtv/QZAeh TdZHAb94Sp9CbmRqQceNWPK7whydNn9CP1qdN9UEABbF4PlIq5lAjqvnq7B6YAwcPxNN EKNQCi17v85u6GRRw3SUEwiHHT9odecprQUnUX5avaXVrFDQPuJnGS7P+zxJR3eoshtS mb9VhpMuo0s97t1GoqojkprqCtLj6Q1ksebaiTcm5kEdjBeQhohFXzhdx4fe1tTRzTZG OTyw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=iwzx2oWOtBNJIUC+MupnoX64lt1LDWHXN4CjIPxQ/kI=; b=5JlNdujBOrVh2djYZ2Ryk8CjqEUNghSraFAEU7NTWHsWU9LpswQ9qJyeTiBTWyOOVR PdN1NTEpmmgn4nq4NBMZ0lC584opAIkMEk3uhIZwT7dF4CJEsErK5dPqb6gPaRp/DU0T vdNBrR8zORPvfEzxpucMfwUUTKHF7bGue9EHxhZHhFf6FVx3x5xWLzeqgz0WEG8Q8bCN 
hk+EEiPKfjU0sgSo+OZPvZjJZSgD12bPLNJqjbEtgRu8T8zY1xuOqlHCTS4XBhpWyO2J odOeVfoVRpfF8HB0nNZCjp9tExuMcosqTAHRE/JixZAXXFGeNSteX8+anBsVcsUai5f5 pPyQ== X-Gm-Message-State: AOAM530JTRjidN0nS1r4BeUOn9N6yydAAMlDNZnwcvYzxiiX1NgflNti x07SzhhuOlYm8Inxny3U0K5ESIJ/bzr4+Q== X-Google-Smtp-Source: ABdhPJyJqE+C+B5GvHmHAN94QpH6FQygbHRpdL8kRCYSEiIgl32QIJgZFA4dPvOSdZvSwSI79FjpGIjcabgfVg== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:f283:b0:141:f719:c434 with SMTP id k3-20020a170902f28300b00141f719c434mr79956908plc.79.1637366304900; Fri, 19 Nov 2021 15:58:24 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:53 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-10-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 09/15] KVM: x86/mmu: Move restore_acc_track_spte to spte.c From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org restore_acc_track_spte is purely an SPTE manipulation, making it a good fit for spte.c. It is also needed in spte.c in a follow-up commit so we can construct child SPTEs during large page splitting. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 18 ------------------ arch/x86/kvm/mmu/spte.c | 18 ++++++++++++++++++ arch/x86/kvm/mmu/spte.h | 1 + 3 files changed, 19 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 537952574211..54f0d2228135 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -652,24 +652,6 @@ static u64 mmu_spte_get_lockless(u64 *sptep) return __get_spte_lockless(sptep); } -/* Restore an acc-track PTE back to a regular PTE */ -static u64 restore_acc_track_spte(u64 spte) -{ - u64 new_spte = spte; - u64 saved_bits = (spte >> SHADOW_ACC_TRACK_SAVED_BITS_SHIFT) - & SHADOW_ACC_TRACK_SAVED_BITS_MASK; - - WARN_ON_ONCE(spte_ad_enabled(spte)); - WARN_ON_ONCE(!is_access_track_spte(spte)); - - new_spte &= ~shadow_acc_track_mask; - new_spte &= ~(SHADOW_ACC_TRACK_SAVED_BITS_MASK << - SHADOW_ACC_TRACK_SAVED_BITS_SHIFT); - new_spte |= saved_bits; - - return new_spte; -} - /* Returns the Accessed status of the PTE and resets it at the same time. 
*/ static bool mmu_spte_age(u64 *sptep) { diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 0c76c45fdb68..df2cdb8bcf77 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -268,6 +268,24 @@ u64 mark_spte_for_access_track(u64 spte) return spte; } +/* Restore an acc-track PTE back to a regular PTE */ +u64 restore_acc_track_spte(u64 spte) +{ + u64 new_spte = spte; + u64 saved_bits = (spte >> SHADOW_ACC_TRACK_SAVED_BITS_SHIFT) + & SHADOW_ACC_TRACK_SAVED_BITS_MASK; + + WARN_ON_ONCE(spte_ad_enabled(spte)); + WARN_ON_ONCE(!is_access_track_spte(spte)); + + new_spte &= ~shadow_acc_track_mask; + new_spte &= ~(SHADOW_ACC_TRACK_SAVED_BITS_MASK << + SHADOW_ACC_TRACK_SAVED_BITS_SHIFT); + new_spte |= saved_bits; + + return new_spte; +} + void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask) { BUG_ON((u64)(unsigned)access_mask != access_mask); diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index e73c41d31816..3e4943ee5a01 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -342,6 +342,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); +u64 restore_acc_track_spte(u64 spte); u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn); void kvm_mmu_reset_all_pte_masks(void); From patchwork Fri Nov 19 23:57:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629883 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28396C433F5 for ; Fri, 19 Nov 2021 23:59:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236347AbhKTACB (ORCPT ); Fri, 19 Nov 2021 19:02:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236305AbhKTABa (ORCPT ); Fri, 19 Nov 2021 19:01:30 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B0D3C061748 for ; Fri, 19 Nov 2021 15:58:27 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id e9-20020a170902ed8900b00143a3f40299so5346901plj.20 for ; Fri, 19 Nov 2021 15:58:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uqSYlC0I4Uk2XD+zNrdVsAZt4a+qWQj6Ntf9g58LQWo=; b=P9HLJnd2hqrk7R1hBa7mNX+eUKE5kMKsrjwFcRk92RKrM91rqw8SOkSghTTbn3U03d eevfFxQlHqv2hJEO4FImj39bDFZphCuB9U6ErvYdIZg+p3dCfEqVl39iyqS+CaJ85Rhw bK8VgZmWNtAfVF3RbhISGhoRtrVEa0QnB9SWPDsBytwwHya0eLtGVtlKxH+SJZCvTXat h4cSx4h/92ba9pKyI1grNIHB/E/YGvatUaJwO8gcj9qouWYkEshau9Wht23bdsDAk0Qd c6B+P0oGUsPmbOnb3sfe5nJxL6QgjPFMu8cxVcLud46aneF525+ZwOcoT/+y3uxY7QWm DToA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uqSYlC0I4Uk2XD+zNrdVsAZt4a+qWQj6Ntf9g58LQWo=; b=I1cYd6kxaD5UQmXVBwMLBrpnNl9GCTtimiLl220ebAJCQkgw52XHet3+9VFM91D6+J 
Bf0VzW5MVNC68SGYwFogyfvjywDx9oY5dLkL3nU46h5V9VCHv0OuxGSxd3e7KzIy2DiP u7RhQtCHWz482GoYJOi7riV9dTlLhNFbCWY9rEH9v3jIqq8m45sJRyWBYU8baTlV4yuG ZKcb84cZT0WxPP1n7ypjizhlFLN2cs4VF/hwvLMNShWI9UHYhRuULLCPNuO7Q6bUMIgy HgrTzXw0VxVDVe2+eNamBI49O9Nu1VYP0p9lTEyt7rlWv6IrogsL/XgZMGGNSCCZoNj+ gcbw== X-Gm-Message-State: AOAM533JV5Q1nNcxcUlfH3LDhDBKdgGDYc24s83JfLXyG2mYuMP598Bz uqP9Oz0H5jJC5SrqaovpNiqYVZQbNRHxCg== X-Google-Smtp-Source: ABdhPJxcg2JjViS0WdcZ+sawcs+yC2AKTIBpMSdDQfvAmUoPeRvcuAZMaCu87WehH8mc83diCZLWNMf0F/s9oQ== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:b70b:b0:143:74b1:7e3b with SMTP id d11-20020a170902b70b00b0014374b17e3bmr82697946pls.26.1637366306591; Fri, 19 Nov 2021 15:58:26 -0800 (PST) Date: Fri, 19 Nov 2021 23:57:54 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-11-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 10/15] KVM: x86/mmu: Abstract need_resched logic from tdp_mmu_iter_cond_resched From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Abstract out the logic that checks whether or not we should reschedule (including the extra check that ensures we make forward progress) to a helper method. This will be used in a follow-up commit to reschedule during large page splitting. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f8c4337f1fcf..2221e074d8ea 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -645,6 +645,15 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, for_each_tdp_pte(_iter, __va(_mmu->root_hpa), \ _mmu->shadow_root_level, _start, _end) +static inline bool tdp_mmu_iter_need_resched(struct kvm *kvm, struct tdp_iter *iter) +{ + /* Ensure forward progress has been made before yielding. */ + if (iter->next_last_level_gfn == iter->yielded_gfn) + return false; + + return need_resched() || rwlock_needbreak(&kvm->mmu_lock); +} + /* * Yield if the MMU lock is contended or this thread needs to return control * to the scheduler. @@ -664,11 +673,7 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter, bool flush, bool shared) { - /* Ensure forward progress has been made before yielding. 
*/ - if (iter->next_last_level_gfn == iter->yielded_gfn) - return false; - - if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) { + if (tdp_mmu_iter_need_resched(kvm, iter)) { rcu_read_unlock(); if (flush)
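To show the kind of reuse the new tdp_mmu_iter_need_resched() helper enables, here is a minimal sketch (not part of the series; the example_* names are hypothetical) of a walker-side check that layers an extra resource condition on top of the generic forward-progress and contention test:

/*
 * Minimal sketch, not from this series: a walker that must also yield when
 * its own resources run dry can wrap tdp_mmu_iter_need_resched(). The
 * example_* names are hypothetical; the large page split path added later
 * in this series follows the same pattern with its memory-cache check.
 */
static bool example_resources_exhausted(struct kvm *kvm); /* hypothetical stub */

static bool example_need_break(struct kvm *kvm, struct tdp_iter *iter)
{
	if (example_resources_exhausted(kvm))
		return true;

	return tdp_mmu_iter_need_resched(kvm, iter);
}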
From patchwork Fri Nov 19 23:57:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629885 Date: Fri, 19 Nov 2021 23:57:55 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-12-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 11/15] KVM: x86/mmu: Refactor tdp_mmu iterators to take kvm_mmu_page root From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Instead of passing a pointer to the root page table and the root level separately, pass in a pointer to the kvm_mmu_page that backs the root. This reduces the number of arguments by 1, cutting down on line lengths. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_iter.c | 5 ++++- arch/x86/kvm/mmu/tdp_iter.h | 10 +++++----- arch/x86/kvm/mmu/tdp_mmu.c | 14 +++++--------- 3 files changed, 14 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index b3ed302c1a35..92b3a075525a 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -39,9 +39,12 @@ void tdp_iter_restart(struct tdp_iter *iter) * Sets a TDP iterator to walk a pre-order traversal of the paging structure * rooted at root_pt, starting with the walk to translate next_last_level_gfn. */ -void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level, +void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, int min_level, gfn_t next_last_level_gfn) { + u64 *root_pt = root->spt; + int root_level = root->role.level; + WARN_ON(root_level < 1); WARN_ON(root_level > PT64_ROOT_MAX_LEVEL); diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index b1748b988d3a..ec1f58013428 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -51,17 +51,17 @@ struct tdp_iter { * Iterates over every SPTE mapping the GFN range [start, end) in a * preorder traversal. */ -#define for_each_tdp_pte_min_level(iter, root, root_level, min_level, start, end) \ - for (tdp_iter_start(&iter, root, root_level, min_level, start); \ +#define for_each_tdp_pte_min_level(iter, root, min_level, start, end) \ + for (tdp_iter_start(&iter, root, min_level, start); \ iter.valid && iter.gfn < end; \ tdp_iter_next(&iter)) -#define for_each_tdp_pte(iter, root, root_level, start, end) \ - for_each_tdp_pte_min_level(iter, root, root_level, PG_LEVEL_4K, start, end) +#define for_each_tdp_pte(iter, root, start, end) \ + for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) tdp_ptep_t spte_to_child_pt(u64 pte, int level); -void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level, +void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, int min_level, gfn_t next_last_level_gfn); void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_restart(struct tdp_iter *iter); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 2221e074d8ea..5ca0fa659245 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -632,7 +632,7 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, } #define tdp_root_for_each_pte(_iter, _root, _start, _end) \ - for_each_tdp_pte(_iter, _root->spt, _root->role.level, _start, _end) + for_each_tdp_pte(_iter, _root, _start, _end) #define tdp_root_for_each_leaf_pte(_iter, _root, _start, _end) \ tdp_root_for_each_pte(_iter, _root, _start, _end) \ @@ -642,8 +642,7 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, else #define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end) \ - for_each_tdp_pte(_iter, __va(_mmu->root_hpa), \ - _mmu->shadow_root_level, _start, _end) + for_each_tdp_pte(_iter, to_shadow_page(_mmu->root_hpa), _start, _end) static inline bool
tdp_mmu_iter_need_resched(struct kvm *kvm, struct tdp_iter *iter) { @@ -738,8 +737,7 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); - for_each_tdp_pte_min_level(iter, root->spt, root->role.level, - min_level, start, end) { + for_each_tdp_pte_min_level(iter, root, min_level, start, end) { retry: if (can_yield && tdp_mmu_iter_cond_resched(kvm, &iter, flush, shared)) { @@ -1201,8 +1199,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL); - for_each_tdp_pte_min_level(iter, root->spt, root->role.level, - min_level, start, end) { + for_each_tdp_pte_min_level(iter, root, min_level, start, end) { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; @@ -1450,8 +1447,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); - for_each_tdp_pte_min_level(iter, root->spt, root->role.level, - min_level, gfn, gfn + 1) { + for_each_tdp_pte_min_level(iter, root, min_level, gfn, gfn + 1) { if (!is_shadow_present_pte(iter.old_spte) || !is_last_spte(iter.old_spte, iter.level)) continue;
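As a usage illustration of the refactored iterator (a sketch only, not part of the series; example_count_large_mappings() is a made-up name), a walker now passes just the root kvm_mmu_page and a minimum level:

/*
 * Minimal sketch, not from this series: count large, present mappings in
 * [start, end) under a TDP MMU root, assuming the caller holds the MMU
 * lock and RCU the same way the walkers above do. Uses the
 * is_large_present_pte() helper introduced earlier in the series.
 */
static int example_count_large_mappings(struct kvm_mmu_page *root,
					gfn_t start, gfn_t end)
{
	struct tdp_iter iter;
	int count = 0;

	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {
		if (is_large_present_pte(iter.old_spte))
			count++;
	}

	return count;
}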
From patchwork Fri Nov 19 23:57:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629893 Date: Fri, 19 Nov 2021 23:57:56 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-13-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 12/15] KVM: x86/mmu: Split large pages when dirty logging is enabled From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When dirty logging is enabled without initially-all-set, attempt to split all large pages in the memslot down to 4KB pages so that vCPUs do not have to take expensive write-protection faults to split large pages. Large page splitting is best-effort only. This commit only adds support for the TDP MMU, and even there splitting may fail due to out-of-memory conditions. Failure to split a large page is fine from a correctness standpoint because we still always follow it up by write-protecting any remaining large pages. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 6 ++ arch/x86/kvm/mmu/mmu.c | 83 +++++++++++++++++++++ arch/x86/kvm/mmu/mmu_internal.h | 3 + arch/x86/kvm/mmu/spte.c | 46 ++++++++++++ arch/x86/kvm/mmu/spte.h | 1 + arch/x86/kvm/mmu/tdp_mmu.c | 123 ++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 5 ++ arch/x86/kvm/x86.c | 6 ++ 8 files changed, 273 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2a7564703ea6..432a4df817ec 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1232,6 +1232,9 @@ struct kvm_arch { hpa_t hv_root_tdp; spinlock_t hv_root_tdp_lock; #endif + + /* MMU caches used when splitting large pages during VM-ioctls. */ + struct kvm_mmu_memory_caches split_caches; }; struct kvm_vm_stat { @@ -1588,6 +1591,9 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu); void kvm_mmu_slot_remove_write_access(struct kvm *kvm, const struct kvm_memory_slot *memslot, int start_level); +void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *memslot, + int target_level); void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, const struct kvm_memory_slot *memslot); void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 54f0d2228135..6768ef9c0891 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -738,6 +738,66 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) PT64_ROOT_MAX_LEVEL); } +static inline void assert_split_caches_invariants(struct kvm *kvm) +{ + /* + * The split caches must only be modified while holding the slots_lock, + * since it is only used during memslot VM-ioctls.
+ */ + lockdep_assert_held(&kvm->slots_lock); + + /* + * Only the TDP MMU supports large page splitting using + * kvm->arch.split_caches, which is why we only have to allocate + * page_header_cache and shadow_page_cache. Assert that the TDP + * MMU is at least enabled when the split cache is allocated. + */ + BUG_ON(!is_tdp_mmu_enabled(kvm)); +} + +int mmu_topup_split_caches(struct kvm *kvm) +{ + struct kvm_mmu_memory_caches *split_caches = &kvm->arch.split_caches; + int r; + + assert_split_caches_invariants(kvm); + + r = kvm_mmu_topup_memory_cache(&split_caches->page_header_cache, 1); + if (r) + goto out; + + r = kvm_mmu_topup_memory_cache(&split_caches->shadow_page_cache, 1); + if (r) + goto out; + + return 0; + +out: + pr_warn("Failed to top-up split caches. Will not split large pages.\n"); + return r; +} + +static void mmu_free_split_caches(struct kvm *kvm) +{ + assert_split_caches_invariants(kvm); + + kvm_mmu_free_memory_cache(&kvm->arch.split_caches.pte_list_desc_cache); + kvm_mmu_free_memory_cache(&kvm->arch.split_caches.shadow_page_cache); +} + +bool mmu_split_caches_need_topup(struct kvm *kvm) +{ + assert_split_caches_invariants(kvm); + + if (kvm->arch.split_caches.page_header_cache.nobjs == 0) + return true; + + if (kvm->arch.split_caches.shadow_page_cache.nobjs == 0) + return true; + + return false; +} + static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) { struct kvm_mmu_memory_caches *mmu_caches; @@ -5696,6 +5756,7 @@ void kvm_mmu_init_vm(struct kvm *kvm) spin_lock_init(&kvm->arch.mmu_unsync_pages_lock); + mmu_init_memory_caches(&kvm->arch.split_caches); kvm_mmu_init_tdp_mmu(kvm); node->track_write = kvm_mmu_pte_write; @@ -5819,6 +5880,28 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); } +void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *memslot, + int target_level) +{ + u64 start, end; + + if (!is_tdp_mmu_enabled(kvm)) + return; + + if (mmu_topup_split_caches(kvm)) + return; + + start = memslot->base_gfn; + end = start + memslot->npages; + + read_lock(&kvm->mmu_lock); + kvm_tdp_mmu_try_split_large_pages(kvm, memslot, start, end, target_level); + read_unlock(&kvm->mmu_lock); + + mmu_free_split_caches(kvm); +} + static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head, const struct kvm_memory_slot *slot) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 52c6527b1a06..89b9b907c567 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -161,4 +161,7 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +int mmu_topup_split_caches(struct kvm *kvm); +bool mmu_split_caches_need_topup(struct kvm *kvm); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index df2cdb8bcf77..6bb9b597a854 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -191,6 +191,52 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, return wrprot; } +static u64 mark_spte_executable(u64 spte) +{ + bool is_access_track = is_access_track_spte(spte); + + if (is_access_track) + spte = restore_acc_track_spte(spte); + + spte &= ~shadow_nx_mask; + spte |= shadow_x_mask; + + if (is_access_track) + spte = mark_spte_for_access_track(spte); + + return spte; +} + +/* + * Construct an SPTE that 
maps a sub-page of the given large SPTE. This is + * used during large page splitting, to build the SPTEs that make up the new + * page table. + */ +u64 make_large_page_split_spte(u64 large_spte, int level, int index, unsigned int access) +{ + u64 child_spte; + int child_level; + + BUG_ON(is_mmio_spte(large_spte)); + BUG_ON(!is_large_present_pte(large_spte)); + + child_spte = large_spte; + child_level = level - 1; + + child_spte += (index * KVM_PAGES_PER_HPAGE(child_level)) << PAGE_SHIFT; + + if (child_level == PG_LEVEL_4K) { + child_spte &= ~PT_PAGE_SIZE_MASK; + + /* Allow execution for 4K pages if it was disabled for NX HugePages. */ + if (is_nx_huge_page_enabled() && access & ACC_EXEC_MASK) + child_spte = mark_spte_executable(child_spte); + } + + return child_spte; +} + + u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) { u64 spte = SPTE_MMU_PRESENT_MASK; diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 3e4943ee5a01..4efb4837e38d 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -339,6 +339,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, u64 *new_spte); +u64 make_large_page_split_spte(u64 large_spte, int level, int index, unsigned int access); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 5ca0fa659245..366857b9fb3b 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -695,6 +695,39 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, return false; } +static inline bool +tdp_mmu_need_split_caches_topup_or_resched(struct kvm *kvm, struct tdp_iter *iter) +{ + if (mmu_split_caches_need_topup(kvm)) + return true; + + return tdp_mmu_iter_need_resched(kvm, iter); +} + +static inline int +tdp_mmu_topup_split_caches_resched(struct kvm *kvm, struct tdp_iter *iter, bool flush) +{ + int r; + + rcu_read_unlock(); + + if (flush) + kvm_flush_remote_tlbs(kvm); + + read_unlock(&kvm->mmu_lock); + + cond_resched(); + r = mmu_topup_split_caches(kvm); + + read_lock(&kvm->mmu_lock); + + rcu_read_lock(); + WARN_ON(iter->gfn > iter->next_last_level_gfn); + tdp_iter_restart(iter); + + return r; +} + /* * Tears down the mappings for the range of gfns, [start, end), and frees the * non-root pages mapping GFNs strictly within that range. Returns true if @@ -1241,6 +1274,96 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, return spte_set; } +static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *iter) +{ + const u64 large_spte = iter->old_spte; + const int level = iter->level; + struct kvm_mmu_page *child_sp; + u64 child_spte; + int i; + + BUG_ON(mmu_split_caches_need_topup(kvm)); + + child_sp = alloc_child_tdp_mmu_page(&kvm->arch.split_caches, iter); + + for (i = 0; i < PT64_ENT_PER_PAGE; i++) { + child_spte = make_large_page_split_spte(large_spte, level, i, ACC_ALL); + + /* + * No need for atomics since child_sp has not been installed + * in the table yet and thus is not reachable by any other + * thread. 
+ */ + child_sp->spt[i] = child_spte; + } + + return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false); +} + +static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t start, gfn_t end, int target_level) +{ + struct tdp_iter iter; + bool flush = false; + int r; + + rcu_read_lock(); + + /* + * Traverse the page table splitting all large pages above the target + * level into one lower level. For example, if we encounter a 1GB page + * we split it into 512 2MB pages. + * + * Since the TDP iterator uses a pre-order traversal, we are guaranteed + * to visit an SPTE before ever visiting its children, which means we + * will correctly recursively split large pages that are more than one + * level above the target level (e.g. splitting 1GB to 2MB to 4KB). + */ + for_each_tdp_pte_min_level(iter, root, target_level + 1, start, end) { +retry: + if (tdp_mmu_need_split_caches_topup_or_resched(kvm, &iter)) { + r = tdp_mmu_topup_split_caches_resched(kvm, &iter, flush); + flush = false; + + /* + * If topping up the split caches failed, we can't split + * any more pages. Bail out of the loop. + */ + if (r) + break; + + continue; + } + + if (!is_large_present_pte(iter.old_spte)) + continue; + + if (!tdp_mmu_split_large_page_atomic(kvm, &iter)) + goto retry; + + flush = true; + } + + rcu_read_unlock(); + + if (flush) + kvm_flush_remote_tlbs(kvm); +} + +void kvm_tdp_mmu_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *slot, + gfn_t start, gfn_t end, + int target_level) +{ + struct kvm_mmu_page *root; + + lockdep_assert_held_read(&kvm->mmu_lock); + + for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true) + tdp_mmu_split_large_pages_root(kvm, root, start, end, target_level); + +} + /* * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If * AD bits are enabled, this will involve clearing the dirty bit on each SPTE. diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 476b133544dd..7812087836b2 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -72,6 +72,11 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, int min_level); +void kvm_tdp_mmu_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *slot, + gfn_t start, gfn_t end, + int target_level); + static inline void kvm_tdp_mmu_walk_lockless_begin(void) { rcu_read_lock(); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 04e8dabc187d..4702ebfd394b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11735,6 +11735,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, if (kvm_dirty_log_manual_protect_and_init_set(kvm)) return; + /* + * Attempt to split all large pages into 4K pages so that vCPUs + * do not have to take write-protection faults. 
+ */ + kvm_mmu_slot_try_split_large_pages(kvm, new, PG_LEVEL_4K); + if (kvm_x86_ops.cpu_dirty_log_size) { kvm_mmu_slot_leaf_clear_dirty(kvm, new); kvm_mmu_slot_remove_write_access(kvm, new, PG_LEVEL_2M);
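To make the child-SPTE address arithmetic in make_large_page_split_spte() above concrete, here is a small standalone sketch (illustrative only; it hard-codes the 512-entries-per-level and 4KB page-size constants instead of using the kernel's KVM_PAGES_PER_HPAGE() and PAGE_SHIFT macros):

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the offset each child SPTE adds to the parent's address:
 * (index * pages-per-huge-page-at-child-level) << PAGE_SHIFT.
 */
static uint64_t example_child_gpa_offset(int parent_level, int index)
{
	int child_level = parent_level - 1;
	/* x86-64 page tables: 512 entries (9 bits) per level, 4KB pages. */
	uint64_t pages_per_hpage = 1ULL << (9 * (child_level - 1));

	return ((uint64_t)index * pages_per_hpage) << 12;
}

int main(void)
{
	/* Child 1 of a 1GB page (level 3) starts 2MB into the parent's range. */
	printf("0x%llx\n", (unsigned long long)example_child_gpa_offset(3, 1));
	return 0;
}

Because the splitter uses a pre-order walk, the same computation applies recursively: a 1GB mapping first becomes 512 2MB children, each of which can later be split into 512 4KB children.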
From patchwork Fri Nov 19 23:57:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629887 Date: Fri, 19 Nov 2021 23:57:57 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-14-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 13/15] KVM: x86/mmu: Split large pages during CLEAR_DIRTY_LOG From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When using initially-all-set, large pages are not write-protected when dirty logging is enabled on the memslot. Instead they are write-protected once userspace invokes CLEAR_DIRTY_LOG for the first time, and only for the specific sub-region of the memslot that userspace wishes to clear. Enhance CLEAR_DIRTY_LOG to also try to split large pages prior to write-protecting to avoid causing write-protection faults on vCPU threads. This also allows userspace to smear the cost of large page splitting across multiple ioctls rather than splitting the entire memslot when not using initially-all-set. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 4 ++++ arch/x86/kvm/mmu/mmu.c | 30 ++++++++++++++++++++++-------- 2 files changed, 26 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 432a4df817ec..6b5bf99f57af 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1591,6 +1591,10 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu); void kvm_mmu_slot_remove_write_access(struct kvm *kvm, const struct kvm_memory_slot *memslot, int start_level); +void kvm_mmu_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *memslot, + u64 start, u64 end, + int target_level); void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, const struct kvm_memory_slot *memslot, int target_level); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6768ef9c0891..4e78ef2dd352 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1448,6 +1448,12 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, gfn_t start = slot->base_gfn + gfn_offset + __ffs(mask); gfn_t end = slot->base_gfn + gfn_offset + __fls(mask); + /* + * Try to proactively split any large pages down to 4KB so that + * vCPUs don't have to take write-protection faults. + */ + kvm_mmu_try_split_large_pages(kvm, slot, start, end, PG_LEVEL_4K); + kvm_mmu_slot_gfn_write_protect(kvm, slot, start, PG_LEVEL_2M); /* Cross two large pages?
*/ @@ -5880,21 +5886,17 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); } -void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, - const struct kvm_memory_slot *memslot, - int target_level) +void kvm_mmu_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *memslot, + u64 start, u64 end, + int target_level) { - u64 start, end; - if (!is_tdp_mmu_enabled(kvm)) return; if (mmu_topup_split_caches(kvm)) return; - start = memslot->base_gfn; - end = start + memslot->npages; - read_lock(&kvm->mmu_lock); kvm_tdp_mmu_try_split_large_pages(kvm, memslot, start, end, target_level); read_unlock(&kvm->mmu_lock); @@ -5902,6 +5904,18 @@ void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, mmu_free_split_caches(kvm); } +void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm, + const struct kvm_memory_slot *memslot, + int target_level) +{ + u64 start, end; + + start = memslot->base_gfn; + end = start + memslot->npages; + + kvm_mmu_try_split_large_pages(kvm, memslot, start, end, target_level); +} + static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head, const struct kvm_memory_slot *slot)
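The gfn range that this patch feeds into the splitter is derived from the CLEAR_DIRTY_LOG mask exactly as in kvm_arch_mmu_enable_log_dirty_pt_masked() above. The following standalone sketch (illustrative only; it uses GCC builtins in place of the kernel's __ffs()/__fls()) shows that computation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base_gfn = 0x100000;	/* example memslot base GFN */
	uint64_t gfn_offset = 64;	/* offset of this mask within the slot */
	uint64_t mask = 0x00f0;		/* userspace clears bits 4..7 */

	/* Lowest and highest set bits bound the GFNs considered for splitting. */
	uint64_t start = base_gfn + gfn_offset + __builtin_ctzll(mask);
	uint64_t end = base_gfn + gfn_offset + (63 - __builtin_clzll(mask));

	printf("split range: 0x%llx .. 0x%llx\n",
	       (unsigned long long)start, (unsigned long long)end);
	return 0;
}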
From patchwork Fri Nov 19 23:57:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629891 Date: Fri, 19 Nov 2021 23:57:58 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-15-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 14/15] KVM: x86/mmu: Add tracepoint for splitting large pages From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a tracepoint that records whenever we split a large page. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmutrace.h | 20 ++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.c | 2 ++ 2 files changed, 22 insertions(+) diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index b8151bbca36a..4adb794470ae 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -416,6 +416,26 @@ TRACE_EVENT( ) ); +TRACE_EVENT( + kvm_mmu_split_large_page, + TP_PROTO(u64 gfn, u64 spte, int level), + TP_ARGS(gfn, spte, level), + + TP_STRUCT__entry( + __field(u64, gfn) + __field(u64, spte) + __field(int, level) + ), + + TP_fast_assign( + __entry->gfn = gfn; + __entry->spte = spte; + __entry->level = level; + ), + + TP_printk("gfn %llx spte %llx level %d", __entry->gfn, __entry->spte, __entry->level) +); + #endif /* _TRACE_KVMMMU_H */ #undef TRACE_INCLUDE_PATH diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 366857b9fb3b..8f60d942c789 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1284,6 +1284,8 @@ static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *it BUG_ON(mmu_split_caches_need_topup(kvm)); + trace_kvm_mmu_split_large_page(iter->gfn, large_spte, level); + child_sp = alloc_child_tdp_mmu_page(&kvm->arch.split_caches, iter); for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
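For reference, the TRACE_EVENT() definition above generates a call-site helper with the TP_PROTO signature; a rough sketch of that interface (simplified, the real code generated by <linux/tracepoint.h> also adds static keys and registration plumbing) looks like:

/*
 * Sketch only: what callers see after the TRACE_EVENT() expansion.
 * tdp_mmu_split_large_page_atomic() invokes it as in the hunk above.
 */
static inline void trace_kvm_mmu_split_large_page(u64 gfn, u64 spte, int level);

/* Example call site, mirroring the tdp_mmu.c hunk: */
/* trace_kvm_mmu_split_large_page(iter->gfn, large_spte, level); */

When the event is enabled (typically through the kvmmmu trace event group), each split logs the GFN, the original large SPTE, and its level, which is useful for confirming that eager splitting is taking effect during dirty logging.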
From patchwork Fri Nov 19 23:57:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12629889 Date: Fri, 19 Nov 2021 23:57:59 +0000 In-Reply-To: <20211119235759.1304274-1-dmatlack@google.com> Message-Id: <20211119235759.1304274-16-dmatlack@google.com> Mime-Version: 1.0 References: <20211119235759.1304274-1-dmatlack@google.com> X-Mailer: git-send-email 2.34.0.rc2.393.gf8c9666880-goog Subject: [RFC PATCH 15/15] KVM: x86/mmu: Update page stats when splitting large pages From: David Matlack To: Paolo Bonzini Cc: kvm@vger.kernel.org, Ben Gardon , Joerg Roedel , Jim Mattson , Wanpeng Li , Vitaly Kuznetsov , Sean Christopherson , Janis Schoetterl-Glausch , Junaid Shahid , Oliver Upton , Harish Barathvajasankar , Peter Xu , Peter Shier , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When splitting large pages we need to update the page stats to reflect all of the new pages at the lower level. We do not need to change the page stats for the large page that was removed as that is already handled by tdp_mmu_set_spte_atomic. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 8f60d942c789..4c313613a939 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1299,7 +1299,12 @@ static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *it child_sp->spt[i] = child_spte; } - return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false); + if (!tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false)) + return false; + + kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE); + + return true; } static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root,