From patchwork Fri Mar 11 00:25:08 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12777175
Date: Fri, 11 Mar 2022 00:25:08 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-7-dmatlack@google.com>
Mime-Version: 1.0
References: <20220311002528.2230172-1-dmatlack@google.com>
X-Mailer: git-send-email 2.35.1.723.g4982287a31-goog
Subject: [PATCH v2 06/26] KVM: x86/mmu: Pass memslot to kvm_mmu_new_shadow_page()
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
    Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
    "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
    "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Passing the memslot to kvm_mmu_new_shadow_page() avoids the need for the vCPU pointer when write-protecting indirect 4k shadow pages. This moves us closer to being able to create new shadow pages during VM ioctls for eager page splitting, where there is not vCPU pointer. This change does not negatively impact "Populate memory time" for ept=Y or ept=N configurations since kvm_vcpu_gfn_to_memslot() caches the last use slot. So even though we now look up the slot more often, it is a very cheap check. Opportunistically move the code to write-protect GFNs shadowed by PG_LEVEL_4K shadow pages into account_shadowed() to reduce indentation and consolidate the code. This also eliminates a memslot lookup. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Peter Xu --- arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b6fb50e32291..519910938478 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -793,16 +793,14 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn) update_gfn_disallow_lpage_count(slot, gfn, -1); } -static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp) +static void account_shadowed(struct kvm *kvm, + struct kvm_memory_slot *slot, + struct kvm_mmu_page *sp) { - struct kvm_memslots *slots; - struct kvm_memory_slot *slot; gfn_t gfn; kvm->arch.indirect_shadow_pages++; gfn = sp->gfn; - slots = kvm_memslots_for_spte_role(kvm, sp->role); - slot = __gfn_to_memslot(slots, gfn); /* the non-leaf shadow pages are keeping readonly. */ if (sp->role.level > PG_LEVEL_4K) @@ -810,6 +808,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp) KVM_PAGE_TRACK_WRITE); kvm_mmu_gfn_disallow_lpage(slot, gfn); + + if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K)) + kvm_flush_remote_tlbs_with_address(kvm, gfn, 1); } void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp) @@ -2127,6 +2128,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu, } static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu, + struct kvm_memory_slot *slot, gfn_t gfn, union kvm_mmu_page_role role) { @@ -2142,11 +2144,8 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu, sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]; hlist_add_head(&sp->hash_link, sp_list); - if (!role.direct) { - account_shadowed(vcpu->kvm, sp); - if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn)) - kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1); - } + if (!role.direct) + account_shadowed(vcpu->kvm, slot, sp); return sp; } @@ -2155,6 +2154,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, gfn_t gfn, union kvm_mmu_page_role role) { + struct kvm_memory_slot *slot; struct kvm_mmu_page *sp; bool created = false; @@ -2163,7 +2163,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, goto out; created = true; - sp = kvm_mmu_new_shadow_page(vcpu, gfn, role); + slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); + sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role); out: trace_kvm_mmu_get_page(sp, created);