From patchwork Tue Dec 6 17:35:55 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:35:55 +0000
Subject: [PATCH 1/7] KVM: x86/MMU: Move pte_list operations to rmap.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-2-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

In the interest of eventually splitting the Shadow MMU out of mmu.c, start by moving some of the
operations for manipulating pte_lists out of mmu.c and into a new pair of files: rmap.c and rmap.h. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/Makefile | 2 +- arch/x86/kvm/debugfs.c | 1 + arch/x86/kvm/mmu/mmu.c | 152 +------------------------------- arch/x86/kvm/mmu/mmu_internal.h | 1 - arch/x86/kvm/mmu/rmap.c | 141 +++++++++++++++++++++++++++++ arch/x86/kvm/mmu/rmap.h | 34 +++++++ 6 files changed, 179 insertions(+), 152 deletions(-) create mode 100644 arch/x86/kvm/mmu/rmap.c create mode 100644 arch/x86/kvm/mmu/rmap.h diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index 80e3fe184d17..9f766eebeddf 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -12,7 +12,7 @@ include $(srctree)/virt/kvm/Makefile.kvm kvm-y += x86.o emulate.o i8259.o irq.o lapic.o \ i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \ hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o \ - mmu/spte.o + mmu/spte.o mmu/rmap.o ifdef CONFIG_HYPERV kvm-y += kvm_onhyperv.o diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c index c1390357126a..29f692ecd6f3 100644 --- a/arch/x86/kvm/debugfs.c +++ b/arch/x86/kvm/debugfs.c @@ -9,6 +9,7 @@ #include "lapic.h" #include "mmu.h" #include "mmu/mmu_internal.h" +#include "mmu/rmap.h" static int vcpu_get_timer_advance_ns(void *data, u64 *val) { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4736d7849c60..90b3735d6064 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -26,6 +26,7 @@ #include "kvm_emulate.h" #include "cpuid.h" #include "spte.h" +#include "rmap.h" #include #include @@ -112,24 +113,6 @@ module_param(dbg, bool, 0644); #include -/* make pte_list_desc fit well in cache lines */ -#define PTE_LIST_EXT 14 - -/* - * Slight optimization of cacheline layout, by putting `more' and `spte_count' - * at the start; then accessing it will only use one single cacheline for - * either full (entries==PTE_LIST_EXT) case or entries<=6. - */ -struct pte_list_desc { - struct pte_list_desc *more; - /* - * Stores number of entries stored in the pte_list_desc. No need to be - * u64 but just for easier alignment. When PTE_LIST_EXT, means full. - */ - u64 spte_count; - u64 *sptes[PTE_LIST_EXT]; -}; - struct kvm_shadow_walk_iterator { u64 addr; hpa_t shadow_addr; @@ -155,7 +138,6 @@ struct kvm_shadow_walk_iterator { ({ spte = mmu_spte_get_lockless(_walker.sptep); 1; }); \ __shadow_walk_next(&(_walker), spte)) -static struct kmem_cache *pte_list_desc_cache; struct kmem_cache *mmu_page_header_cache; static struct percpu_counter kvm_total_used_mmu_pages; @@ -674,11 +656,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); } -static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc) -{ - kmem_cache_free(pte_list_desc_cache, pte_list_desc); -} - static bool sp_has_gptes(struct kvm_mmu_page *sp); static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) @@ -878,111 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn, return slot; } -/* - * About rmap_head encoding: - * - * If the bit zero of rmap_head->val is clear, then it points to the only spte - * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct - * pte_list_desc containing more mappings. - */ - -/* - * Returns the number of pointers in the rmap chain, not counting the new one. 
- */ -static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, - struct kvm_rmap_head *rmap_head) -{ - struct pte_list_desc *desc; - int count = 0; - - if (!rmap_head->val) { - rmap_printk("%p %llx 0->1\n", spte, *spte); - rmap_head->val = (unsigned long)spte; - } else if (!(rmap_head->val & 1)) { - rmap_printk("%p %llx 1->many\n", spte, *spte); - desc = kvm_mmu_memory_cache_alloc(cache); - desc->sptes[0] = (u64 *)rmap_head->val; - desc->sptes[1] = spte; - desc->spte_count = 2; - rmap_head->val = (unsigned long)desc | 1; - ++count; - } else { - rmap_printk("%p %llx many->many\n", spte, *spte); - desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); - while (desc->spte_count == PTE_LIST_EXT) { - count += PTE_LIST_EXT; - if (!desc->more) { - desc->more = kvm_mmu_memory_cache_alloc(cache); - desc = desc->more; - desc->spte_count = 0; - break; - } - desc = desc->more; - } - count += desc->spte_count; - desc->sptes[desc->spte_count++] = spte; - } - return count; -} - -static void -pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head, - struct pte_list_desc *desc, int i, - struct pte_list_desc *prev_desc) -{ - int j = desc->spte_count - 1; - - desc->sptes[i] = desc->sptes[j]; - desc->sptes[j] = NULL; - desc->spte_count--; - if (desc->spte_count) - return; - if (!prev_desc && !desc->more) - rmap_head->val = 0; - else - if (prev_desc) - prev_desc->more = desc->more; - else - rmap_head->val = (unsigned long)desc->more | 1; - mmu_free_pte_list_desc(desc); -} - -static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head) -{ - struct pte_list_desc *desc; - struct pte_list_desc *prev_desc; - int i; - - if (!rmap_head->val) { - pr_err("%s: %p 0->BUG\n", __func__, spte); - BUG(); - } else if (!(rmap_head->val & 1)) { - rmap_printk("%p 1->0\n", spte); - if ((u64 *)rmap_head->val != spte) { - pr_err("%s: %p 1->BUG\n", __func__, spte); - BUG(); - } - rmap_head->val = 0; - } else { - rmap_printk("%p many->many\n", spte); - desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); - prev_desc = NULL; - while (desc) { - for (i = 0; i < desc->spte_count; ++i) { - if (desc->sptes[i] == spte) { - pte_list_desc_remove_entry(rmap_head, - desc, i, prev_desc); - return; - } - } - prev_desc = desc; - desc = desc->more; - } - pr_err("%s: %p many->many\n", __func__, spte); - BUG(); - } -} - static void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head, u64 *sptep) { @@ -1011,7 +883,7 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm, for (i = 0; i < desc->spte_count; i++) mmu_spte_clear_track_bits(kvm, desc->sptes[i]); next = desc->more; - mmu_free_pte_list_desc(desc); + free_pte_list_desc(desc); } out: /* rmap_head is meaningless now, remember to reset it */ @@ -1019,26 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm, return true; } -unsigned int pte_list_count(struct kvm_rmap_head *rmap_head) -{ - struct pte_list_desc *desc; - unsigned int count = 0; - - if (!rmap_head->val) - return 0; - else if (!(rmap_head->val & 1)) - return 1; - - desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); - - while (desc) { - count += desc->spte_count; - desc = desc->more; - } - - return count; -} - static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, const struct kvm_memory_slot *slot) { diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index dbaf6755c5a7..cd1c8f32269d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -166,7 +166,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, 
int min_level); void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages); -unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); extern int nx_huge_pages; static inline bool is_nx_huge_page_enabled(struct kvm *kvm) diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c new file mode 100644 index 000000000000..daa99dee0709 --- /dev/null +++ b/arch/x86/kvm/mmu/rmap.c @@ -0,0 +1,141 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "mmu.h" +#include "mmu_internal.h" +#include "mmutrace.h" +#include "rmap.h" +#include "spte.h" + +#include +#include + +/* + * About rmap_head encoding: + * + * If the bit zero of rmap_head->val is clear, then it points to the only spte + * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct + * pte_list_desc containing more mappings. + */ + +/* + * Returns the number of pointers in the rmap chain, not counting the new one. + */ +int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, + struct kvm_rmap_head *rmap_head) +{ + struct pte_list_desc *desc; + int count = 0; + + if (!rmap_head->val) { + rmap_printk("%p %llx 0->1\n", spte, *spte); + rmap_head->val = (unsigned long)spte; + } else if (!(rmap_head->val & 1)) { + rmap_printk("%p %llx 1->many\n", spte, *spte); + desc = kvm_mmu_memory_cache_alloc(cache); + desc->sptes[0] = (u64 *)rmap_head->val; + desc->sptes[1] = spte; + desc->spte_count = 2; + rmap_head->val = (unsigned long)desc | 1; + ++count; + } else { + rmap_printk("%p %llx many->many\n", spte, *spte); + desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); + while (desc->spte_count == PTE_LIST_EXT) { + count += PTE_LIST_EXT; + if (!desc->more) { + desc->more = kvm_mmu_memory_cache_alloc(cache); + desc = desc->more; + desc->spte_count = 0; + break; + } + desc = desc->more; + } + count += desc->spte_count; + desc->sptes[desc->spte_count++] = spte; + } + return count; +} + +void free_pte_list_desc(struct pte_list_desc *pte_list_desc) +{ + kmem_cache_free(pte_list_desc_cache, pte_list_desc); +} + +static void +pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head, + struct pte_list_desc *desc, int i, + struct pte_list_desc *prev_desc) +{ + int j = desc->spte_count - 1; + + desc->sptes[i] = desc->sptes[j]; + desc->sptes[j] = NULL; + desc->spte_count--; + if (desc->spte_count) + return; + if (!prev_desc && !desc->more) + rmap_head->val = 0; + else + if (prev_desc) + prev_desc->more = desc->more; + else + rmap_head->val = (unsigned long)desc->more | 1; + free_pte_list_desc(desc); +} + +void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head) +{ + struct pte_list_desc *desc; + struct pte_list_desc *prev_desc; + int i; + + if (!rmap_head->val) { + pr_err("%s: %p 0->BUG\n", __func__, spte); + BUG(); + } else if (!(rmap_head->val & 1)) { + rmap_printk("%p 1->0\n", spte); + if ((u64 *)rmap_head->val != spte) { + pr_err("%s: %p 1->BUG\n", __func__, spte); + BUG(); + } + rmap_head->val = 0; + } else { + rmap_printk("%p many->many\n", spte); + desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); + prev_desc = NULL; + while (desc) { + for (i = 0; i < desc->spte_count; ++i) { + if (desc->sptes[i] == spte) { + pte_list_desc_remove_entry(rmap_head, + desc, i, prev_desc); + return; + } + } + prev_desc = desc; + desc = desc->more; + } + pr_err("%s: %p many->many\n", __func__, spte); + BUG(); + } +} + +unsigned int pte_list_count(struct kvm_rmap_head *rmap_head) +{ + struct pte_list_desc *desc; + unsigned int count = 0; + + if (!rmap_head->val) + return 0; + else if (!(rmap_head->val & 
1)) return 1; + + desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); + + while (desc) { + count += desc->spte_count; + desc = desc->more; + } + + return count; +} + diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h new file mode 100644 index 000000000000..059765b6e066 --- /dev/null +++ b/arch/x86/kvm/mmu/rmap.h @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifndef __KVM_X86_MMU_RMAP_H +#define __KVM_X86_MMU_RMAP_H + +#include + +/* make pte_list_desc fit well in cache lines */ +#define PTE_LIST_EXT 14 + +/* + * Slight optimization of cacheline layout, by putting `more' and `spte_count' + * at the start; then accessing it will only use one single cacheline for + * either full (entries==PTE_LIST_EXT) case or entries<=6. + */ +struct pte_list_desc { + struct pte_list_desc *more; + /* + * Stores number of entries stored in the pte_list_desc. No need to be + * u64 but just for easier alignment. When PTE_LIST_EXT, means full. + */ + u64 spte_count; + u64 *sptes[PTE_LIST_EXT]; +}; + +static struct kmem_cache *pte_list_desc_cache; + +int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, + struct kvm_rmap_head *rmap_head); +void free_pte_list_desc(struct pte_list_desc *pte_list_desc); +void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head); +unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); + +#endif /* __KVM_X86_MMU_RMAP_H */
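A quick illustration of the rmap_head encoding this patch moves: bit zero of rmap_head->val discriminates between a lone spte pointer and a pointer to a pte_list_desc chain. The sketch below is a self-contained user-space rendering of that tagged-pointer idiom, not kernel code; the *_demo types and helper names are stand-ins for the structures above.

#include <stdio.h>

/* Stand-ins mirroring the kernel structures above; illustrative only. */
struct pte_list_desc_demo {
	struct pte_list_desc_demo *more;
	unsigned long long spte_count;
	unsigned long long *sptes[14];
};

struct kvm_rmap_head_demo {
	unsigned long val;
};

/* Bit 0 clear: val is the one and only sptep. Bit 0 set: val & ~1ul points
 * to a pte_list_desc chain. Pointer alignment guarantees bit 0 is free. */
static int rmap_is_many(const struct kvm_rmap_head_demo *head)
{
	return head->val & 1;
}

static struct pte_list_desc_demo *rmap_desc(const struct kvm_rmap_head_demo *head)
{
	return (struct pte_list_desc_demo *)(head->val & ~1ul);
}

int main(void)
{
	unsigned long long spte = 0x8000000000000007ull;
	struct kvm_rmap_head_demo head = { .val = (unsigned long)&spte };

	if (rmap_is_many(&head))
		printf("chain: desc at %p\n", (void *)rmap_desc(&head));
	else
		/* Single-entry chain: bit 0 clear, val is the sptep itself. */
		printf("single sptep, *sptep=%llx\n",
		       *(unsigned long long *)head.val);
	return 0;
}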
From patchwork Tue Dec 6 17:35:56 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:35:56 +0000
Subject: [PATCH 2/7] KVM: x86/MMU: Move rmap_iterator to rmap.h
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-3-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

In continuing to factor the rmap out of mmu.c, move the rmap_iterator and associated functions and macros into rmap.(c|h). No functional change intended.

Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 76 ----------------------------------------- arch/x86/kvm/mmu/rmap.c | 61 +++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/rmap.h | 18 ++++++++++ 3 files changed, 79 insertions(+), 76 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 90b3735d6064..c3a7f443a213 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -932,82 +932,6 @@ static void rmap_remove(struct kvm *kvm, u64 *spte) pte_list_remove(spte, rmap_head); } -/* - * Used by the following functions to iterate through the sptes linked by a - * rmap. All fields are private and not assumed to be used outside. - */ -struct rmap_iterator { - /* private fields */ - struct pte_list_desc *desc; /* holds the sptep if not NULL */ - int pos; /* index of the sptep */ -}; - -/* - * Iteration must be started by this function. This should also be used after - * removing/dropping sptes from the rmap link because in such cases the - * information in the iterator may not be valid. - * - * Returns sptep if found, NULL otherwise. - */ -static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head, - struct rmap_iterator *iter) -{ - u64 *sptep; - - if (!rmap_head->val) - return NULL; - - if (!(rmap_head->val & 1)) { - iter->desc = NULL; - sptep = (u64 *)rmap_head->val; - goto out; - } - - iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); - iter->pos = 0; - sptep = iter->desc->sptes[iter->pos]; -out: - BUG_ON(!is_shadow_present_pte(*sptep)); - return sptep; -} - -/* - * Must be used with a valid iterator: e.g. after rmap_get_first(). - * - * Returns sptep if found, NULL otherwise.
- */ -static u64 *rmap_get_next(struct rmap_iterator *iter) -{ - u64 *sptep; - - if (iter->desc) { - if (iter->pos < PTE_LIST_EXT - 1) { - ++iter->pos; - sptep = iter->desc->sptes[iter->pos]; - if (sptep) - goto out; - } - - iter->desc = iter->desc->more; - - if (iter->desc) { - iter->pos = 0; - /* desc->sptes[0] cannot be NULL */ - sptep = iter->desc->sptes[iter->pos]; - goto out; - } - } - - return NULL; -out: - BUG_ON(!is_shadow_present_pte(*sptep)); - return sptep; -} - -#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_) \ - for (_spte_ = rmap_get_first(_rmap_head_, _iter_); \ - _spte_; _spte_ = rmap_get_next(_iter_)) - static void drop_spte(struct kvm *kvm, u64 *sptep) { u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep); diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c index daa99dee0709..c3bad366b627 100644 --- a/arch/x86/kvm/mmu/rmap.c +++ b/arch/x86/kvm/mmu/rmap.c @@ -139,3 +139,64 @@ unsigned int pte_list_count(struct kvm_rmap_head *rmap_head) return count; } +/* + * Iteration must be started by this function. This should also be used after + * removing/dropping sptes from the rmap link because in such cases the + * information in the iterator may not be valid. + * + * Returns sptep if found, NULL otherwise. + */ +u64 *rmap_get_first(struct kvm_rmap_head *rmap_head, struct rmap_iterator *iter) +{ + u64 *sptep; + + if (!rmap_head->val) + return NULL; + + if (!(rmap_head->val & 1)) { + iter->desc = NULL; + sptep = (u64 *)rmap_head->val; + goto out; + } + + iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); + iter->pos = 0; + sptep = iter->desc->sptes[iter->pos]; +out: + BUG_ON(!is_shadow_present_pte(*sptep)); + return sptep; +} + +/* + * Must be used with a valid iterator: e.g. after rmap_get_first(). + * + * Returns sptep if found, NULL otherwise. + */ +u64 *rmap_get_next(struct rmap_iterator *iter) +{ + u64 *sptep; + + if (iter->desc) { + if (iter->pos < PTE_LIST_EXT - 1) { + ++iter->pos; + sptep = iter->desc->sptes[iter->pos]; + if (sptep) + goto out; + } + + iter->desc = iter->desc->more; + + if (iter->desc) { + iter->pos = 0; + /* desc->sptes[0] cannot be NULL */ + sptep = iter->desc->sptes[iter->pos]; + goto out; + } + } + + return NULL; +out: + BUG_ON(!is_shadow_present_pte(*sptep)); + return sptep; +} + diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h index 059765b6e066..13b265f3a95e 100644 --- a/arch/x86/kvm/mmu/rmap.h +++ b/arch/x86/kvm/mmu/rmap.h @@ -31,4 +31,22 @@ void free_pte_list_desc(struct pte_list_desc *pte_list_desc); void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head); unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); +/* + * Used by the following functions to iterate through the sptes linked by a + * rmap. All fields are private and not assumed to be used outside. 
+ */ +struct rmap_iterator { + /* private fields */ + struct pte_list_desc *desc; /* holds the sptep if not NULL */ + int pos; /* index of the sptep */ +}; + +u64 *rmap_get_first(struct kvm_rmap_head *rmap_head, + struct rmap_iterator *iter); +u64 *rmap_get_next(struct rmap_iterator *iter); + +#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_) \ + for (_spte_ = rmap_get_first(_rmap_head_, _iter_); \ + _spte_; _spte_ = rmap_get_next(_iter_)) + #endif /* __KVM_X86_MMU_RMAP_H */
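For orientation, here is the shape of a hypothetical consumer of the iterator interface this patch exports. count_writable_sptes() is illustrative, not part of the series; it assumes is_writable_pte() from spte.h behaves as its name suggests.

/* Hypothetical caller of the rmap.h iterator API above: count how many
 * sptes in one rmap chain are currently writable. Illustrative only. */
static unsigned int count_writable_sptes(struct kvm_rmap_head *rmap_head)
{
	struct rmap_iterator iter;
	unsigned int writable = 0;
	u64 *sptep;

	for_each_rmap_spte(rmap_head, &iter, sptep) {
		if (is_writable_pte(*sptep))
			writable++;
	}

	return writable;
}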
From patchwork Tue Dec 6 17:35:57 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:35:57 +0000
Subject: [PATCH 3/7] KVM: x86/MMU: Move gfn_to_rmap() to rmap.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-4-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move gfn_to_rmap() to rmap.c. While the function is not part of manipulating the rmap, it is the main way that the MMU gets pointers to the rmaps. No functional change intended.

Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 9 --------- arch/x86/kvm/mmu/rmap.c | 8 ++++++++ arch/x86/kvm/mmu/rmap.h | 2 ++ 3 files changed, 10 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c3a7f443a213..f8d7201210c8 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -891,15 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm, return true; } -static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, - const struct kvm_memory_slot *slot) -{ - unsigned long idx; - - idx = gfn_to_index(gfn, slot->base_gfn, level); - return &slot->arch.rmap[level - PG_LEVEL_4K][idx]; -} - static bool rmap_can_add(struct kvm_vcpu *vcpu) { struct kvm_mmu_memory_cache *mc; diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c index c3bad366b627..272e89147d96 100644 --- a/arch/x86/kvm/mmu/rmap.c +++ b/arch/x86/kvm/mmu/rmap.c @@ -200,3 +200,11 @@ u64 *rmap_get_next(struct rmap_iterator *iter) return sptep; } +struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, + const struct kvm_memory_slot *slot) +{ + unsigned long idx; + + idx = gfn_to_index(gfn, slot->base_gfn, level); + return &slot->arch.rmap[level - PG_LEVEL_4K][idx]; +} diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h index 13b265f3a95e..45732eda57e5 100644 --- a/arch/x86/kvm/mmu/rmap.h +++ b/arch/x86/kvm/mmu/rmap.h @@ -49,4 +49,6 @@ u64 *rmap_get_next(struct rmap_iterator *iter); for (_spte_ = rmap_get_first(_rmap_head_, _iter_); \ _spte_; _spte_ = rmap_get_next(_iter_)) +struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, + const struct kvm_memory_slot *slot); #endif /* __KVM_X86_MMU_RMAP_H */
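The index arithmetic behind gfn_to_rmap() is easy to sanity-check by hand. A reduced sketch, with a worked example; the shift value is what KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) works out to on x86, and the demo_* name is hypothetical:

/* Mirrors gfn_to_index() for a 2M (PG_LEVEL_2M) mapping; illustrative. */
static unsigned long demo_gfn_to_index_2m(unsigned long gfn, unsigned long base_gfn)
{
	const unsigned int shift = 9;	/* KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) */

	return (gfn >> shift) - (base_gfn >> shift);
}

/*
 * Example: demo_gfn_to_index_2m(0x1200, 0x1000) == (9 - 8) == 1, i.e. the
 * second 2M rmap head in slot->arch.rmap[PG_LEVEL_2M - PG_LEVEL_4K].
 */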
From patchwork Tue Dec 6 17:35:58 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:35:58 +0000
Subject: [PATCH 4/7] KVM: x86/MMU: Move rmap_can_add() and rmap_remove() to rmap.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-5-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move the functions to check if an entry can be added to an rmap and for removing elements from an rmap to rmap.(c|h). No functional change intended.
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 34 +-------------------------------- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/rmap.c | 32 +++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/rmap.h | 3 +++ 4 files changed, 37 insertions(+), 33 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f8d7201210c8..52e487d89d54 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -658,7 +658,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) static bool sp_has_gptes(struct kvm_mmu_page *sp); -static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) +gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) { if (sp->role.passthrough) return sp->gfn; @@ -891,38 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm, return true; } -static bool rmap_can_add(struct kvm_vcpu *vcpu) -{ - struct kvm_mmu_memory_cache *mc; - - mc = &vcpu->arch.mmu_pte_list_desc_cache; - return kvm_mmu_memory_cache_nr_free_objects(mc); -} - -static void rmap_remove(struct kvm *kvm, u64 *spte) -{ - struct kvm_memslots *slots; - struct kvm_memory_slot *slot; - struct kvm_mmu_page *sp; - gfn_t gfn; - struct kvm_rmap_head *rmap_head; - - sp = sptep_to_sp(spte); - gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte)); - - /* - * Unlike rmap_add, rmap_remove does not run in the context of a vCPU - * so we have to determine which memslots to use based on context - * information in sp->role. - */ - slots = kvm_memslots_for_spte_role(kvm, sp->role); - - slot = __gfn_to_memslot(slots, gfn); - rmap_head = gfn_to_rmap(gfn, sp->role.level, slot); - - pte_list_remove(spte, rmap_head); -} - static void drop_spte(struct kvm *kvm, u64 *sptep) { u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index cd1c8f32269d..3de703c2a5d4 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -318,4 +318,5 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); +gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index); #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c index 272e89147d96..6833676aa9ea 100644 --- a/arch/x86/kvm/mmu/rmap.c +++ b/arch/x86/kvm/mmu/rmap.c @@ -208,3 +208,35 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, idx = gfn_to_index(gfn, slot->base_gfn, level); return &slot->arch.rmap[level - PG_LEVEL_4K][idx]; } + +bool rmap_can_add(struct kvm_vcpu *vcpu) +{ + struct kvm_mmu_memory_cache *mc; + + mc = &vcpu->arch.mmu_pte_list_desc_cache; + return kvm_mmu_memory_cache_nr_free_objects(mc); +} + +void rmap_remove(struct kvm *kvm, u64 *spte) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *slot; + struct kvm_mmu_page *sp; + gfn_t gfn; + struct kvm_rmap_head *rmap_head; + + sp = sptep_to_sp(spte); + gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte)); + + /* + * Unlike rmap_add, rmap_remove does not run in the context of a vCPU + * so we have to determine which memslots to use based on context + * information in sp->role. 
+ */ + slots = kvm_memslots_for_spte_role(kvm, sp->role); + + slot = __gfn_to_memslot(slots, gfn); + rmap_head = gfn_to_rmap(gfn, sp->role.level, slot); + + pte_list_remove(spte, rmap_head); +} diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h index 45732eda57e5..81df186ba3c3 100644 --- a/arch/x86/kvm/mmu/rmap.h +++ b/arch/x86/kvm/mmu/rmap.h @@ -51,4 +51,7 @@ u64 *rmap_get_next(struct rmap_iterator *iter); struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, const struct kvm_memory_slot *slot); + +bool rmap_can_add(struct kvm_vcpu *vcpu); +void rmap_remove(struct kvm *kvm, u64 *spte); #endif /* __KVM_X86_MMU_RMAP_H */
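One note on how these two exports are meant to be used: rmap_can_add() only reports whether the per-vCPU pte_list_desc cache has a free object, so callers top the cache up before walking into pte_list_add(). A hypothetical call-site shape, assuming kvm_mmu_topup_memory_cache() from generic KVM code behaves as usual (this helper is not taken from the series):

/* Hypothetical: make sure a later pte_list_add() cannot fail for lack of
 * a pte_list_desc. Assumes the generic KVM memory-cache API. */
static int demo_prepare_rmap_add(struct kvm_vcpu *vcpu)
{
	if (rmap_can_add(vcpu))
		return 0;

	return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache, 1);
}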
From patchwork Tue Dec 6 17:35:59 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:35:59 +0000
Subject: [PATCH 5/7] KVM: x86/MMU: Move the rmap walk iterator out of mmu.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-6-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move slot_rmap_walk_iterator and its associated functions out of mmu.c to rmap.(c|h). No functional change intended.

Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 73 ----------------------------------------- arch/x86/kvm/mmu/rmap.c | 43 ++++++++++++++++++++++++ arch/x86/kvm/mmu/rmap.h | 36 ++++++++++++++++++++ 3 files changed, 79 insertions(+), 73 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 52e487d89d54..88da2abc2375 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1198,79 +1198,6 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, return need_flush; } -struct slot_rmap_walk_iterator { - /* input fields. */ - const struct kvm_memory_slot *slot; - gfn_t start_gfn; - gfn_t end_gfn; - int start_level; - int end_level; - - /* output fields. */ - gfn_t gfn; - struct kvm_rmap_head *rmap; - int level; - - /* private field. */ - struct kvm_rmap_head *end_rmap; -}; - -static void -rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level) -{ - iterator->level = level; - iterator->gfn = iterator->start_gfn; - iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot); - iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot); -} - -static void -slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator, - const struct kvm_memory_slot *slot, int start_level, - int end_level, gfn_t start_gfn, gfn_t end_gfn) -{ - iterator->slot = slot; - iterator->start_level = start_level; - iterator->end_level = end_level; - iterator->start_gfn = start_gfn; - iterator->end_gfn = end_gfn; - - rmap_walk_init_level(iterator, iterator->start_level); -} - -static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator) -{ - return !!iterator->rmap; -} - -static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator) -{ - while (++iterator->rmap <= iterator->end_rmap) { - iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level)); - - if (iterator->rmap->val) - return; - } - - if (++iterator->level > iterator->end_level) { - iterator->rmap = NULL; - return; - } - - rmap_walk_init_level(iterator, iterator->level); -} - -#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_, \ - _start_gfn, _end_gfn, _iter_) \ - for (slot_rmap_walk_init(_iter_, _slot_, _start_level_, \ - _end_level_, _start_gfn, _end_gfn); \ - slot_rmap_walk_okay(_iter_); \ - slot_rmap_walk_next(_iter_)) - -typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head, - struct kvm_memory_slot *slot, gfn_t gfn, - int level, pte_t pte); - static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range, rmap_handler_t handler) diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c index 6833676aa9ea..91af5b32cffb 100644 --- a/arch/x86/kvm/mmu/rmap.c +++ b/arch/x86/kvm/mmu/rmap.c @@ -240,3 +240,46 @@ void rmap_remove(struct kvm *kvm, u64 *spte) pte_list_remove(spte, rmap_head); } +
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level) +{ + iterator->level = level; + iterator->gfn = iterator->start_gfn; + iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot); + iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot); +} + +void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator, + const struct kvm_memory_slot *slot, int start_level, + int end_level, gfn_t start_gfn, gfn_t end_gfn) +{ + iterator->slot = slot; + iterator->start_level = start_level; + iterator->end_level = end_level; + iterator->start_gfn = start_gfn; + iterator->end_gfn = end_gfn; + + rmap_walk_init_level(iterator, iterator->start_level); +} + +bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator) +{ + return !!iterator->rmap; +} + +void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator) +{ + while (++iterator->rmap <= iterator->end_rmap) { + iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level)); + + if (iterator->rmap->val) + return; + } + + if (++iterator->level > iterator->end_level) { + iterator->rmap = NULL; + return; + } + + rmap_walk_init_level(iterator, iterator->level); +} diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h index 81df186ba3c3..dc4bf7e609ec 100644 --- a/arch/x86/kvm/mmu/rmap.h +++ b/arch/x86/kvm/mmu/rmap.h @@ -54,4 +54,40 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level, bool rmap_can_add(struct kvm_vcpu *vcpu); void rmap_remove(struct kvm *kvm, u64 *spte); + +struct slot_rmap_walk_iterator { + /* input fields. */ + const struct kvm_memory_slot *slot; + gfn_t start_gfn; + gfn_t end_gfn; + int start_level; + int end_level; + + /* output fields. */ + gfn_t gfn; + struct kvm_rmap_head *rmap; + int level; + + /* private field. 
*/ + struct kvm_rmap_head *end_rmap; +}; + +void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level); +void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator, + const struct kvm_memory_slot *slot, int start_level, + int end_level, gfn_t start_gfn, gfn_t end_gfn); +bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator); +void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator); + +#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_, \ + _start_gfn, _end_gfn, _iter_) \ + for (slot_rmap_walk_init(_iter_, _slot_, _start_level_, \ + _end_level_, _start_gfn, _end_gfn); \ + slot_rmap_walk_okay(_iter_); \ + slot_rmap_walk_next(_iter_)) + +typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + struct kvm_memory_slot *slot, gfn_t gfn, + int level, pte_t pte); + #endif /* __KVM_X86_MMU_RMAP_H */
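To see how the exported walk iterator is meant to be driven, here is a hypothetical whole-slot walk. demo_count_rmap_entries() is illustrative and only uses interfaces declared in rmap.h above; the loop bounds mirror how mmu.c drives the same macro.

/* Hypothetical: total rmap entries backing a memslot at every page size. */
static unsigned long demo_count_rmap_entries(const struct kvm_memory_slot *slot)
{
	struct slot_rmap_walk_iterator iter;
	unsigned long total = 0;

	/* Walk every rmap head for the slot, 4K through 1G granularity. */
	for_each_slot_rmap_range(slot, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
				 slot->base_gfn,
				 slot->base_gfn + slot->npages - 1, &iter)
		total += pte_list_count(iter.rmap);

	return total;
}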
From patchwork Tue Dec 6 17:36:00 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:36:00 +0000
Subject: [PATCH 6/7] KVM: x86/MMU: Move rmap zap operations to rmap.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-7-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move the various rmap zap functions to rmap.c. These functions are less "pure" rmap operations in that they also contain some SPTE manipulation; however, they're mostly about rmap / pte list manipulation. No functional change intended.

Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 51 +--------------------------------- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/rmap.c | 50 +++++++++++++++++++++++++++++++++- arch/x86/kvm/mmu/rmap.h | 9 +++++- 4 files changed, 59 insertions(+), 52 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 88da2abc2375..12082314d82d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -512,7 +512,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte) * state bits, it is used to clear the last level sptep. * Returns the old PTE. */ -static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep) +u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep) { kvm_pfn_t pfn; u64 old_spte = *sptep; @@ -855,42 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn, return slot; } -static void kvm_zap_one_rmap_spte(struct kvm *kvm, - struct kvm_rmap_head *rmap_head, u64 *sptep) -{ - mmu_spte_clear_track_bits(kvm, sptep); - pte_list_remove(sptep, rmap_head); -} - -/* Return true if at least one SPTE was zapped, false otherwise */ -static bool kvm_zap_all_rmap_sptes(struct kvm *kvm, - struct kvm_rmap_head *rmap_head) -{ - struct pte_list_desc *desc, *next; - int i; - - if (!rmap_head->val) - return false; - - if (!(rmap_head->val & 1)) { - mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val); - goto out; - } - - desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); - - for (; desc; desc = next) { - for (i = 0; i < desc->spte_count; i++) - mmu_spte_clear_track_bits(kvm, desc->sptes[i]); - next = desc->more; - free_pte_list_desc(desc); - } -out: - /* rmap_head is meaningless now, remember to reset it */ - rmap_head->val = 0; - return true; -} - static void drop_spte(struct kvm *kvm, u64 *sptep) { u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep); @@ -1145,19 +1109,6 @@ static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn) return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K); } -static bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, - const struct kvm_memory_slot *slot) -{ - return kvm_zap_all_rmap_sptes(kvm, rmap_head); -} - -static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, - struct kvm_memory_slot *slot, gfn_t gfn, int level, - pte_t unused) -{ - return __kvm_zap_rmap(kvm, rmap_head, slot); -} - static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, struct kvm_memory_slot *slot, gfn_t gfn, int
level, pte_t pte) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 3de703c2a5d4..a219c8e556e9 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -319,4 +319,5 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index); +u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep); #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c index 91af5b32cffb..9cc4252aaabb 100644 --- a/arch/x86/kvm/mmu/rmap.c +++ b/arch/x86/kvm/mmu/rmap.c @@ -56,7 +56,7 @@ int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, return count; } -void free_pte_list_desc(struct pte_list_desc *pte_list_desc) +static void free_pte_list_desc(struct pte_list_desc *pte_list_desc) { kmem_cache_free(pte_list_desc_cache, pte_list_desc); } @@ -283,3 +283,51 @@ void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator) rmap_walk_init_level(iterator, iterator->level); } + +void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + u64 *sptep) +{ + mmu_spte_clear_track_bits(kvm, sptep); + pte_list_remove(sptep, rmap_head); +} + +/* Return true if at least one SPTE was zapped, false otherwise */ +bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head) +{ + struct pte_list_desc *desc, *next; + int i; + + if (!rmap_head->val) + return false; + + if (!(rmap_head->val & 1)) { + mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val); + goto out; + } + + desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); + + for (; desc; desc = next) { + for (i = 0; i < desc->spte_count; i++) + mmu_spte_clear_track_bits(kvm, desc->sptes[i]); + next = desc->more; + free_pte_list_desc(desc); + } +out: + /* rmap_head is meaningless now, remember to reset it */ + rmap_head->val = 0; + return true; +} + +bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + const struct kvm_memory_slot *slot) +{ + return kvm_zap_all_rmap_sptes(kvm, rmap_head); +} + +bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + struct kvm_memory_slot *slot, gfn_t gfn, int level, + pte_t unused) +{ + return __kvm_zap_rmap(kvm, rmap_head, slot); +} diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h index dc4bf7e609ec..a9bf48494e1a 100644 --- a/arch/x86/kvm/mmu/rmap.h +++ b/arch/x86/kvm/mmu/rmap.h @@ -27,7 +27,6 @@ static struct kmem_cache *pte_list_desc_cache; int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, struct kvm_rmap_head *rmap_head); -void free_pte_list_desc(struct pte_list_desc *pte_list_desc); void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head); unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); @@ -90,4 +89,12 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head, struct kvm_memory_slot *slot, gfn_t gfn, int level, pte_t pte); +void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + u64 *sptep); +bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head); +bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + const struct kvm_memory_slot *slot); +bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, + struct kvm_memory_slot *slot, gfn_t gfn, int level, + pte_t unused); #endif /* __KVM_X86_MMU_RMAP_H */
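The rmap_handler_t typedef plus the zap exports make it possible to write range handlers outside mmu.c. A hypothetical handler matching the signature, shown only to make the typedef concrete (it is functionally the same as kvm_zap_rmap() above, not a new mechanism):

/* Hypothetical rmap_handler_t instance; mirrors kvm_zap_rmap() above. */
static bool demo_zap_handler(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
			     struct kvm_memory_slot *slot, gfn_t gfn,
			     int level, pte_t unused)
{
	return __kvm_zap_rmap(kvm, rmap_head, slot);
}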
From patchwork Tue Dec 6 17:36:01 2022
From: Ben Gardon
Date: Tue, 6 Dec 2022 17:36:01 +0000
Subject: [PATCH 7/7] KVM: x86/MMU: Move rmap_add() to rmap.c
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-8-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move rmap_add() to rmap.c to complete the migration of the various rmap operations out of mmu.c. No functional change intended.
 arch/x86/kvm/mmu/mmu.c          | 45 ++++-----------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  6 +++++
 arch/x86/kvm/mmu/rmap.c         | 37 ++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/rmap.h         |  8 +++++-
 4 files changed, 54 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 12082314d82d..b122c90a3e5f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -215,13 +215,13 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
-static inline bool kvm_available_flush_tlb_with_range(void)
+inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-					     struct kvm_tlb_range *range)
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				      struct kvm_tlb_range *range)
 {
 	int ret = -ENOTSUPP;
 
@@ -695,8 +695,8 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 	return sp->role.access;
 }
 
-static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
-					 gfn_t gfn, unsigned int access)
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
 		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
@@ -1217,41 +1217,6 @@ static bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return false;
 }
 
-#define RMAP_RECYCLE_THRESHOLD 1000
-
-static void __rmap_add(struct kvm *kvm,
-		       struct kvm_mmu_memory_cache *cache,
-		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_page *sp;
-	struct kvm_rmap_head *rmap_head;
-	int rmap_count;
-
-	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
-	kvm_update_page_stats(kvm, sp->role.level, 1);
-
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(cache, spte, rmap_head);
-
-	if (rmap_count > kvm->stat.max_mmu_rmap_size)
-		kvm->stat.max_mmu_rmap_size = rmap_count;
-	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
-	}
-}
-
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
-
-	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
-}
-
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a219c8e556e9..03da1f8b066e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -320,4 +320,10 @@ void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access);
+
+inline bool kvm_available_flush_tlb_with_range(void);
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				      struct kvm_tlb_range *range);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 9cc4252aaabb..136c5f4f867b 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -292,7 +292,8 @@ void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 }
 
 /* Return true if at least one SPTE was zapped, false otherwise */
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
+				   struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc, *next;
 	int i;
@@ -331,3 +332,37 @@ bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 {
 	return __kvm_zap_rmap(kvm, rmap_head, slot);
 }
+
+#define RMAP_RECYCLE_THRESHOLD 1000
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access)
+{
+	struct kvm_mmu_page *sp;
+	struct kvm_rmap_head *rmap_head;
+	int rmap_count;
+
+	sp = sptep_to_sp(spte);
+	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
+	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
+
+	if (rmap_count > kvm->stat.max_mmu_rmap_size)
+		kvm->stat.max_mmu_rmap_size = rmap_count;
+	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
+		kvm_zap_all_rmap_sptes(kvm, rmap_head);
+		kvm_flush_remote_tlbs_with_address(
+			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
+}
+
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access)
+{
+	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
+
+	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index a9bf48494e1a..b06897dad76a 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -91,10 +91,16 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm,
 			       struct kvm_rmap_head *rmap_head,
 			       struct kvm_memory_slot *slot, gfn_t gfn,
 			       int level, pte_t pte);
 
 void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			   u64 *sptep);
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head);
 bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		    const struct kvm_memory_slot *slot);
 bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
 		  pte_t unused);
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access);
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
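With the series complete, rmap.h exports both the concrete handlers and the
rmap_handler_t shape they share; kvm_zap_rmap(), for instance, matches the
typedef so slot walkers can dispatch to it uniformly. The standalone sketch
below models that dispatch pattern with toy types; it illustrates only the
shape and is not the kernel's slot_rmap_walk_* machinery.

/* handler_sketch.c - toy model of the rmap_handler_t dispatch pattern. */
#include <stdbool.h>
#include <stdio.h>

struct toy_rmap_head {
	unsigned long val;	/* zero means no SPTEs, as in the kernel */
};

/* Mirrors rmap_handler_t's shape: head, identity, level in; flush out. */
typedef bool (*toy_handler_t)(struct toy_rmap_head *rmap_head,
			      unsigned long gfn, int level);

static bool toy_zap_rmap(struct toy_rmap_head *rmap_head,
			 unsigned long gfn, int level)
{
	bool flush = rmap_head->val != 0;

	(void)gfn;
	(void)level;
	rmap_head->val = 0;	/* kernel: kvm_zap_all_rmap_sptes() */
	return flush;		/* true tells the caller a TLB flush is due */
}

/* A walker applies one handler across a range, OR-ing the flush results. */
static bool toy_walk(struct toy_rmap_head *heads, int n, toy_handler_t fn)
{
	bool flush = false;
	int i;

	for (i = 0; i < n; i++)
		flush |= fn(&heads[i], (unsigned long)i, 1);
	return flush;
}

int main(void)
{
	struct toy_rmap_head heads[4] = { { 1 }, { 0 }, { 3 }, { 0 } };

	printf("flush needed: %d\n", toy_walk(heads, 4, toy_zap_rmap));
	return 0;
}

A walker written against the typedef never cares which handler it drives;
zapping, aging, and write-protection all plug into the same loop, which is
what lets rmap.c own the iteration while mmu.c keeps only the policy.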