From patchwork Tue Dec 21 15:11:19 2021
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12689879
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
    Jeff Layton, "J. Bruce Fields", Andrew Morton, Yu Zhang, Chao Peng,
    "Kirill A. Shutemov", luto@kernel.org, john.ji@intel.com,
    susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
    ak@linux.intel.com, david@redhat.com
Subject: [PATCH v3 09/15] KVM: Implement fd-based memory invalidation
Date: Tue, 21 Dec 2021 23:11:19 +0800
Message-Id: <20211221151125.19446-10-chao.p.peng@linux.intel.com>
In-Reply-To: <20211221151125.19446-1-chao.p.peng@linux.intel.com>
References: <20211221151125.19446-1-chao.p.peng@linux.intel.com>

KVM gets notified when userspace punches a hole in an fd that is used for
guest memory. KVM should then invalidate the mapping in the secondary MMU
page tables. This is the same logic as MMU notifier invalidation, except
that fd-related information is carried around to describe the memory
range. KVM can hence reuse most of the existing MMU notifier invalidation
code, including looping through the memslots and then calling into
kvm_unmap_gfn_range(), which should do whatever is needed for fd-based
memory unmapping (e.g. for private memory managed by TDX it may need to
call into the SEAM module).
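The triggering operation described above is an ordinary hole punch on the
backing fd. Below is a minimal userspace sketch of that flow, assuming the
fd has already been registered as fd-based guest memory via the ioctl
extensions introduced elsewhere in this series; only memfd_create() and
fallocate() are standard APIs here, the registration step is only hinted
at in a comment:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t size = 16UL << 20;	/* 16 MiB of guest memory */
	int fd = memfd_create("guest-mem", MFD_CLOEXEC);

	if (fd < 0 || ftruncate(fd, size) < 0) {
		perror("memfd");
		return 1;
	}

	/*
	 * Register @fd as the backing store of a guest memory slot here,
	 * using the fd-based memslot ioctls from the rest of this series.
	 */

	/* Punch a 2 MiB hole; KVM must unmap the corresponding GFN range. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      4UL << 20, 2UL << 20) < 0) {
		perror("fallocate");
		return 1;
	}

	close(fd);
	return 0;
}

With this patch applied, the fallocate() above reaches
memfd_invalidate_page_range() and from there kvm_memfd_invalidate_range(),
which walks the memslots and unmaps the affected GFN range.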
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 include/linux/kvm_host.h |  8 ++++-
 virt/kvm/kvm_main.c      | 69 +++++++++++++++++++++++++++++++---------
 virt/kvm/memfd.c         |  2 ++
 3 files changed, 63 insertions(+), 16 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7279f46f35d3..d9573305e273 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -229,7 +229,7 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
-#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_MEMFD_OPS)
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
@@ -1874,4 +1874,10 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES 65536
 
+#ifdef CONFIG_MEMFD_OPS
+int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
+			       unsigned long start, unsigned long end);
+#endif /* CONFIG_MEMFD_OPS */
+
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 59f01e68337b..d84cb867b686 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -453,7 +453,8 @@ void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_destroy);
 
-#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
+#if defined(CONFIG_MEMFD_OPS) ||\
+	(defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER))
 
 typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
@@ -564,6 +565,30 @@ static __always_inline int __kvm_handle_useraddr_range(struct kvm *kvm,
 	/* The notifiers are averse to booleans. :-( */
 	return (int)ret;
 }
+
+static void mn_active_invalidate_count_inc(struct kvm *kvm)
+{
+	spin_lock(&kvm->mn_invalidate_lock);
+	kvm->mn_active_invalidate_count++;
+	spin_unlock(&kvm->mn_invalidate_lock);
+
+}
+
+static void mn_active_invalidate_count_dec(struct kvm *kvm)
+{
+	bool wake;
+
+	spin_lock(&kvm->mn_invalidate_lock);
+	wake = (--kvm->mn_active_invalidate_count == 0);
+	spin_unlock(&kvm->mn_invalidate_lock);
+
+	/*
+	 * There can only be one waiter, since the wait happens under
+	 * slots_lock.
+	 */
+	if (wake)
+		rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);
+}
 #endif
 
 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
@@ -701,9 +726,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	 *
 	 * Pairs with the decrement in range_end().
 	 */
-	spin_lock(&kvm->mn_invalidate_lock);
-	kvm->mn_active_invalidate_count++;
-	spin_unlock(&kvm->mn_invalidate_lock);
+	mn_active_invalidate_count_inc(kvm);
 
 	__kvm_handle_useraddr_range(kvm, &useraddr_range);
 
@@ -742,21 +765,11 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 		.may_block	= mmu_notifier_range_blockable(range),
 		.inode		= NULL,
 	};
-	bool wake;
 
 	__kvm_handle_useraddr_range(kvm, &useraddr_range);
 
 	/* Pairs with the increment in range_start(). */
-	spin_lock(&kvm->mn_invalidate_lock);
-	wake = (--kvm->mn_active_invalidate_count == 0);
-	spin_unlock(&kvm->mn_invalidate_lock);
-
-	/*
-	 * There can only be one waiter, since the wait happens under
-	 * slots_lock.
-	 */
-	if (wake)
-		rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);
+	mn_active_invalidate_count_dec(kvm);
 
 	BUG_ON(kvm->mmu_notifier_count < 0);
 }
@@ -841,6 +854,32 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
 
 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 
+#ifdef CONFIG_MEMFD_OPS
+int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
+			       unsigned long start, unsigned long end)
+{
+	int ret;
+	const struct kvm_useraddr_range useraddr_range = {
+		.start = start,
+		.end = end,
+		.pte = __pte(0),
+		.handler = kvm_unmap_gfn_range,
+		.on_lock = (void *)kvm_null_fn,
+		.flush_on_ret = true,
+		.may_block = false,
+		.inode = inode,
+	};
+
+
+	/* Prevent memslot modification */
+	mn_active_invalidate_count_inc(kvm);
+	ret = __kvm_handle_useraddr_range(kvm, &useraddr_range);
+	mn_active_invalidate_count_dec(kvm);
+
+	return ret;
+}
+#endif /* CONFIG_MEMFD_OPS */
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 static int kvm_pm_notifier_call(struct notifier_block *bl,
 				unsigned long state,
diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
index 96a1a5bee0f7..d092a9b6f496 100644
--- a/virt/kvm/memfd.c
+++ b/virt/kvm/memfd.c
@@ -16,6 +16,8 @@ static const struct memfd_pfn_ops *memfd_ops;
 static void memfd_invalidate_page_range(struct inode *inode, void *owner,
 					pgoff_t start, pgoff_t end)
 {
+	kvm_memfd_invalidate_range(owner, inode, start >> PAGE_SHIFT,
+				   end >> PAGE_SHIFT);
 }
 
 static void memfd_fallocate(struct inode *inode, void *owner,
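The mn_active_invalidate_count_inc()/_dec() helpers factored out above
implement a single-waiter pattern: every in-flight invalidation bumps a
counter under mn_invalidate_lock, and the final decrement wakes the one
task that may be waiting (under slots_lock) before it modifies the
memslots. A rough userspace analogue of that pattern, using a pthread
mutex and condition variable in place of the kernel's spinlock and
rcuwait (an illustrative sketch, not code from this patch):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  zero = PTHREAD_COND_INITIALIZER;
static unsigned long   active_invalidate_count;

/* Mirrors mn_active_invalidate_count_inc(): one more invalidation in flight. */
static void active_invalidate_inc(void)
{
	pthread_mutex_lock(&lock);
	active_invalidate_count++;
	pthread_mutex_unlock(&lock);
}

/* Mirrors mn_active_invalidate_count_dec(): the last one out wakes the waiter. */
static void active_invalidate_dec(void)
{
	bool wake;

	pthread_mutex_lock(&lock);
	wake = (--active_invalidate_count == 0);
	pthread_mutex_unlock(&lock);

	/* There can only be one waiter, so a single signal is enough. */
	if (wake)
		pthread_cond_signal(&zero);
}

/* The single waiter: block until no invalidation is in flight. */
static void wait_for_no_invalidate(void)
{
	pthread_mutex_lock(&lock);
	while (active_invalidate_count != 0)
		pthread_cond_wait(&zero, &lock);
	pthread_mutex_unlock(&lock);
}

static void *invalidator(void *arg)
{
	(void)arg;
	active_invalidate_inc();
	usleep(10000);		/* pretend to unmap a range */
	active_invalidate_dec();
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, invalidator, NULL);
	wait_for_no_invalidate();	/* a memslot update would go here */
	pthread_join(t, NULL);
	printf("no invalidation in flight\n");
	return 0;
}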