From patchwork Tue Dec 21 15:11:16 2021
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12689873
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
        linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
        Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
        Jeff Layton, "J. Bruce Fields", Andrew Morton, Yu Zhang, Chao Peng,
        "Kirill A. Shutemov", luto@kernel.org, john.ji@intel.com,
        susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
        ak@linux.intel.com, david@redhat.com
Subject: [PATCH v3 06/15] KVM: Refactor hva based memory invalidation code
Date: Tue, 21 Dec 2021 23:11:16 +0800
Message-Id: <20211221151125.19446-7-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211221151125.19446-1-chao.p.peng@linux.intel.com>
References: <20211221151125.19446-1-chao.p.peng@linux.intel.com>

The purpose of this patch is to let fd-based memslots reuse the same
mmu_notifier-based guest memory invalidation code for private pages. No
functional change intended: 'hva' is merely renamed to the more neutral
'useraddr' so that it can also cover the 'offset' into an fd that private
pages live in.

Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
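Note: below is a stand-alone sketch (user-space C, not kernel code) of the
address-to-gfn conversion that useraddr_to_gfn_memslot() performs after this
change. The struct is trimmed to the fields the helper reads, and PAGE_SHIFT
plus the sample numbers are made up for illustration. In this patch the
mmu_notifier paths still pass addr_is_hva = true; passing false to resolve an
fd 'offset' is what later patches in the series are expected to do.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

typedef uint64_t gfn_t;

struct kvm_memory_slot {
        gfn_t base_gfn;                 /* first guest frame number of the slot */
        unsigned long userspace_addr;   /* hva base, used when addr_is_hva */
        unsigned long file_ofs;         /* fd offset base, used otherwise */
};

/* Mirrors useraddr_to_gfn_memslot(): pick the right base, then shift. */
static inline gfn_t
useraddr_to_gfn_memslot(unsigned long useraddr, struct kvm_memory_slot *slot,
                        bool addr_is_hva)
{
        unsigned long useraddr_base = addr_is_hva ? slot->userspace_addr
                                                  : slot->file_ofs;
        gfn_t gfn_offset = (useraddr - useraddr_base) >> PAGE_SHIFT;

        return slot->base_gfn + gfn_offset;
}

int main(void)
{
        struct kvm_memory_slot slot = {
                .base_gfn       = 0x100,
                .userspace_addr = 0x7f0000000000UL,
                .file_ofs       = 0x200000UL,
        };

        /* The same guest frame, reached via an hva and via an fd offset. */
        printf("hva    -> gfn 0x%llx\n", (unsigned long long)
               useraddr_to_gfn_memslot(0x7f0000003000UL, &slot, true));
        printf("offset -> gfn 0x%llx\n", (unsigned long long)
               useraddr_to_gfn_memslot(0x203000UL, &slot, false));
        return 0;
}
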
 include/linux/kvm_host.h |  8 +++++--
 virt/kvm/kvm_main.c      | 47 +++++++++++++++++++++-------------------
 2 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b0b63c9a160f..7279f46f35d3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1327,9 +1327,13 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 }
 
 static inline gfn_t
-hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
+useraddr_to_gfn_memslot(unsigned long useraddr, struct kvm_memory_slot *slot,
+                        bool addr_is_hva)
 {
-        gfn_t gfn_offset = (hva - slot->userspace_addr) >> PAGE_SHIFT;
+        unsigned long useraddr_base = addr_is_hva ? slot->userspace_addr
+                                                  : slot->file_ofs;
+
+        gfn_t gfn_offset = (useraddr - useraddr_base) >> PAGE_SHIFT;
 
         return slot->base_gfn + gfn_offset;
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 68018ee7f0cd..856f89ed8ab5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -471,16 +471,16 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
         srcu_read_unlock(&kvm->srcu, idx);
 }
 
-typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
+typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
 typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
                              unsigned long end);
 
-struct kvm_hva_range {
+struct kvm_useraddr_range {
         unsigned long start;
         unsigned long end;
         pte_t pte;
-        hva_handler_t handler;
+        gfn_handler_t handler;
         on_lock_fn_t on_lock;
         bool flush_on_ret;
         bool may_block;
@@ -499,8 +499,8 @@ static void kvm_null_fn(void)
 }
 #define IS_KVM_NULL_FN(fn) ((fn) == (void *)kvm_null_fn)
 
-static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
-                                                  const struct kvm_hva_range *range)
+static __always_inline int __kvm_handle_useraddr_range(struct kvm *kvm,
+                                        const struct kvm_useraddr_range *range)
 {
         bool ret = false, locked = false;
         struct kvm_gfn_range gfn_range;
@@ -518,12 +518,12 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
         for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
                 slots = __kvm_memslots(kvm, i);
                 kvm_for_each_memslot(slot, slots) {
-                        unsigned long hva_start, hva_end;
+                        unsigned long useraddr_start, useraddr_end;
 
-                        hva_start = max(range->start, slot->userspace_addr);
-                        hva_end = min(range->end, slot->userspace_addr +
+                        useraddr_start = max(range->start, slot->userspace_addr);
+                        useraddr_end = min(range->end, slot->userspace_addr +
                                   (slot->npages << PAGE_SHIFT));
-                        if (hva_start >= hva_end)
+                        if (useraddr_start >= useraddr_end)
                                 continue;
 
                         /*
@@ -536,11 +536,14 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
                         gfn_range.may_block = range->may_block;
 
                         /*
-                         * {gfn(page) | page intersects with [hva_start, hva_end)} =
+                         * {gfn(page) | page intersects with [useraddr_start, useraddr_end)} =
                          * {gfn_start, gfn_start+1, ..., gfn_end-1}.
                          */
-                        gfn_range.start = hva_to_gfn_memslot(hva_start, slot);
-                        gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
+                        gfn_range.start = useraddr_to_gfn_memslot(useraddr_start,
+                                                                  slot, true);
+                        gfn_range.end = useraddr_to_gfn_memslot(
+                                                useraddr_end + PAGE_SIZE - 1,
+                                                slot, true);
                         gfn_range.slot = slot;
 
                         if (!locked) {
@@ -571,10 +574,10 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
                                                 unsigned long start,
                                                 unsigned long end,
                                                 pte_t pte,
-                                                hva_handler_t handler)
+                                                gfn_handler_t handler)
 {
         struct kvm *kvm = mmu_notifier_to_kvm(mn);
-        const struct kvm_hva_range range = {
+        const struct kvm_useraddr_range range = {
                 .start          = start,
                 .end            = end,
                 .pte            = pte,
@@ -584,16 +587,16 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
                 .may_block      = false,
         };
 
-        return __kvm_handle_hva_range(kvm, &range);
+        return __kvm_handle_useraddr_range(kvm, &range);
 }
 
 static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
                                                          unsigned long start,
                                                          unsigned long end,
-                                                         hva_handler_t handler)
+                                                         gfn_handler_t handler)
 {
         struct kvm *kvm = mmu_notifier_to_kvm(mn);
-        const struct kvm_hva_range range = {
+        const struct kvm_useraddr_range range = {
                 .start          = start,
                 .end            = end,
                 .pte            = __pte(0),
@@ -603,7 +606,7 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn
                 .may_block      = false,
         };
 
-        return __kvm_handle_hva_range(kvm, &range);
+        return __kvm_handle_useraddr_range(kvm, &range);
 }
 
 static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
@@ -661,7 +664,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
                                         const struct mmu_notifier_range *range)
 {
         struct kvm *kvm = mmu_notifier_to_kvm(mn);
-        const struct kvm_hva_range hva_range = {
+        const struct kvm_useraddr_range useraddr_range = {
                 .start          = range->start,
                 .end            = range->end,
                 .pte            = __pte(0),
@@ -685,7 +688,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
         kvm->mn_active_invalidate_count++;
         spin_unlock(&kvm->mn_invalidate_lock);
 
-        __kvm_handle_hva_range(kvm, &hva_range);
+        __kvm_handle_useraddr_range(kvm, &useraddr_range);
 
         return 0;
 }
@@ -712,7 +715,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
                                         const struct mmu_notifier_range *range)
 {
         struct kvm *kvm = mmu_notifier_to_kvm(mn);
-        const struct kvm_hva_range hva_range = {
+        const struct kvm_useraddr_range useraddr_range = {
                 .start          = range->start,
                 .end            = range->end,
                 .pte            = __pte(0),
@@ -723,7 +726,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
         };
         bool wake;
 
-        __kvm_handle_hva_range(kvm, &hva_range);
+        __kvm_handle_useraddr_range(kvm, &useraddr_range);
 
         /* Pairs with the increment in range_start(). */
         spin_lock(&kvm->mn_invalidate_lock);