From patchwork Fri Jun 24 03:37:00 2022
X-Patchwork-Submitter: Hou Wenlong
X-Patchwork-Id: 12893620
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    Lan Tianyu, linux-kernel@vger.kernel.org
Subject: [PATCH 4/5] KVM: x86/mmu: Fix wrong start gfn of tlb flushing with range
Date: Fri, 24 Jun 2022 11:37:00 +0800
Message-Id: <1dc86beeb58c54ac027d9c67d7e1ad9252b4b2a4.1656039275.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
X-Mailing-List: kvm@vger.kernel.org

When a spte is dropped, the start gfn of the TLB flushing range should be
the gfn covered by the spte itself, not the base gfn of the SP that
contains the spte. Pass the gfn of the spte (via kvm_mmu_page_get_gfn(),
or the gfn already at hand) so that the flushed range actually covers the
dropped translation.
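To illustrate the arithmetic, here is a stand-alone user-space sketch (not
kernel code; LEVEL_BITS, pages_per_hpage() and spte_gfn() below are made-up
names for illustration). For a direct SP, the gfn covered by a spte is the
SP's base gfn plus the spte's index scaled by the mapping level, which is
roughly what kvm_mmu_page_get_gfn() returns, so flushing
KVM_PAGES_PER_HPAGE(level) pages starting at sp->gfn only covers the range
of the SP's first entry:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define LEVEL_BITS	9	/* 512 entries per x86-64 page table */

/* Number of 4KiB pages mapped by one entry at @level (1 = 4K, 2 = 2M, ...). */
static gfn_t pages_per_hpage(int level)
{
	return 1ULL << ((level - 1) * LEVEL_BITS);
}

/* gfn covered by the spte at @index inside a direct SP with @base_gfn. */
static gfn_t spte_gfn(gfn_t base_gfn, int index, int level)
{
	return base_gfn + ((gfn_t)index << ((level - 1) * LEVEL_BITS));
}

int main(void)
{
	gfn_t base_gfn = 0x100000;	/* base gfn of a level-2 (2MiB entries) SP */
	int level = 2, index = 5;	/* the dropped spte is the 6th entry */

	gfn_t start = spte_gfn(base_gfn, index, level);
	gfn_t pages = pages_per_hpage(level);

	/*
	 * Flushing 512 pages starting at base_gfn only covers the first
	 * entry's range; the dropped spte's range starts 5 * 512 pages later.
	 */
	printf("flush should start at gfn 0x%llx for 0x%llx pages, not 0x%llx\n",
	       (unsigned long long)start, (unsigned long long)pages,
	       (unsigned long long)base_gfn);
	return 0;
}

With these example numbers the flush must start at gfn 0x100a00 for 0x200
pages, while starting at the base gfn 0x100000 would stop 0x200 pages later
and never reach the dropped translation.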
Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c         | 8 +++++---
 arch/x86/kvm/mmu/paging_tmpl.h | 3 ++-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37bfc88ea212..577b85860891 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1145,7 +1145,8 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 	drop_spte(kvm, sptep);
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+		kvm_flush_remote_tlbs_with_address(kvm,
+			kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
@@ -1596,7 +1597,7 @@ static void __rmap_add(struct kvm *kvm,
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
-				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm, gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
@@ -6397,7 +6398,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 			pte_list_remove(kvm, rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
-				kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+				kvm_flush_remote_tlbs_with_address(kvm,
+					kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
 					KVM_PAGES_PER_HPAGE(sp->role.level));
 			else
 				need_tlb_flush = 1;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 2448fa8d8438..fa78ee0caffd 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -938,7 +938,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 			if (is_shadow_present_pte(old_spte))
 				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+					kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
+					KVM_PAGES_PER_HPAGE(sp->role.level));
 
 			if (!rmap_can_add(vcpu))
 				break;