From patchwork Fri Sep 23 10:52:19 2022
X-Patchwork-Submitter: Hou Wenlong
X-Patchwork-Id: 12986439
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: David Matlack, Sean Christopherson, Paolo Bonzini, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Lan Tianyu, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/6] KVM: x86/mmu: Fix wrong gfn range of tlb flushing in
 kvm_set_pte_rmap()
Date: Fri, 23 Sep 2022 18:52:19 +0800
Message-Id: <478926d56e8f574cb95686c19bf0910eaba19fd5.1663929851.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1

When the SPTE of a huge page is dropped in kvm_set_pte_rmap(), the whole
gfn range covered by the SPTE should be flushed. However,
rmap_walk_init_level() doesn't align the gfn down for the new level the
way the TDP iterator does, so the gfn used in kvm_set_pte_rmap() is not
the base gfn of the huge page. The size of the flushed gfn range is also
wrong for a huge page.

Use the base gfn of the huge page and the size of the huge page when
flushing TLBs for a huge page.

Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 70c57ed166c7..ec5b3b9880de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1437,7 +1437,9 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	}
 
 	if (need_flush && kvm_available_flush_tlb_with_range()) {
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		gfn_t base = gfn_round_for_level(gfn, level);
+
+		kvm_flush_remote_tlbs_gfn(kvm, base, level);
 		return false;
 	}
 
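
To make the intended flush range concrete, here is a minimal userspace sketch
(not kernel code). gfn_round_for_level(), KVM_PAGES_PER_HPAGE() and the level
constants are only modeled after the helpers this series relies on; the exact
kernel definitions may differ.

/*
 * Standalone sketch of how the base gfn and page count for the TLB flush
 * are derived from a gfn and its mapping level. Compile with any C compiler.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define PG_LEVEL_4K	1
#define PG_LEVEL_2M	2
#define PG_LEVEL_1G	3

/* 512 entries per page-table level -> 9 gfn bits per level. */
#define KVM_HPAGE_GFN_SHIFT(level)	(((level) - PG_LEVEL_4K) * 9)
#define KVM_PAGES_PER_HPAGE(level)	(1ULL << KVM_HPAGE_GFN_SHIFT(level))

/* Align a gfn down to the first gfn mapped by a huge page of @level. */
static gfn_t gfn_round_for_level(gfn_t gfn, int level)
{
	return gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
}

int main(void)
{
	gfn_t gfn = 0x12345;		/* arbitrary gfn inside a 2M mapping */
	int level = PG_LEVEL_2M;

	gfn_t base = gfn_round_for_level(gfn, level);
	uint64_t pages = KVM_PAGES_PER_HPAGE(level);

	/* The old code flushed (gfn, 1); the fix flushes (base, pages). */
	printf("flush base gfn 0x%llx, %llu pages\n",
	       (unsigned long long)base, (unsigned long long)pages);
	return 0;
}

For a gfn inside a 2M mapping, the old code flushed a single 4K page at an
unaligned gfn, while the fixed code flushes 512 pages starting at the
2M-aligned base gfn, i.e. the whole range covered by the dropped SPTE.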