From patchwork Tue Feb  7 15:57:27 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131817
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 1/8] kvm: x86/mmu: Use KVM_MMU_ROOT_XXX for kvm_mmu_invalidate_gva()
Date: Tue, 7 Feb 2023 23:57:27 +0800
Message-Id: <20230207155735.2845-2-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

kvm_mmu_invalidate_gva() is called with @root_hpa set to either
mmu->root.hpa, which invalidates the gva for the current root (the same
meaning as KVM_MMU_ROOT_CURRENT), or INVALID_PAGE, which invalidates the
gva for all roots (the same meaning as KVM_MMU_ROOTS_ALL).

Change the argument type of kvm_mmu_invalidate_gva() to a
KVM_MMU_ROOT_XXX bitmask instead, so that the function can be reused by
kvm_mmu_invpcid_gva() and nested_ept_invalidate_addr() to invalidate the
gva for different sets of roots.

No functional change intended.

Signed-off-by: Lai Jiangshan
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 39 +++++++++++++++++----------------
 arch/x86/kvm/x86.c              |  2 +-
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4d2bc08794e4..81429a5640d6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2026,7 +2026,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
 		       void *insn, int insn_len);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gva_t gva, hpa_t root_hpa);
+			    gva_t gva, unsigned long roots);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c91ee2927dd7..958e8eb977ed 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5707,10 +5707,12 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gva_t gva, hpa_t root_hpa)
+			    gva_t gva, unsigned long roots)
 {
 	int i;
 
+	WARN_ON_ONCE(roots & ~KVM_MMU_ROOTS_ALL);
+
 	/* It's actually a GPA for vcpu->arch.guest_mmu.  */
 	if (mmu != &vcpu->arch.guest_mmu) {
 		/* INVLPG on a non-canonical address is a NOP according to the SDM.  */
@@ -5723,31 +5725,30 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	if (!mmu->invlpg)
 		return;
 
-	if (root_hpa == INVALID_PAGE) {
+	if ((roots & KVM_MMU_ROOT_CURRENT) && VALID_PAGE(mmu->root.hpa))
 		mmu->invlpg(vcpu, gva, mmu->root.hpa);
 
-		/*
-		 * INVLPG is required to invalidate any global mappings for the VA,
-		 * irrespective of PCID. Since it would take us roughly similar amount
-		 * of work to determine whether any of the prev_root mappings of the VA
-		 * is marked global, or to just sync it blindly, so we might as well
-		 * just always sync it.
-		 *
-		 * Mappings not reachable via the current cr3 or the prev_roots will be
-		 * synced when switching to that cr3, so nothing needs to be done here
-		 * for them.
-		 */
-		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
-			if (VALID_PAGE(mmu->prev_roots[i].hpa))
-				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
-	} else {
-		mmu->invlpg(vcpu, gva, root_hpa);
+	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
+		if ((roots & KVM_MMU_ROOT_PREVIOUS(i)) &&
+		    VALID_PAGE(mmu->prev_roots[i].hpa))
+			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
 	}
 }
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
+	/*
+	 * INVLPG is required to invalidate any global mappings for the VA,
+	 * irrespective of PCID. Since it would take us roughly similar amount
+	 * of work to determine whether any of the prev_root mappings of the VA
+	 * is marked global, or to just sync it blindly, so we might as well
+	 * just always sync it.
+	 *
+	 * Mappings not reachable via the current cr3 or the prev_roots will be
+	 * synced when switching to that cr3, so nothing needs to be done here
+	 * for them.
+	 */
+	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, KVM_MMU_ROOTS_ALL);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 508074e47bc0..a81937a8fe0c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -799,7 +799,7 @@ void kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 	if ((fault->error_code & PFERR_PRESENT_MASK) &&
 	    !(fault->error_code & PFERR_RSVD_MASK))
 		kvm_mmu_invalidate_gva(vcpu, fault_mmu, fault->address,
-				       fault_mmu->root.hpa);
+				       KVM_MMU_ROOT_CURRENT);
 
 	fault_mmu->inject_page_fault(vcpu, fault);
 }
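A quick illustration of the new @roots argument: the KVM_MMU_ROOT_XXX
values form a bitmask. The following is a standalone model, not kernel
code; the macro shapes mirror what arch/x86/include/asm/kvm_host.h is
assumed to define at the base of this series, with
KVM_MMU_NUM_PREV_ROOTS == 3.

/* roots_demo.c - model of the KVM_MMU_ROOT_XXX bitmask.
 * Build and run: cc -o roots_demo roots_demo.c && ./roots_demo
 */
#include <stdio.h>

#define BIT(n)				(1UL << (n))
#define KVM_MMU_NUM_PREV_ROOTS		3
#define KVM_MMU_ROOT_CURRENT		BIT(0)
#define KVM_MMU_ROOT_PREVIOUS(i)	BIT(1 + (i))
#define KVM_MMU_ROOTS_ALL		(BIT(1 + KVM_MMU_NUM_PREV_ROOTS) - 1)

int main(void)
{
	/* Invalidate the current root plus cached previous root 1. */
	unsigned long roots = KVM_MMU_ROOT_CURRENT | KVM_MMU_ROOT_PREVIOUS(1);

	/* The WARN_ON_ONCE() added by this patch rejects unknown bits. */
	printf("roots=%#lx all=%#lx bogus=%#lx\n",
	       roots, KVM_MMU_ROOTS_ALL, roots & ~KVM_MMU_ROOTS_ALL);
	return 0;
}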
From patchwork Tue Feb  7 15:57:28 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131818
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 2/8] kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in kvm_mmu_invpcid_gva()
Date: Tue, 7 Feb 2023 23:57:28 +0800
Message-Id: <20230207155735.2845-3-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Use kvm_mmu_invalidate_gva() instead of open-coded calls to
mmu->invlpg().

No functional change intended.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 958e8eb977ed..8563b52b8bb7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5757,27 +5757,20 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
-	bool tlb_flush = false;
+	unsigned long roots = 0;
 	uint i;
 
-	if (pcid == kvm_get_active_pcid(vcpu)) {
-		if (mmu->invlpg)
-			mmu->invlpg(vcpu, gva, mmu->root.hpa);
-		tlb_flush = true;
-	}
+	if (pcid == kvm_get_active_pcid(vcpu))
+		roots |= KVM_MMU_ROOT_CURRENT;
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
-		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
-			if (mmu->invlpg)
-				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
-			tlb_flush = true;
-		}
+		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd))
+			roots |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
 
-	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
-
+	if (roots)
+		kvm_mmu_invalidate_gva(vcpu, mmu, gva, roots);
 	++vcpu->stat.invlpg;
 
 	/*
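Note that the explicit static_call(kvm_x86_flush_tlb_gva)() disappears
here: kvm_mmu_invalidate_gva() already issues that flush for any MMU
other than guest_mmu, which holds on this INVPCID-emulation path (see
the kvm_mmu_invalidate_gva() hunks in patches 1 and 6 for the flush
context). A condensed sketch of the resulting flow:

/*
 * kvm_mmu_invpcid_gva(vcpu, gva, pcid)
 *   -> kvm_mmu_invalidate_gva(vcpu, mmu, gva, roots)
 *        -> static_call(kvm_x86_flush_tlb_gva)(vcpu, gva)    hardware TLB
 *        -> mmu->invlpg(vcpu, gva, <hpa of each root in @roots>)  shadow pages
 */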
From patchwork Tue Feb  7 15:57:29 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131819
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 3/8] kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in nested_ept_invalidate_addr()
Date: Tue, 7 Feb 2023 23:57:29 +0800
Message-Id: <20230207155735.2845-4-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Use kvm_mmu_invalidate_gva() instead of an open-coded call to
mmu->invlpg(), and export it since the VMX nested code needs it.

No functional change intended.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c    | 1 +
 arch/x86/kvm/vmx/nested.c | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8563b52b8bb7..e03cf5558773 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5734,6 +5734,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 557b9c468734..f552f3c454b1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -358,6 +358,7 @@ static bool nested_ept_root_matches(hpa_t root_hpa, u64 root_eptp, u64 eptp)
 static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 				       gpa_t addr)
 {
+	unsigned long roots = 0;
 	uint i;
 	struct kvm_mmu_root_info *cached_root;
 
@@ -368,8 +369,10 @@ static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 
 		if (nested_ept_root_matches(cached_root->hpa, cached_root->pgd,
 					    eptp))
-			vcpu->arch.mmu->invlpg(vcpu, addr, cached_root->hpa);
+			roots |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
+	if (roots)
+		kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, addr, roots);
 }
 
 static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
From patchwork Tue Feb  7 15:57:30 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131820
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 4/8] kvm: x86/mmu: Set mmu->sync_page as NULL for direct paging
Date: Tue, 7 Feb 2023 23:57:30 +0800
Message-Id: <20230207155735.2845-5-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

mmu->sync_page is never called for direct paging, and both
mmu->sync_page and mmu->invlpg only make sense for shadow paging.

Set mmu->sync_page to NULL for direct paging, which makes it consistent
with mmu->invlpg, which is already set to NULL in that case.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e03cf5558773..e30ca652f6ff 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1789,12 +1789,6 @@ static void mark_unsync(u64 *spte)
 	kvm_mmu_mark_parents_unsync(sp);
 }
 
-static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
-			       struct kvm_mmu_page *sp)
-{
-	return -1;
-}
-
 #define KVM_PAGE_ARRAY_NR 16
 
 struct kvm_mmu_pages {
@@ -4469,7 +4463,7 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
-	context->sync_page = nonpaging_sync_page;
+	context->sync_page = NULL;
 	context->invlpg = NULL;
 }
 
@@ -5157,7 +5151,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->cpu_role.as_u64 = cpu_role.as_u64;
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
-	context->sync_page = nonpaging_sync_page;
+	context->sync_page = NULL;
 	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
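The NULL hook is safe because unsync shadow pages cannot exist under
direct paging, so the callback is unreachable there, and callers check
the hook before invoking it anyway. For example, kvm_mmu_invalidate_gva()
as reworked in patch 1 bails out early (fragment from that patch):

	if (!mmu->invlpg)
		return;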
From patchwork Tue Feb  7 15:57:31 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131821
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 5/8] kvm: x86/mmu: Move the code out of FNAME(sync_page)'s loop body into mmu.c
Date: Tue, 7 Feb 2023 23:57:31 +0800
Message-Id: <20230207155735.2845-6-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Rename mmu->sync_page to mmu->sync_spte and move the code out of
FNAME(sync_page)'s loop body into mmu.c.

No functional change intended.

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |   4 +-
 arch/x86/kvm/mmu/mmu.c          |  64 +++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h  | 139 +++++++++++---------------------
 3 files changed, 106 insertions(+), 101 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 81429a5640d6..6c64ebfbd778 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -441,8 +441,8 @@ struct kvm_mmu {
 	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			    gpa_t gva_or_gpa, u64 access,
 			    struct x86_exception *exception);
-	int (*sync_page)(struct kvm_vcpu *vcpu,
-			 struct kvm_mmu_page *sp);
+	int (*sync_spte)(struct kvm_vcpu *vcpu,
+			 struct kvm_mmu_page *sp, int i);
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
 	union kvm_cpu_role cpu_role;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e30ca652f6ff..c271d0a1ed54 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1908,10 +1908,62 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
 	&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else
 
+static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+{
+	union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role;
+	bool flush = false;
+	int i;
+
+	/*
+	 * Ignore various flags when verifying that it's safe to sync a shadow
+	 * page using the current MMU context.
+	 *
+	 *  - level: not part of the overall MMU role and will never match as
+	 *           the MMU's level tracks the root level
+	 *  - access: updated based on the new guest PTE
+	 *  - quadrant: not part of the overall MMU role (similar to level)
+	 */
+	const union kvm_mmu_page_role sync_role_ign = {
+		.level = 0xf,
+		.access = 0x7,
+		.quadrant = 0x3,
+		.passthrough = 0x1,
+	};
+
+	/*
+	 * Direct pages can never be unsync, and KVM should never attempt to
+	 * sync a shadow page for a different MMU context, e.g. if the role
+	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
+	 * reserved bits checks will be wrong, etc...
+	 */
+	if (WARN_ON_ONCE(sp->role.direct ||
+			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
+		return -1;
+
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
+		int ret = vcpu->arch.mmu->sync_spte(vcpu, sp, i);
+
+		if (ret < 0)
+			return -1;
+		flush |= ret;
+	}
+
+	/*
+	 * Note, any flush is purely for KVM's correctness, e.g. when dropping
+	 * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier
+	 * unmap or dirty logging event doesn't fail to flush.  The guest is
+	 * responsible for flushing the TLB to ensure any changes in protection
+	 * bits are recognized, i.e. until the guest flushes or page faults on
+	 * a relevant address, KVM is architecturally allowed to let vCPUs use
+	 * cached translations with the old protection bits.
+	 */
+	return flush;
+}
+
 static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
-	int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
+	int ret = __kvm_sync_page(vcpu, sp);
 
 	if (ret < 0)
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
@@ -4463,7 +4515,7 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
-	context->sync_page = NULL;
+	context->sync_spte = NULL;
 	context->invlpg = NULL;
 }
 
@@ -5054,7 +5106,7 @@ static void paging64_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
-	context->sync_page = paging64_sync_page;
+	context->sync_spte = paging64_sync_spte;
 	context->invlpg = paging64_invlpg;
 }
 
@@ -5062,7 +5114,7 @@ static void paging32_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
-	context->sync_page = paging32_sync_page;
+	context->sync_spte = paging32_sync_spte;
 	context->invlpg = paging32_invlpg;
 }
 
@@ -5151,7 +5203,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->cpu_role.as_u64 = cpu_role.as_u64;
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
-	context->sync_page = NULL;
+	context->sync_spte = NULL;
 	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
@@ -5283,7 +5335,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 
 		context->page_fault = ept_page_fault;
 		context->gva_to_gpa = ept_gva_to_gpa;
-		context->sync_page = ept_sync_page;
+		context->sync_spte = ept_sync_spte;
 		context->invlpg = ept_invlpg;
 
 		update_permission_bitmask(context, true);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 57f0b75c80f9..5ab9e974fdac 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -977,114 +977,67 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  * can't change unless all sptes pointing to it are nuked first.
  *
  * Returns
- * < 0: the sp should be zapped
- *   0: the sp is synced and no tlb flushing is required
- * > 0: the sp is synced and tlb flushing is required
+ * < 0: failed to sync spte
+ *   0: the spte is synced and no tlb flushing is required
+ * > 0: the spte is synced and tlb flushing is required
  */
-static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 {
-	union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role;
-	int i;
 	bool host_writable;
 	gpa_t first_pte_gpa;
-	bool flush = false;
-
-	/*
-	 * Ignore various flags when verifying that it's safe to sync a shadow
-	 * page using the current MMU context.
-	 *
-	 *  - level: not part of the overall MMU role and will never match as
-	 *           the MMU's level tracks the root level
-	 *  - access: updated based on the new guest PTE
-	 *  - quadrant: not part of the overall MMU role (similar to level)
-	 */
-	const union kvm_mmu_page_role sync_role_ign = {
-		.level = 0xf,
-		.access = 0x7,
-		.quadrant = 0x3,
-		.passthrough = 0x1,
-	};
+	u64 *sptep, spte;
+	struct kvm_memory_slot *slot;
+	unsigned pte_access;
+	pt_element_t gpte;
+	gpa_t pte_gpa;
+	gfn_t gfn;
 
-	/*
-	 * Direct pages can never be unsync, and KVM should never attempt to
-	 * sync a shadow page for a different MMU context, e.g. if the role
-	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
-	 * reserved bits checks will be wrong, etc...
-	 */
-	if (WARN_ON_ONCE(sp->role.direct ||
-			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
-		return -1;
+	if (!sp->spt[i])
+		return 0;
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
+	pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
 
-	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
-		u64 *sptep, spte;
-		struct kvm_memory_slot *slot;
-		unsigned pte_access;
-		pt_element_t gpte;
-		gpa_t pte_gpa;
-		gfn_t gfn;
-
-		if (!sp->spt[i])
-			continue;
-
-		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
-
-		if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
-					       sizeof(pt_element_t)))
-			return -1;
-
-		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			flush = true;
-			continue;
-		}
+	if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
+				       sizeof(pt_element_t)))
+		return -1;
 
-		gfn = gpte_to_gfn(gpte);
-		pte_access = sp->role.access;
-		pte_access &= FNAME(gpte_access)(gpte);
-		FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
+	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte))
+		return 1;
 
-		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
-			continue;
+	gfn = gpte_to_gfn(gpte);
+	pte_access = sp->role.access;
+	pte_access &= FNAME(gpte_access)(gpte);
+	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-		/*
-		 * Drop the SPTE if the new protections would result in a RWX=0
-		 * SPTE or if the gfn is changing.  The RWX=0 case only affects
-		 * EPT with execute-only support, i.e. EPT without an effective
-		 * "present" bit, as all other paging modes will create a
-		 * read-only SPTE if pte_access is zero.
-		 */
-		if ((!pte_access && !shadow_present_mask) ||
-		    gfn != kvm_mmu_page_get_gfn(sp, i)) {
-			drop_spte(vcpu->kvm, &sp->spt[i]);
-			flush = true;
-			continue;
-		}
+	if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
+		return 0;
 
-		/* Update the shadowed access bits in case they changed. */
-		kvm_mmu_page_set_access(sp, i, pte_access);
+	/*
+	 * Drop the SPTE if the new protections would result in a RWX=0
+	 * SPTE or if the gfn is changing.  The RWX=0 case only affects
+	 * EPT with execute-only support, i.e. EPT without an effective
+	 * "present" bit, as all other paging modes will create a
+	 * read-only SPTE if pte_access is zero.
+	 */
+	if ((!pte_access && !shadow_present_mask) ||
+	    gfn != kvm_mmu_page_get_gfn(sp, i)) {
+		drop_spte(vcpu->kvm, &sp->spt[i]);
+		return 1;
+	}
 
-		sptep = &sp->spt[i];
-		spte = *sptep;
-		host_writable = spte & shadow_host_writable_mask;
-		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-		make_spte(vcpu, sp, slot, pte_access, gfn,
-			  spte_to_pfn(spte), spte, true, false,
-			  host_writable, &spte);
+	/* Update the shadowed access bits in case they changed. */
+	kvm_mmu_page_set_access(sp, i, pte_access);
 
-		flush |= mmu_spte_update(sptep, spte);
-	}
+	sptep = &sp->spt[i];
+	spte = *sptep;
+	host_writable = spte & shadow_host_writable_mask;
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	make_spte(vcpu, sp, slot, pte_access, gfn,
+		  spte_to_pfn(spte), spte, true, false,
+		  host_writable, &spte);
 
-	/*
-	 * Note, any flush is purely for KVM's correctness, e.g. when dropping
-	 * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier
-	 * unmap or dirty logging event doesn't fail to flush.  The guest is
-	 * responsible for flushing the TLB to ensure any changes in protection
-	 * bits are recognized, i.e. until the guest flushes or page faults on
-	 * a relevant address, KVM is architecturally allowed to let vCPUs use
-	 * cached translations with the old protection bits.
-	 */
-	return flush;
+	return mmu_spte_update(sptep, spte);
 }
 
 #undef pt_element_t
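The role check that moved into __kvm_sync_page() above is a bit-trick
worth spelling out: XOR exposes every bit that differs between the
page's role and the current root role, and masking with
~sync_role_ign.word discards the fields that are allowed to differ. A
standalone model (illustrative values, not the real kvm_mmu_page_role
layout):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Pretend bits 0-3 encode "level", which the sync check ignores,
	 * and every other bit must match exactly. */
	uint32_t ign       = 0x0000000f;
	uint32_t page_role = 0x00001203;	/* level 3 */
	uint32_t root_role = 0x00001201;	/* level 1, same otherwise */

	uint32_t mismatch = (page_role ^ root_role) & ~ign;

	printf("mismatch=%#x -> %s\n", mismatch,
	       mismatch ? "refuse to sync" : "safe to sync");
	return 0;
}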
From patchwork Tue Feb  7 15:57:32 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131822
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 6/8] kvm: x86/mmu: Remove FNAME(invlpg)
Date: Tue, 7 Feb 2023 23:57:32 +0800
Message-Id: <20230207155735.2845-7-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Use FNAME(sync_spte) to invalidate vTLB entries instead.

In a hardware TLB, invalidating a TLB entry means the translation is
removed from the TLB. In KVM's shadowed vTLB (the combination of shadow
paging and the hardware TLB), vTLB translations are usually kept as long
as they are clean when the TLB of an address space (a PCID or all) is
flushed, with the help of write-protection, sp->unsync, and
kvm_sync_page().

FNAME(invlpg), however, always removes a single vTLB entry when
sp->unsync is set and then prefetches it anew. Even a clean vTLB entry
is removed and recreated; the recreation might fail, or produce a
different entry (with more permissions), and a remote flush is always
required.

In short, FNAME(invlpg) duplicates what FNAME(sync_spte) does to
invalidate a vTLB entry. Use FNAME(sync_spte) to share the code, with
one slight change in semantics: a clean vTLB entry is now kept.
Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/mmu/mmu.c          | 48 +++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h  | 58 ---------------------------------
 3 files changed, 31 insertions(+), 76 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6c64ebfbd778..86ae8f6419f1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -443,7 +443,6 @@ struct kvm_mmu {
 			    struct x86_exception *exception);
 	int (*sync_spte)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp, int i);
-	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
 	union kvm_cpu_role cpu_role;
 	union kvm_mmu_page_role root_role;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c271d0a1ed54..3880f98a9cb6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1073,14 +1073,6 @@ static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
 
-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-	struct kvm_mmu_memory_cache *mc;
-
-	mc = &vcpu->arch.mmu_pte_list_desc_cache;
-	return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
 static void rmap_remove(struct kvm *kvm, u64 *spte)
 {
 	struct kvm_memslots *slots;
@@ -4516,7 +4508,6 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 }
 
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
@@ -5107,7 +5098,6 @@ static void paging64_init_context(struct kvm_mmu *context)
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_spte = paging64_sync_spte;
-	context->invlpg = paging64_invlpg;
 }
 
 static void paging32_init_context(struct kvm_mmu *context)
@@ -5115,7 +5105,6 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
 	context->sync_spte = paging32_sync_spte;
-	context->invlpg = paging32_invlpg;
 }
 
 static union kvm_cpu_role
@@ -5204,7 +5193,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
@@ -5336,7 +5324,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 		context->page_fault = ept_page_fault;
 		context->gva_to_gpa = ept_gva_to_gpa;
 		context->sync_spte = ept_sync_spte;
-		context->invlpg = ept_invlpg;
 
 		update_permission_bitmask(context, true);
 		context->pkru_mask = 0;
@@ -5377,7 +5364,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 	 * L2 page tables are never shadowed, so there is no need to sync
 	 * SPTEs.
 	 */
-	g_context->invlpg = NULL;
+	g_context->sync_spte = NULL;
 
 	/*
 	 * Note that arch.mmu->gva_to_gpa translates l2_gpa to l1_gpa using
@@ -5752,6 +5739,33 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
+static void __kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				     gva_t gva, hpa_t root_hpa)
+{
+	struct kvm_shadow_walk_iterator iterator;
+
+	vcpu_clear_mmio_info(vcpu, gva);
+
+	write_lock(&vcpu->kvm->mmu_lock);
+	for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
+		struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
+
+		if (sp->unsync && *iterator.sptep) {
+			gfn_t gfn = kvm_mmu_page_get_gfn(sp, iterator.index);
+			int ret = mmu->sync_spte(vcpu, sp, iterator.index);
+
+			if (ret < 0)
+				mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
+			if (ret)
+				kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
+		}
+
+		if (!sp->unsync_children)
+			break;
+	}
+	write_unlock(&vcpu->kvm->mmu_lock);
+}
+
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			    gva_t gva, unsigned long roots)
 {
@@ -5768,16 +5782,16 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
 	}
 
-	if (!mmu->invlpg)
+	if (!mmu->sync_spte)
 		return;
 
 	if ((roots & KVM_MMU_ROOT_CURRENT) && VALID_PAGE(mmu->root.hpa))
-		mmu->invlpg(vcpu, gva, mmu->root.hpa);
+		__kvm_mmu_invalidate_gva(vcpu, mmu, gva, mmu->root.hpa);
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if ((roots & KVM_MMU_ROOT_PREVIOUS(i)) &&
 		    VALID_PAGE(mmu->prev_roots[i].hpa))
-			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+			__kvm_mmu_invalidate_gva(vcpu, mmu, gva, mmu->prev_roots[i].hpa);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5ab9e974fdac..0031fe22af3d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -887,64 +887,6 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
 
-static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
-{
-	struct kvm_shadow_walk_iterator iterator;
-	struct kvm_mmu_page *sp;
-	u64 old_spte;
-	int level;
-	u64 *sptep;
-
-	vcpu_clear_mmio_info(vcpu, gva);
-
-	/*
-	 * No need to check return value here, rmap_can_add() can
-	 * help us to skip pte prefetch later.
-	 */
-	mmu_topup_memory_caches(vcpu, true);
-
-	if (!VALID_PAGE(root_hpa)) {
-		WARN_ON(1);
-		return;
-	}
-
-	write_lock(&vcpu->kvm->mmu_lock);
-	for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
-		level = iterator.level;
-		sptep = iterator.sptep;
-
-		sp = sptep_to_sp(sptep);
-		old_spte = *sptep;
-		if (is_last_spte(old_spte, level)) {
-			pt_element_t gpte;
-			gpa_t pte_gpa;
-
-			if (!sp->unsync)
-				break;
-
-			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-			pte_gpa += spte_index(sptep) * sizeof(pt_element_t);
-
-			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
-			if (is_shadow_present_pte(old_spte))
-				kvm_flush_remote_tlbs_sptep(vcpu->kvm, sptep);
-
-			if (!rmap_can_add(vcpu))
-				break;
-
-			if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
-						       sizeof(pt_element_t)))
-				break;
-
-			FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
-		}
-
-		if (!sp->unsync_children)
-			break;
-	}
-	write_unlock(&vcpu->kvm->mmu_lock);
-}
-
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
 static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			       gpa_t addr, u64 access,
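To summarize the semantic delta described in the commit message, a
condensed sketch of per-entry INVLPG emulation before and after this
patch:

/*
 * old FNAME(invlpg):
 *	zap the last-level spte, even if it was still clean;
 *	remote TLB flush if it was present;
 *	re-read the gpte and prefetch a fresh spte (may fail or differ).
 *
 * new __kvm_mmu_invalidate_gva():
 *	sync_spte() re-validates the spte against the gpte;
 *	clean entries are kept as-is;
 *	stale entries are updated or zapped, with a remote flush.
 */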
From patchwork Tue Feb  7 15:57:33 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131823
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 7/8] kvm: x86/mmu: Reduce the update to the spte in FNAME(sync_page)
Date: Tue, 7 Feb 2023 23:57:33 +0800
Message-Id: <20230207155735.2845-8-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Sometimes when the guest updates its pagetable, it adds only new gptes
without changing any existing ones, so there is no point in updating
the sptes for those unchanged gptes. Worse, when such sptes are
updated, the A/D bits are also removed, since make_spte() is called
with prefetch=true, which can result in unneeded TLB flushes.

Do nothing if the gpte's permissions are unchanged.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0031fe22af3d..fca5ce349d9d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -967,6 +967,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 		drop_spte(vcpu->kvm, &sp->spt[i]);
 		return 1;
 	}
+	/*
+	 * Do nothing if the permissions are unchanged.
+	 */
+	if (kvm_mmu_page_get_access(sp, i) == pte_access)
+		return 0;
 
 	/* Update the shadowed access bits in case they changed. */
 	kvm_mmu_page_set_access(sp, i, pte_access);
From patchwork Tue Feb  7 15:57:34 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13131824
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 8/8] kvm: x86/mmu: Remove @no_dirty_log from FNAME(prefetch_gpte)
Date: Tue, 7 Feb 2023 23:57:34 +0800
Message-Id: <20230207155735.2845-9-jiangshanlai@gmail.com>
In-Reply-To: <20230207155735.2845-1-jiangshanlai@gmail.com>
References: <20230207155735.2845-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

FNAME(prefetch_gpte) is always called with @no_dirty_log=true, so drop
the argument.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fca5ce349d9d..e04950015dc4 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -519,7 +519,7 @@ static int FNAME(walk_addr)(struct guest_walker *walker,
 
 static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
+		     u64 *spte, pt_element_t gpte)
 {
 	struct kvm_memory_slot *slot;
 	unsigned pte_access;
@@ -535,8 +535,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
-			no_dirty_log && (pte_access & ACC_WRITE_MASK));
+	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
 	if (!slot)
 		return false;
 
@@ -605,7 +604,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 		if (is_shadow_present_pte(*spte))
 			continue;
 
-		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
+		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i]))
 			break;
 	}
 }