From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
Date: Tue, 14 Jul 2020 21:27:18 -0700
Message-Id: <20200715042725.10961-2-sean.j.christopherson@intel.com>

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero despite to_zap being derived from
the number of disallowed lpages.
Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 77810ce66bdb4..48be51027af64 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6340,6 +6340,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 			cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);

From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 2/8] KVM: x86/mmu: Refactor the zap loop for recovering NX lpages
Date: Tue, 14 Jul 2020 21:27:19 -0700
Message-Id: <20200715042725.10961-3-sean.j.christopherson@intel.com>

Refactor the zap loop in kvm_recover_nx_lpages() to be a for loop that
iterates on to_zap and drop the !to_zap check that leads to the in-loop
calling of kvm_mmu_commit_zap_page().  The in-loop commit when to_zap
hits zero is superfluous now that there's an unconditional commit after
the loop to handle the case where lpage_disallowed_mmu_pages is emptied.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 48be51027af64..9cd3d2a23f8a5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6321,7 +6321,10 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 
 	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
 	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
-	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
+	for ( ; to_zap; --to_zap) {
+		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
+			break;
+
 		/*
 		 * We use a separate list instead of just using active_mmu_pages
 		 * because the number of lpage_disallowed pages is expected to
@@ -6334,10 +6337,9 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 		WARN_ON_ONCE(sp->lpage_disallowed);
 
-		if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_commit_zap_page(kvm, &invalid_list);
-			if (to_zap)
-				cond_resched_lock(&kvm->mmu_lock);
+			cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
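For reference, a rough sketch of how the recovery loop reads with patches 1
and 2 applied (reconstructed from the two diffs above; the lookup of 'sp' and
the surrounding context lines are approximations, not part of the patches):

	for ( ; to_zap; --to_zap) {
		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
			break;

		/* Grab the oldest disallowed-NX-lpage shadow page and zap it. */
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page,
				      lpage_disallowed_link);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
		WARN_ON_ONCE(sp->lpage_disallowed);

		/* mmu_lock may be dropped here; the list can be emptied meanwhile. */
		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			cond_resched_lock(&kvm->mmu_lock);
		}
	}
	/* Commit anything still pending, even if the loop broke out early. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);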
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 3/8] KVM: x86/mmu: Move "huge page disallowed" calculation into mapping helpers
Date: Tue, 14 Jul 2020 21:27:20 -0700
Message-Id: <20200715042725.10961-4-sean.j.christopherson@intel.com>

Calculate huge_page_disallowed in __direct_map() and FNAME(fetch) in
preparation for reworking the calculation so that it preserves the
requested map level and eventually to avoid flagging a shadow page as
being disallowed for being used as a large/huge page when it couldn't
have been huge in the first place, e.g. because the backing page in the
host is not large.

Pass the error code into the helpers and use it to recalculate exec and
write_fault instead of adding yet more booleans to the parameters.

Opportunistically use huge_page_disallowed instead of lpage_disallowed
to match the nomenclature used within the mapping helpers (though even
they have existing inconsistencies).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 22 ++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h | 24 ++++++++++++++----------
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9cd3d2a23f8a5..bbd7e8be2b936 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3306,10 +3306,14 @@ static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it,
 	}
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
+static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			int map_writable, int max_level, kvm_pfn_t pfn,
-			bool prefault, bool account_disallowed_nx_lpage)
+			bool prefault, bool is_tdp)
 {
+	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
+	bool write = error_code & PFERR_WRITE_MASK;
+	bool exec = error_code & PFERR_FETCH_MASK;
+	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
 	int level, ret;
@@ -3319,6 +3323,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		return RET_PF_RETRY;
 
+	if (huge_page_disallowed)
+		max_level = PG_LEVEL_4K;
+
 	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
@@ -3339,7 +3346,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 					      it.level - 1, true, ACC_ALL);
 
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (account_disallowed_nx_lpage)
+			if (is_tdp && huge_page_disallowed)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
@@ -4078,8 +4085,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			     bool prefault, int max_level, bool is_tdp)
 {
 	bool write = error_code & PFERR_WRITE_MASK;
-	bool exec = error_code & PFERR_FETCH_MASK;
-	bool lpage_disallowed = exec && is_nx_huge_page_enabled();
 	bool map_writable;
 
 	gfn_t gfn = gpa >> PAGE_SHIFT;
@@ -4097,9 +4102,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	if (r)
 		return r;
 
-	if (lpage_disallowed)
-		max_level = PG_LEVEL_4K;
-
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();
 
@@ -4116,8 +4118,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
-	r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
-			 prefault, is_tdp && lpage_disallowed);
+	r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, pfn,
+			 prefault, is_tdp);
 
 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0172a949f6a75..5536b2004dac8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -625,11 +625,14 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
  * emulate this operation, return 1 to indicate this case.
  */
 static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
-			 struct guest_walker *gw,
-			 int write_fault, int max_level,
-			 kvm_pfn_t pfn, bool map_writable, bool prefault,
-			 bool lpage_disallowed)
+			 struct guest_walker *gw, u32 error_code,
+			 int max_level, kvm_pfn_t pfn, bool map_writable,
+			 bool prefault)
 {
+	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
+	int write_fault = error_code & PFERR_WRITE_MASK;
+	bool exec = error_code & PFERR_FETCH_MASK;
+	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
@@ -679,6 +682,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
+	if (huge_page_disallowed)
+		max_level = PG_LEVEL_4K;
+
 	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
@@ -704,7 +710,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
 					      it.level - 1, true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (lpage_disallowed)
+			if (huge_page_disallowed)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
@@ -783,8 +789,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	kvm_pfn_t pfn;
 	unsigned long mmu_seq;
 	bool map_writable, is_self_change_mapping;
-	bool lpage_disallowed = (error_code & PFERR_FETCH_MASK) &&
-				is_nx_huge_page_enabled();
 	int max_level;
 
 	pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);
@@ -825,7 +829,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu,
 	      &walker, user_fault, &vcpu->arch.write_fault_to_shadow_pgtable);
 
-	if (lpage_disallowed || is_self_change_mapping)
+	if (is_self_change_mapping)
 		max_level = PG_LEVEL_4K;
 	else
 		max_level = walker.level;
@@ -869,8 +873,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
-	r = FNAME(fetch)(vcpu, addr, &walker, write_fault, max_level, pfn,
-			 map_writable, prefault, lpage_disallowed);
+	r = FNAME(fetch)(vcpu, addr, &walker, error_code, max_level, pfn,
+			 map_writable, prefault);
 	kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
 
 out_unlock:
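As a recap of the error code bits the helpers now decode themselves (a
hypothetical, self-contained sketch; the PFERR_* masks are the existing x86
page fault error code bits, and example_decode() is not a real kernel
function):

	static void example_decode(u32 error_code)
	{
		bool write_fault = error_code & PFERR_WRITE_MASK; /* bit 1, write access */
		bool user_fault  = error_code & PFERR_USER_MASK;  /* bit 2, user-mode access */
		bool exec        = error_code & PFERR_FETCH_MASK; /* bit 4, instruction fetch */
		/* Only instruction fetches matter for the iTLB multi-hit workaround. */
		bool huge_page_disallowed = exec && is_nx_huge_page_enabled();
	}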
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 4/8] KVM: x86/mmu: Capture requested page level before NX huge page workaround
Date: Tue, 14 Jul 2020 21:27:21 -0700
Message-Id: <20200715042725.10961-5-sean.j.christopherson@intel.com>

Apply the "huge page disallowed" adjustment of the max level only after
capturing the original requested level.  The requested level will be
used in a future patch to skip adding pages to the list of disallowed
huge pages if a huge page wasn't possible anyway, e.g. if the page isn't
mapped as a huge page in the host.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 22 +++++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  8 +++-----
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bbd7e8be2b936..974c9a89c2454 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3238,7 +3238,8 @@ static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
 }
 
 static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
-				   int max_level, kvm_pfn_t *pfnp)
+				   int max_level, kvm_pfn_t *pfnp,
+				   bool huge_page_disallowed, int *req_level)
 {
 	struct kvm_memory_slot *slot;
 	struct kvm_lpage_info *linfo;
@@ -3246,6 +3247,8 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	kvm_pfn_t mask;
 	int level;
 
+	*req_level = PG_LEVEL_4K;
+
 	if (unlikely(max_level == PG_LEVEL_4K))
 		return PG_LEVEL_4K;
 
@@ -3270,7 +3273,14 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (level == PG_LEVEL_4K)
 		return level;
 
-	level = min(level, max_level);
+	*req_level = level = min(level, max_level);
+
+	/*
+	 * Enforce the iTLB multihit workaround after capturing the requested
+	 * level, which will be used to do precise, accurate accounting.
+	 */
+	if (huge_page_disallowed)
+		return PG_LEVEL_4K;
 
 	/*
 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
@@ -3316,17 +3326,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
-	int level, ret;
+	int level, req_level, ret;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	gfn_t base_gfn = gfn;
 
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		return RET_PF_RETRY;
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn);
+	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
+					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
 	for_each_shadow_entry(vcpu, gpa, it) {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5536b2004dac8..b92d936c0900d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -636,7 +636,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
-	int top_level, hlevel, ret;
+	int top_level, hlevel, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
 	direct_access = gw->pte_access;
@@ -682,10 +682,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn);
+	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
+					 huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
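With this change, kvm_mmu_hugepage_adjust() effectively has two outputs: the
return value is the level that will actually be mapped (possibly forced down
to 4K by the workaround), while *req_level reports the level that would have
been used had the workaround not intervened.  A hypothetical caller-side
sketch (not taken verbatim from the patch):

	int req_level, level;

	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
					huge_page_disallowed, &req_level);
	/*
	 * level == req_level -> nothing was disallowed.
	 * level <  req_level -> a huge page was wanted but forced to 4K,
	 *                       which the next patch uses for accounting.
	 */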
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 5/8] KVM: x86/mmu: Account NX huge page disallowed iff huge page was requested
Date: Tue, 14 Jul 2020 21:27:22 -0700
Message-Id: <20200715042725.10961-6-sean.j.christopherson@intel.com>

Condition the accounting of a disallowed huge NX page on the original
requested level of the page being greater than the current iterator
level.  This does two things: accounts the page if and only if a huge
page was actually disallowed, and accounts the shadow page if and only
if it was the level at which the huge page was disallowed.  For the
latter case, the previous logic would account all shadow pages used to
create the translation for the forced small page, e.g. even PML4, which
can't be a huge page on current hardware, would be accounted as having
been a disallowed huge page when using 5-level EPT.

The overzealous accounting is purely a performance issue, i.e. the
recovery thread will spuriously zap shadow pages, but otherwise the bad
behavior is harmless.

Cc: Junaid Shahid
Fixes: b8e8c8303ff28 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 3 ++-
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 974c9a89c2454..1b2ef2f61d997 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3354,7 +3354,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 					      it.level - 1, true, ACC_ALL);
 
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (is_tdp && huge_page_disallowed)
+			if (is_tdp && huge_page_disallowed &&
+			    req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index b92d936c0900d..39578a1839ca4 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
 					      it.level - 1, true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (huge_page_disallowed)
+			if (huge_page_disallowed && req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
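To illustrate the accounting change, consider an instruction fetch fault on a
gfn that is backed by a 2M page in the host, with the NX workaround active
and a 5-level TDP walk (a hypothetical walk-through, not part of the patch):

	/*
	 * req_level == PG_LEVEL_2M, the final mapping is forced to PG_LEVEL_4K.
	 *
	 *   it.level == 5, 4, 3:  req_level < it.level  -> sp not accounted
	 *   it.level == 2 (2M):   req_level >= it.level -> sp accounted once
	 */
	if (is_tdp && huge_page_disallowed && req_level >= it.level)
		account_huge_nx_page(vcpu->kvm, sp);

Previously, the shadow pages created at every step of the walk, including the
upper-level tables that could never have been huge pages, all landed on
lpage_disallowed_mmu_pages and were therefore candidates for spurious zapping
by the NX recovery thread.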
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch)
Date: Tue, 14 Jul 2020 21:27:23 -0700
Message-Id: <20200715042725.10961-7-sean.j.christopherson@intel.com>

Rename 'hlevel', which presumably stands for 'host level', to simply
'level' in FNAME(fetch).  The variable hasn't tracked the host level
for quite some time.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 39578a1839ca4..35867a1a1ee89 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -636,7 +636,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
-	int top_level, hlevel, req_level, ret;
+	int top_level, level, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
 	direct_access = gw->pte_access;
@@ -682,8 +682,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
-					 huge_page_disallowed, &req_level);
+	level = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
+					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
@@ -694,10 +694,10 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &hlevel);
+		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
 
 		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
-		if (it.level == hlevel)
+		if (it.level == level)
 			break;
 
 		validate_direct_spte(vcpu, it.sptep, direct_access);

From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 7/8] KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level
Date: Tue, 14 Jul 2020 21:27:24 -0700
Message-Id: <20200715042725.10961-8-sean.j.christopherson@intel.com>

Move the "ITLB multi-hit workaround enabled" check into the callers of
disallowed_hugepage_adjust() to make it more obvious that the helper is
specific to the workaround, and to be consistent with the accounting,
i.e. account_huge_nx_page() is called if and only if the workaround is
enabled.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1b2ef2f61d997..135bdf6ef8ca9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3300,7 +3300,6 @@ static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it,
 	u64 spte = *it.sptep;
 
 	if (it.level == level && level > PG_LEVEL_4K &&
-	    is_nx_huge_page_enabled() &&
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
 		/*
@@ -3342,7 +3341,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gfn, &pfn, &level);
+		if (nx_huge_page_workaround_enabled)
+			disallowed_hugepage_adjust(it, gfn, &pfn, &level);
 
 		base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 		if (it.level == level)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 35867a1a1ee89..db5734d7c80b6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -694,7 +694,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
+		if (nx_huge_page_workaround_enabled)
+			disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
 
 		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 		if (it.level == level)
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Junaid Shahid
Subject: [PATCH 8/8] KVM: x86/mmu: Track write/user faults using bools
Date: Tue, 14 Jul 2020 21:27:25 -0700
Message-Id: <20200715042725.10961-9-sean.j.christopherson@intel.com>

Use bools to track write and user faults throughout the page fault
paths and down into mmu_set_spte().  The actual usage is purely boolean,
but that's not obvious without digging into all paths as the current
code uses a mix of bools (TDP and try_async_pf) and ints (shadow paging
and mmu_set_spte()).

No true functional change intended (although the pgprintk() will now
print 0/1 instead of 0/PFERR_WRITE_MASK).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         |  4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 135bdf6ef8ca9..5c47af2cc9a19 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3062,7 +3062,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 }
 
 static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-			unsigned int pte_access, int write_fault, int level,
+			unsigned int pte_access, bool write_fault, int level,
 			gfn_t gfn, kvm_pfn_t pfn, bool speculative,
 			bool host_writable)
 {
@@ -3159,7 +3159,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 		return -1;
 
 	for (i = 0; i < ret; i++, gfn++, start++) {
-		mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
+		mmu_set_spte(vcpu, start, access, false, sp->role.level, gfn,
 			     page_to_pfn(pages[i]), true, true);
 		put_page(pages[i]);
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index db5734d7c80b6..e30e0d9b4613d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -550,7 +550,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * we call mmu_set_spte() with host_writable = true because
 	 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 	 */
-	mmu_set_spte(vcpu, spte, pte_access, 0, PG_LEVEL_4K, gfn, pfn,
+	mmu_set_spte(vcpu, spte, pte_access, false, PG_LEVEL_4K, gfn, pfn,
 		     true, true);
 
 	kvm_release_pfn_clean(pfn);
@@ -630,7 +630,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			 bool prefault)
 {
 	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
-	int write_fault = error_code & PFERR_WRITE_MASK;
+	bool write_fault = error_code & PFERR_WRITE_MASK;
 	bool exec = error_code & PFERR_FETCH_MASK;
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_mmu_page *sp = NULL;
@@ -743,7 +743,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
  */
 static bool
 FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
-			      struct guest_walker *walker, int user_fault,
+			      struct guest_walker *walker, bool user_fault,
 			      bool *write_fault_to_shadow_pgtable)
 {
 	int level;
@@ -781,8 +781,8 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 			     bool prefault)
 {
-	int write_fault = error_code & PFERR_WRITE_MASK;
-	int user_fault = error_code & PFERR_USER_MASK;
+	bool write_fault = error_code & PFERR_WRITE_MASK;
+	bool user_fault = error_code & PFERR_USER_MASK;
 	struct guest_walker walker;
 	int r;
 	kvm_pfn_t pfn;