From patchwork Fri Oct 13 23:16:01 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Zhang, Yi"
X-Patchwork-Id: 10004895
From: Zhang Yi
To:
kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, Zhang Yi Z
Subject: [PATCH RFC 10/10] KVM: VMX: set up the SPP page structure on SPPT miss
Date: Sat, 14 Oct 2017 07:16:01 +0800
Message-Id: <1ec7c3f3618e9a1c7a25ab29279b83d9f758650a.1506559196.git.yi.z.zhang@linux.intel.com>
X-Mailer: git-send-email 2.7.4
List-ID:
X-Mailing-List: kvm@vger.kernel.org

From: Zhang Yi Z

We should also set up the SPP page structure when we catch an SPPT
miss: in some cases, such as vCPU hotplug, the SPP page table must be
rebuilt in the SPPT-miss handler.

Signed-off-by: Zhang Yi Z
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.c              | 12 ++++++++++++
 arch/x86/kvm/vmx.c              |  8 ++++++++
 3 files changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ef50d98..bc56c4c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1260,6 +1260,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u64 error_code,
 int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
 				u32 access_map, gfn_t gfn);
+int kvm_mmu_get_spp_acsess_map(struct kvm *kvm, u32 *access_map, gfn_t gfn);
+
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c229324..88b8571 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4129,6 +4129,17 @@ static void mmu_spp_spte_set(u64 *sptep, u64 new_spte)
 	__set_spte(sptep, new_spte);
 }
 
+int kvm_mmu_get_spp_acsess_map(struct kvm *kvm, u32 *access_map, gfn_t gfn)
+{
+	struct kvm_memory_slot *slot;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	*access_map = *gfn_to_subpage_wp_info(slot, gfn);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_get_spp_acsess_map);
+
 int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
 				u32 access_map, gfn_t gfn)
 {
@@ -4174,6 +4185,7 @@ int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
 	spin_unlock(&kvm->mmu_lock);
 	return -EFAULT;
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_setup_spp_structure);
 
 int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info)
 {
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9116b53..c4cd773 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8005,6 +8005,9 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 static int handle_spp(struct kvm_vcpu *vcpu)
 {
 	unsigned long exit_qualification;
+	gpa_t gpa;
+	gfn_t gfn;
+	u32 map;
 
 	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
@@ -8031,6 +8034,11 @@ static int handle_spp(struct kvm_vcpu *vcpu)
 	 * SPP table here.
 	 */
 	pr_debug("SPP: %s: SPPT Miss!!!\n", __func__);
+
+	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
+	gfn = gpa >> PAGE_SHIFT;
+	kvm_mmu_get_spp_acsess_map(vcpu->kvm, &map, gfn);
+	kvm_mmu_setup_spp_structure(vcpu, map, gfn);
 	return 1;
 }