From patchwork Tue Aug 4 10:59:18 2015
X-Patchwork-Id: 6936291
From: Xiao Guangrong
To: pbonzini@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Xiao Guangrong
Subject: [PATCH 6/9] KVM: MMU: introduce the framework to check reserved
 bits on sptes
Date: Tue, 4 Aug 2015 18:59:18 +0800
Message-Id: <1438685961-8107-7-git-send-email-guangrong.xiao@linux.intel.com>
In-Reply-To: <1438685961-8107-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1438685961-8107-1-git-send-email-guangrong.xiao@linux.intel.com>
List-ID: kvm@vger.kernel.org

We have abstracted the data structure and functions used to check
reserved bits on guest page tables; now extend the logic to check
reserved bits on shadow page tables as well.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.c              | 51 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu.h              |  3 +++
 arch/x86/kvm/svm.c              |  1 +
 4 files changed, 56 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e33c0d..8356259 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -294,6 +294,7 @@ struct kvm_mmu {
 	u64 *pae_root;
 	u64 *lm_root;
 
+	struct rsvd_bits_validate shadow_rsvd_check;
 	struct rsvd_bits_validate guest_rsvd_check;
 
 	/*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d11d212..50fbc14 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3699,6 +3699,54 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 				    cpuid_maxphyaddr(vcpu), execonly);
 }
 
+/*
+ * The page table on the host is the shadow page table for the page
+ * table in the guest or an AMD nested guest; its MMU features
+ * completely follow the features of the guest.
+ */
+void
+reset_shadow_rsvds_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
+{
+	__reset_rsvds_bits_mask(vcpu, &context->shadow_rsvd_check,
+				boot_cpu_data.x86_phys_bits,
+				context->shadow_root_level, context->nx,
+				guest_cpuid_has_gbpages(vcpu), is_pse(vcpu));
+}
+EXPORT_SYMBOL_GPL(reset_shadow_rsvds_bits_mask);
+
+/*
+ * The direct page table on the host uses as many MMU features as
+ * possible; however, KVM currently does not do execution-protection.
+ */
+static void
+reset_tdp_shadow_rsvds_bits_mask(struct kvm_vcpu *vcpu,
+				 struct kvm_mmu *context)
+{
+	if (guest_cpuid_is_amd(vcpu))
+		__reset_rsvds_bits_mask(vcpu, &context->shadow_rsvd_check,
+					boot_cpu_data.x86_phys_bits,
+					context->shadow_root_level, false,
+					cpu_has_gbpages, true);
+	else
+		__reset_rsvds_bits_mask_ept(&context->shadow_rsvd_check,
+					    boot_cpu_data.x86_phys_bits,
+					    false);
+
+}
+
+/*
+ * Same as the comment at reset_shadow_rsvds_bits_mask(), except that
+ * this is the shadow page table for an Intel nested guest.
+ */
+static void
+reset_ept_shadow_rsvds_bits_mask(struct kvm_vcpu *vcpu,
+				 struct kvm_mmu *context,
+				 bool execonly)
+{
+	__reset_rsvds_bits_mask_ept(&context->shadow_rsvd_check,
+				    boot_cpu_data.x86_phys_bits, execonly);
+}
+
 static void update_permission_bitmask(struct kvm_vcpu *vcpu,
 				      struct kvm_mmu *mmu, bool ept)
 {
@@ -3877,6 +3925,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 
 	update_permission_bitmask(vcpu, context, false);
 	update_last_pte_bitmap(vcpu, context);
+	reset_tdp_shadow_rsvds_bits_mask(vcpu, context);
 }
 
 void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
@@ -3904,6 +3953,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 	context->base_role.smap_andnot_wp
 		= smap && !is_write_protection(vcpu);
 	context->base_role.smm = is_smm(vcpu);
+	reset_shadow_rsvds_bits_mask(vcpu, context);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
@@ -3927,6 +3977,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly)
 
 	update_permission_bitmask(vcpu, context, true);
 	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
+	reset_ept_shadow_rsvds_bits_mask(vcpu, context, execonly);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 398d21c..96563be 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -53,6 +53,9 @@ static inline u64 rsvd_bits(int s, int e)
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask);
 
+void
+reset_shadow_rsvds_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
+
 /*
  * Return values of handle_mmio_page_fault_common:
  * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the instruction
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 568cd0f..1de4685 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2107,6 +2107,7 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu.get_pdptr         = nested_svm_get_tdp_pdptr;
 	vcpu->arch.mmu.inject_page_fault = nested_svm_inject_npf_exit;
 	vcpu->arch.mmu.shadow_root_level = get_npt_level();
+	reset_shadow_rsvds_bits_mask(vcpu, &vcpu->arch.mmu);
 	vcpu->arch.walk_mmu              = &vcpu->arch.nested_mmu;
 }
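
For review context: this patch only installs the shadow_rsvd_check
masks; the helper that actually tests an spte against them arrives
later in the series. A minimal sketch of such a consumer follows,
assuming struct rsvd_bits_validate keeps the rsvd_bits_mask[2][level]
and bad_mt_xwr layout that the guest-side check already uses; the
helper name is_shadow_rsvd_bits_set() is hypothetical and not part of
this patch.

/*
 * Sketch only: test an spte against the reserved-bit masks filled in
 * by __reset_rsvds_bits_mask()/__reset_rsvds_bits_mask_ept().
 */
static bool
is_shadow_rsvd_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
{
	struct rsvd_bits_validate *rsvd_check = &mmu->shadow_rsvd_check;
	int bit7 = (spte >> 7) & 1;	/* bit 7 selects the large-page mask */

	if (spte & rsvd_check->rsvd_bits_mask[bit7][level - 1])
		return true;

	/* EPT only: reject invalid memory-type/XWR combinations. */
	return (rsvd_check->bad_mt_xwr & (1ull << (spte & 0x3f))) != 0;
}

A caller on the page-fault or audit path would then do something like
is_shadow_rsvd_bits_set(&vcpu->arch.mmu, spte, level) and treat a true
result as a host-side MMU bug, since KVM itself wrote the spte.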