From patchwork Sat May 30 10:59:26 2015
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 6512771
From: Xiao Guangrong
To: pbonzini@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Xiao Guangrong
Subject: [PATCH 15/15] KVM: VMX: fully implement guest MTRR virtualization
Date: Sat, 30 May 2015 18:59:26 +0800
Message-Id: <1432983566-15773-16-git-send-email-guangrong.xiao@linux.intel.com>
In-Reply-To: <1432983566-15773-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1432983566-15773-1-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

Currently, guest MTRRs are completely ignored when cache snooping is
supported on the IOMMU (!noncoherent_dma): the host does the emulation
based on host-side knowledge alone. The host side, however, is not in a
good position to know the guest's intent.
A good example is that a passed-through VGA frame buffer is not always
UC as the host expects.

This patchset enables full MTRR virtualization; currently it only works
on the Intel EPT architecture.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu.c              |  3 +--
 arch/x86/kvm/mtrr.c             |  3 +--
 arch/x86/kvm/svm.c              |  2 +-
 arch/x86/kvm/vmx.c              | 28 ++++------------------------
 5 files changed, 8 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5be8f2e..b34de27 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -806,7 +806,7 @@ struct kvm_x86_ops {
 	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*get_tdp_level)(void);
-	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn);
 	int (*get_lpage_level)(void);
 	bool (*rdtscp_supported)(void);
 	bool (*invpcid_supported)(void);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c8c2a90..828fcd6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2496,8 +2496,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (level > PT_PAGE_TABLE_LEVEL)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
-		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
-			kvm_is_reserved_pfn(pfn));
+		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn);
 
 	if (host_writable)
 		spte |= SPTE_HOST_WRITEABLE;
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 703a66b..bf84218 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -259,8 +259,7 @@ static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr)
 	gfn_t start, end;
 	int index;
 
-	if (msr == MSR_IA32_CR_PAT || !tdp_enabled ||
-	    !kvm_arch_has_noncoherent_dma(vcpu->kvm))
+	if (msr == MSR_IA32_CR_PAT || !tdp_enabled)
 		return;
 
 	if (!mtrr_state->mtrr_enabled && msr != MSR_MTRRdefType)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b9f9e10..23dd78a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4075,7 +4075,7 @@ static bool svm_cpu_has_accelerated_tpr(void)
 	return false;
 }
 
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index fe7a589..78b77be 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8626,31 +8626,11 @@ static int get_ept_level(void)
 	return VMX_EPT_DEFAULT_GAW + 1;
 }
 
-static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
-{
-	u64 ret;
-
-	/* For VT-d and EPT combination
-	 * 1. MMIO: always map as UC
-	 * 2. EPT with VT-d:
-	 *   a. VT-d without snooping control feature: can't guarantee the
-	 *	result, try to trust guest.
-	 *   b. VT-d with snooping control feature: snooping control feature of
-	 *	VT-d engine can guarantee the cache correctness. Just set it
-	 *	to WB to keep consistent with host. So the same as item 3.
-	 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep
-	 *    consistent with host MTRR
-	 */
-	if (is_mmio)
-		ret = MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
-	else if (kvm_arch_has_noncoherent_dma(vcpu->kvm))
-		ret = kvm_mtrr_get_guest_memory_type(vcpu, gfn) <<
-		      VMX_EPT_MT_EPTE_SHIFT;
-	else
-		ret = (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT)
-		      | VMX_EPT_IPAT_BIT;
+static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+	u8 type = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
 
-	return ret;
+	return type << VMX_EPT_MT_EPTE_SHIFT;
 }
 
 static int vmx_get_lpage_level(void)
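
For reference, the new vmx_get_mt_mask() simply shifts the guest MTRR
type into the EPT memory-type field (bits 5:3 of the leaf EPT entry) and
leaves IPAT clear, so the guest's PAT still takes part in determining
the effective memory type. Below is a standalone user-space sketch (not
part of the patch) of that computation; the guest_mtrr_type() helper and
its WC range are made up for illustration, while the constants mirror
VMX_EPT_MT_EPTE_SHIFT and the architectural MTRR type encodings.

	#include <stdint.h>
	#include <stdio.h>

	#define VMX_EPT_MT_EPTE_SHIFT	3	/* EPT memory type lives in bits 5:3 */
	#define MTRR_TYPE_UNCACHABLE	0
	#define MTRR_TYPE_WRCOMB	1
	#define MTRR_TYPE_WRBACK	6

	/* Stand-in for kvm_mtrr_get_guest_memory_type(): pretend the guest
	 * marked a frame-buffer range WC and everything else WB. */
	static uint8_t guest_mtrr_type(uint64_t gfn)
	{
		return (gfn >= 0xa0 && gfn < 0xc0) ? MTRR_TYPE_WRCOMB
						   : MTRR_TYPE_WRBACK;
	}

	/* Mirrors the new vmx_get_mt_mask(): guest type << 3, IPAT not set. */
	static uint64_t ept_mt_mask(uint64_t gfn)
	{
		return (uint64_t)guest_mtrr_type(gfn) << VMX_EPT_MT_EPTE_SHIFT;
	}

	int main(void)
	{
		printf("gfn 0xb0  -> EPT MT bits 0x%llx\n",
		       (unsigned long long)ept_mt_mask(0xb0));   /* 0x08: WC */
		printf("gfn 0x100 -> EPT MT bits 0x%llx\n",
		       (unsigned long long)ept_mt_mask(0x100));  /* 0x30: WB */
		return 0;
	}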