From patchwork Fri Oct 27 02:25:24 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 10028985
From: Haozhong Zhang
To: kvm@vger.kernel.org, x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, rkrcmar@redhat.com,
    Xiao Guangrong, Dan Williams, ivan.d.cuevas.escareno@intel.com,
    karthik.kumar@intel.com, Haozhong Zhang
Subject: [PATCH 3/3] KVM: MMU: consider host cache type in MMIO pfn check
Date: Fri, 27 Oct 2017 10:25:24 +0800
Message-Id: <20171027022524.22589-4-haozhong.zhang@intel.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171027022524.22589-1-haozhong.zhang@intel.com>
References: <20171027022524.22589-1-haozhong.zhang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

By default, KVM treats a reserved page as being for MMIO and maps it
to the guest with the UC memory type. However, some reserved pages are
not used for MMIO, such as the pages of a DAX device (e.g.,
/dev/daxX.Y), and mapping them with the UC memory type harms
performance. To exclude those cases, also check the host cache mode
and treat only UC/UC- pages as MMIO.
Signed-off-by: Haozhong Zhang
Reported-by: Cuevas Escareno, Ivan D
Reported-by: Kumar, Karthik
---
 arch/x86/kvm/mmu.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0b481cc9c725..d4c821a6df3d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2707,10 +2707,36 @@ static bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 {
-	if (pfn_valid(pfn))
-		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
+	bool is_mmio = true;
 
-	return true;
+	if (pfn_valid(pfn)) {
+		is_mmio = !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
+
+		/*
+		 * By default, KVM treats a reserved page as for MMIO
+		 * purpose, and maps it to guest with UC memory type.
+		 * However, some reserved pages are not for MMIO, such
+		 * as pages of DAX device (e.g., /dev/daxX.Y). Mapping
+		 * them with UC memory type will harm the performance.
+		 * In order to exclude those cases, we check the host
+		 * cache mode in addition and only treat UC/UC- pages
+		 * as MMIO.
+		 *
+		 * track_pfn_insert() works only when PAT is enabled,
+		 * so add pat_enabled() here.
+		 */
+		if (is_mmio && pat_enabled()) {
+			pgprot_t prot;
+			enum page_cache_mode cm;
+
+			track_pfn_insert(NULL, &prot, kvm_pfn_to_pfn(pfn));
+			cm = pgprot2cachemode(prot);
+			is_mmio = (cm == _PAGE_CACHE_MODE_UC ||
+				   cm == _PAGE_CACHE_MODE_UC_MINUS);
+		}
+	}
+
+	return is_mmio;
 }
 
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
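
For reference, the decision flow of the patched kvm_is_mmio_pfn() can be
restated as a standalone sketch. The snippet below is illustrative only
and is not part of the patch: every kernel helper (pfn_valid(),
is_zero_pfn(), PageReserved(), pat_enabled(), the track_pfn_insert() +
pgprot2cachemode() pair, and the kvm_pfn_to_pfn() converter the patch
relies on) is replaced by a hypothetical stub with made-up behavior, so
the sketch builds as an ordinary user-space C program.

/*
 * Illustrative-only restatement of the new kvm_is_mmio_pfn() flow.
 * All kernel helpers are replaced with hypothetical stubs; the stub
 * behavior (which pfns count as reserved or write-back) is invented
 * for demonstration and does not reflect any real machine.
 */
#include <stdbool.h>
#include <stdio.h>

enum page_cache_mode {
	_PAGE_CACHE_MODE_WB,
	_PAGE_CACHE_MODE_UC,
	_PAGE_CACHE_MODE_UC_MINUS,
};

/* Stand-ins for pfn_valid()/is_zero_pfn()/PageReserved()/pat_enabled(). */
static bool stub_pfn_valid(unsigned long pfn)     { return pfn != 0; }
static bool stub_is_zero_pfn(unsigned long pfn)   { (void)pfn; return false; }
static bool stub_page_reserved(unsigned long pfn) { (void)pfn; return true; }
static bool stub_pat_enabled(void)                { return true; }

/*
 * Stand-in for track_pfn_insert() + pgprot2cachemode(): report the host
 * cache mode of a pfn.  Here we pretend pfns at or above 0x100000 belong
 * to a DAX device and are therefore mapped write-back.
 */
static enum page_cache_mode stub_host_cache_mode(unsigned long pfn)
{
	return pfn >= 0x100000 ? _PAGE_CACHE_MODE_WB : _PAGE_CACHE_MODE_UC;
}

static bool sketch_kvm_is_mmio_pfn(unsigned long pfn)
{
	bool is_mmio = true;

	if (stub_pfn_valid(pfn)) {
		is_mmio = !stub_is_zero_pfn(pfn) && stub_page_reserved(pfn);

		/*
		 * The cache-mode check only narrows an already-positive
		 * verdict: a reserved page stays MMIO only if the host
		 * maps it UC or UC-.
		 */
		if (is_mmio && stub_pat_enabled()) {
			enum page_cache_mode cm = stub_host_cache_mode(pfn);

			is_mmio = (cm == _PAGE_CACHE_MODE_UC ||
				   cm == _PAGE_CACHE_MODE_UC_MINUS);
		}
	}

	return is_mmio;
}

int main(void)
{
	/* Reserved + UC: still treated as MMIO (prints 1). */
	printf("pfn 0xfff    -> MMIO: %d\n", sketch_kvm_is_mmio_pfn(0xfff));
	/* Reserved + WB (DAX-like): no longer treated as MMIO (prints 0). */
	printf("pfn 0x100001 -> MMIO: %d\n", sketch_kvm_is_mmio_pfn(0x100001));
	return 0;
}

Note the shape of the logic: the PAT-based check can only demote a page
that the PageReserved() test already flagged as MMIO; it never promotes
a valid, non-reserved page to MMIO, so non-reserved RAM is unaffected.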