From patchwork Wed Jan 8 20:24:48 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11324649
From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Mackerras, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Andrew Morton, Marc Zyngier, James Morse,
    Julien Thierry, Suzuki K Poulose, kvm-ppc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu,
    syzbot+c9d1fb51ac9d0d10c39d@syzkaller.appspotmail.com,
    Andrea Arcangeli, Dan Williams, Barret Rhoden, David Hildenbrand,
    Jason Zeng, Dave Jiang, Liran Alon, linux-nvdimm
Subject: [PATCH 14/14] KVM: x86/mmu: Use huge pages for DAX-backed files
Date: Wed, 8 Jan 2020 12:24:48 -0800
Message-Id: <20200108202448.9669-15-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200108202448.9669-1-sean.j.christopherson@intel.com>
References: <20200108202448.9669-1-sean.j.christopherson@intel.com>

Walk the host page tables to identify hugepage mappings for ZONE_DEVICE
pfns, i.e. DAX pages.  Explicitly query kvm_is_zone_device_pfn() when
deciding whether or not to bother walking the host page tables, as DAX
pages do not set up the head/tail infrastructure, i.e. will return false
for PageCompound() even when using huge pages.

Zap ZONE_DEVICE sptes when disabling dirty logging, e.g. if live
migration fails, to allow KVM to rebuild large pages for DAX-based
mappings.  Presumably DAX favors large pages, so the worst case scenario
is a minor performance hit as KVM will need to re-fault all DAX-based
pages.
Suggested-by: Barret Rhoden
Cc: David Hildenbrand
Cc: Dan Williams
Cc: Jason Zeng
Cc: Dave Jiang
Cc: Liran Alon
Cc: linux-nvdimm
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e4e0ac169a7..324e1919722f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3250,7 +3250,7 @@ static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
 		      PT_DIRECTORY_LEVEL != (int)PG_LEVEL_2M ||
 		      PT_PDPE_LEVEL != (int)PG_LEVEL_1G);

-	if (!PageCompound(pfn_to_page(pfn)))
+	if (!PageCompound(pfn_to_page(pfn)) && !kvm_is_zone_device_pfn(pfn))
 		return PT_PAGE_TABLE_LEVEL;

 	/*
@@ -3282,8 +3282,7 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (unlikely(max_level == PT_PAGE_TABLE_LEVEL))
 		return PT_PAGE_TABLE_LEVEL;

-	if (is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn) ||
-	    kvm_is_zone_device_pfn(pfn))
+	if (is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn))
 		return PT_PAGE_TABLE_LEVEL;

 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
@@ -5910,8 +5909,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 * mapping if the indirect sp has level = 1.
 		 */
 		if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
-		    !kvm_is_zone_device_pfn(pfn) &&
-		    PageCompound(pfn_to_page(pfn))) {
+		    (kvm_is_zone_device_pfn(pfn) ||
+		     PageCompound(pfn_to_page(pfn)))) {
 			pte_list_remove(rmap_head, sptep);

 			if (kvm_available_flush_tlb_with_range())