From patchwork Wed Oct 22 22:38:46 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch <m.smarduch@samsung.com>
X-Patchwork-Id: 5137451
From: Mario Smarduch <m.smarduch@samsung.com>
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
 pbonzini@redhat.com, agraf@suse.de, catalin.marinas@arm.com,
 cornelia.huck@de.ibm.com, borntraeger@de.ibm.com, james.hogan@imgtec.com,
 marc.zyngier@arm.com, xiaoguangrong@linux.vnet.ibm.com
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
 kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, Mario Smarduch <m.smarduch@samsung.com>
Subject: [PATCH v12 6/6] arm: KVM: ARMv7 dirty page logging 2nd stage page fault
Date: Wed, 22 Oct 2014 15:38:46 -0700
Message-id: <1414017526-5870-2-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1414017526-5870-1-git-send-email-m.smarduch@samsung.com>
References: <1414017526-5870-1-git-send-email-m.smarduch@samsung.com>

This patch adds support for handling 2nd stage page faults during
migration: it disables faulting in huge pages and dissolves existing
huge pages into page tables, so that dirty memory can be tracked at
page granularity. If migration is canceled, huge pages are used again.
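For reference, the logging state this patch keys off is the memslot
dirty bitmap, which userspace allocates by setting
KVM_MEM_LOG_DIRTY_PAGES on the slot; that is exactly what the new
kvm_get_logging_state() helper below tests. A minimal userspace
sketch, with the vm_fd and slot geometry assumed for illustration:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /*
     * Illustrative only (not part of this patch): re-register a memslot
     * with KVM_MEM_LOG_DIRTY_PAGES set. This makes KVM allocate
     * memslot->dirty_bitmap, so kvm_get_logging_state() becomes true
     * for the slot.
     */
    static int enable_dirty_logging(int vm_fd, __u32 slot, __u64 gpa,
                                    __u64 size, __u64 hva)
    {
            struct kvm_userspace_memory_region mem = {
                    .slot            = slot,
                    .flags           = KVM_MEM_LOG_DIRTY_PAGES,
                    .guest_phys_addr = gpa,
                    .memory_size     = size,
                    .userspace_addr  = hva,
            };

            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
    }

Once the bitmap is allocated, every write fault in the slot goes
through the logging path added below.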
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
 arch/arm/kvm/mmu.c | 47 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index e348386..b00dec6 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -47,6 +47,15 @@ static phys_addr_t hyp_idmap_vector;
 #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
 #define kvm_pud_huge(_x)	pud_huge(_x)
 
+static bool kvm_get_logging_state(struct kvm_memory_slot *memslot)
+{
+#ifdef CONFIG_ARM
+	return !!memslot->dirty_bitmap;
+#else
+	return false;
+#endif
+}
+
 static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
 	/*
@@ -626,7 +635,8 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 }
 
 static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
-			  phys_addr_t addr, const pte_t *new_pte, bool iomap)
+			  phys_addr_t addr, const pte_t *new_pte, bool iomap,
+			  bool logging_active)
 {
 	pmd_t *pmd;
 	pte_t *pte, old_pte;
@@ -641,6 +651,18 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 		return 0;
 	}
 
+	/*
+	 * While dirty memory logging, clear PMD entry for huge page and split
+	 * into smaller pages, to track dirty memory at page granularity.
+	 */
+	if (logging_active && kvm_pmd_huge(*pmd)) {
+		phys_addr_t ipa = pmd_pfn(*pmd) << PAGE_SHIFT;
+
+		pmd_clear(pmd);
+		kvm_tlb_flush_vmid_ipa(kvm, ipa);
+		put_page(virt_to_page(pmd));
+	}
+
 	/* Create stage-2 page mappings - Level 2 */
 	if (pmd_none(*pmd)) {
 		if (!cache)
@@ -693,7 +715,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 		if (ret)
 			goto out;
 		spin_lock(&kvm->mmu_lock);
-		ret = stage2_set_pte(kvm, &cache, addr, &pte, true);
+		ret = stage2_set_pte(kvm, &cache, addr, &pte, true, false);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
 			goto out;
@@ -910,6 +932,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct vm_area_struct *vma;
 	pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
+	bool logging_active = kvm_get_logging_state(memslot);
 
 	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
 	if (fault_status == FSC_PERM && !write_fault) {
@@ -920,7 +943,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	/* Let's check if we will get back a huge page backed by hugetlbfs */
 	down_read(&current->mm->mmap_sem);
 	vma = find_vma_intersection(current->mm, hva, hva + 1);
-	if (is_vm_hugetlb_page(vma)) {
+	if (is_vm_hugetlb_page(vma) && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -966,7 +989,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	spin_lock(&kvm->mmu_lock);
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte && !logging_active)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
 	if (hugetlb) {
@@ -986,10 +1009,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		}
 		coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
-				     mem_type == PAGE_S2_DEVICE);
+				     mem_type == PAGE_S2_DEVICE,
+				     logging_active);
 	}
-
+	if (write_fault)
+		mark_page_dirty(kvm, gfn);
 
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
 	kvm_release_pfn_clean(pfn);
@@ -1139,7 +1164,15 @@ static void kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data)
 {
 	pte_t *pte = (pte_t *)data;
 
-	stage2_set_pte(kvm, NULL, gpa, pte, false);
+	/*
+	 * We can always call stage2_set_pte with logging_active == false,
+	 * because MMU notifiers will have unmapped a huge PMD before calling
+	 * ->change_pte() (which in turn calls kvm_set_spte_hva()) and therefore
+	 * stage2_set_pte() never needs to clear out a huge PMD through this
+	 * calling path.
+	 */
+
+	stage2_set_pte(kvm, NULL, gpa, pte, false, false);
 }
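
The bits set by mark_page_dirty() above are what userspace harvests
on each migration pass via KVM_GET_DIRTY_LOG. A minimal sketch of
that side, again with the vm_fd/slot handling and bitmap allocation
assumed for illustration:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /*
     * Illustrative only: fetch the dirty bitmap for one slot. The
     * kernel copies one bit per page into 'bitmap' and clears its
     * internal copy, so each call reports pages dirtied since the
     * previous call. 'bitmap' must cover the slot's page count.
     */
    static int fetch_dirty_log(int vm_fd, __u32 slot, void *bitmap)
    {
            struct kvm_dirty_log log = {
                    .slot         = slot,
                    .dirty_bitmap = bitmap,
            };

            return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    }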