From patchwork Tue Apr 29 00:54:56 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4084011
Message-id: <535EF860.9040002@samsung.com>
Date: Mon, 28 Apr 2014 17:54:56 -0700
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, Marc Zyngier, christoffer.dall@linaro.org,
	Steve Capper, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	gavin.guo@canonical.com, Peter Maydell, 이정석, 정성진
Subject: [PATCH v4 2/5] live migration support for initial write protect of VM
This patch adds support for the initial write protect pass of live migration:
huge pages in the memory slot are split into small pages, and all pages in the
memory slot are write protected.

Signed-off-by: Mario Smarduch
---
(A short userspace usage sketch, not part of the patch, is appended after the
diff for reference.)

 arch/arm/include/asm/kvm_host.h |    8 ++
 arch/arm/include/asm/kvm_mmu.h  |   11 ++
 arch/arm/kvm/arm.c              |    3 +
 arch/arm/kvm/mmu.c              |  215 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c             |    5 +-
 5 files changed, 241 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 1e739f9..9f827c8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -67,6 +67,12 @@ struct kvm_arch {
 
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
+
+	/* Marks the start of migration, used to handle 2nd stage page faults
+	 * during migration: prevent installing huge pages and split huge pages
+	 * into small pages.
+	 */
+	int migration_in_progress;
 };
 
 #define KVM_NR_MEM_OBJS     40
@@ -230,4 +236,6 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
 void kvm_tlb_flush_vmid(struct kvm *kvm);
 
+int kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index a91c863..342ae81 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -111,6 +111,17 @@ static inline void kvm_set_s2pte_writable(pte_t *pte)
 	pte_val(*pte) |= L_PTE_S2_RDWR;
 }
 
+static inline void kvm_set_s2pte_readonly(pte_t *pte)
+{
+	pte_val(*pte) &= ~(L_PTE_S2_RDONLY ^ L_PTE_S2_RDWR);
+}
+
+static inline bool kvm_s2pte_readonly(pte_t *pte)
+{
+	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
+}
+
+
 static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 {
 	pmd_val(*pmd) |= L_PMD_S2_RDWR;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9a4bc10..b916478 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -233,6 +233,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
+	/* Migration requested by the user, write protect the memory slot */
+	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		return kvm_mmu_slot_remove_write_access(kvm, mem->slot);
 	return 0;
 }
 
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7ab77f3..15bbca2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -44,6 +44,41 @@ static phys_addr_t hyp_idmap_vector;
 
 #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
 
+/* Used for 2nd stage and identity mappings. For stage 2 mappings,
+ * u64 is used instead of unsigned long so addresses don't overflow on
+ * ARMv7 for IPAs above 4GB. For ARMv8 the default functions are used.
+ */
+
+static phys_addr_t kvm_pgd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+#if BITS_PER_LONG == 32
+	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;
+	return __boundary - 1 < end - 1 ? __boundary : end;
+#else
+	return pgd_addr_end(addr, end);
+#endif
+}
+
+static phys_addr_t kvm_pud_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+#if BITS_PER_LONG == 32
+	u64 __boundary = ((addr) + PUD_SIZE) & PUD_MASK;
+	return __boundary - 1 < end - 1 ? __boundary : end;
+#else
+	return pud_addr_end(addr, end);
+#endif
+}
+
+static phys_addr_t kvm_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+#if BITS_PER_LONG == 32
+	u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK;
+	return __boundary - 1 < end - 1 ? __boundary : end;
+#else
+	return pmd_addr_end(addr, end);
+#endif
+}
+
 static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
 	/*
@@ -649,6 +684,186 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 	return false;
 }
 
+/**
+ * kvm_split_pmd - splits huge pages into small pages, required to keep a
+ *	dirty log at small page granularity; otherwise whole huge pages would
+ *	have to be re-sent, and in practice even an idle system then has
+ *	trouble converging. Called while write protecting the entire VM
+ *	address space, initially when the migration thread enables the
+ *	KVM_MEM_LOG_DIRTY_PAGES flag.
+ *	The mmu_lock is held during splitting.
+ *
+ * @kvm:	The KVM pointer
+ * @pmd:	Pmd of the 2nd stage huge page
+ * @addr:	Guest Physical Address
+ */
+static int kvm_split_pmd(struct kvm *kvm, pmd_t *pmd, u64 addr)
+{
+	struct page *page;
+	pfn_t pfn = pmd_pfn(*pmd);
+	pte_t *pte;
+	int i;
+
+	page = alloc_page(GFP_KERNEL);
+	if (page == NULL)
+		return -ENOMEM;
+
+	pte = page_address(page);
+	/* fill in the ptes first, using the pmd pfn */
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		pte[i] = pfn_pte(pfn + i, PAGE_S2);
+
+	kvm_clean_pte(pte);
+	/* after the page table is set up, install the pmd */
+	pmd_populate_kernel(NULL, pmd, pte);
+
+	/* get a reference on the pte page */
+	get_page(virt_to_page(pte));
+	return 0;
+}
+
+/* Walks a PMD page table range and write protects it. Called with
+ * 'kvm->mmu_lock' held.
+ */
+static void stage2_wp_pmd_range(phys_addr_t addr, phys_addr_t end, pmd_t *pmd)
+{
+	pte_t *pte;
+
+	while (addr < end) {
+		pte = pte_offset_kernel(pmd, addr);
+		addr += PAGE_SIZE;
+		if (!pte_present(*pte))
+			continue;
+		/* skip pages that are already write protected */
+		if (kvm_s2pte_readonly(pte))
+			continue;
+		kvm_set_s2pte_readonly(pte);
+	}
+}
+
+/* Walks a PUD page table range and write protects it, splitting huge pages
+ * into small pages where necessary. Called with 'kvm->mmu_lock' held.
+ */
+static int stage2_wp_pud_range(struct kvm *kvm, phys_addr_t addr,
+				phys_addr_t end, pud_t *pud)
+{
+	pmd_t *pmd;
+	phys_addr_t pmd_end;
+	int ret;
+
+	while (addr < end) {
+		/* If needed, give up the CPU during the PUD page table walk */
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
+			cond_resched_lock(&kvm->mmu_lock);
+
+		pmd = pmd_offset(pud, addr);
+		if (!pmd_present(*pmd)) {
+			addr = kvm_pmd_addr_end(addr, end);
+			continue;
+		}
+
+		if (kvm_pmd_huge(*pmd)) {
+			ret = kvm_split_pmd(kvm, pmd, addr);
+			/* Failed to split up the huge page, abort. */
+			if (ret < 0)
+				return ret;
+
+			addr = kvm_pmd_addr_end(addr, end);
+			continue;
+		}
+
+		pmd_end = kvm_pmd_addr_end(addr, end);
+		stage2_wp_pmd_range(addr, pmd_end, pmd);
+		addr = pmd_end;
+	}
+	return 0;
+}
+
+/* Walks a PGD page table range and write protects it. Called with
+ * 'kvm->mmu_lock' held.
+ */
+static int stage2_wp_pgd_range(struct kvm *kvm, phys_addr_t addr,
+				phys_addr_t end, pgd_t *pgd)
+{
+	phys_addr_t pud_end;
+	pud_t *pud;
+	int ret;
+
+	while (addr < end) {
+		/* give up the CPU if the mmu_lock is needed by other vCPUs */
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
+			cond_resched_lock(&kvm->mmu_lock);
+
+		pud = pud_offset(pgd, addr);
+		if (!pud_present(*pud)) {
+			addr = kvm_pud_addr_end(addr, end);
+			continue;
+		}
+
+		/* Fail if the PUD is huge; splitting PUDs is not supported */
+		if (pud_huge(*pud))
+			return -EFAULT;
+
+		/* By default 'nopud' folding is used, which fails with guests
+		 * larger than 1GB. Added to support 4-level page tables.
+		 */
+		pud_end = kvm_pud_addr_end(addr, end);
+		ret = stage2_wp_pud_range(kvm, addr, pud_end, pud);
+		if (ret < 0)
+			return ret;
+		addr = pud_end;
+	}
+	return 0;
+}
+
+/**
+ * kvm_mmu_slot_remove_write_access - write protects the entire memslot
+ *	address space. Called at the start of migration, when the
+ *	KVM_MEM_LOG_DIRTY_PAGES flag is set on the memslot. After this
+ *	function returns, all pages (minus the ones faulted in or released
+ *	while the mmu_lock was given up) must be write protected, in order to
+ *	track the dirty pages to migrate on subsequent dirty log reads.
+ *	The mmu_lock is held during write protecting and released on
+ *	contention.
+ *
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot the dirty log is retrieved for
+ */
+int kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
+{
+	pgd_t *pgd;
+	pgd_t *pgdp = kvm->arch.pgd;
+	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	phys_addr_t pgdir_end;
+	int ret = -ENOMEM;
+
+	spin_lock(&kvm->mmu_lock);
+	/* mark the start of migration, synchronize with the Data Abort handler */
+	kvm->arch.migration_in_progress = 1;
+
+	/* Walk the range, split up huge pages as needed and write protect ptes */
+	while (addr < end) {
+		pgd = pgdp + pgd_index(addr);
+		if (!pgd_present(*pgd)) {
+			addr = kvm_pgd_addr_end(addr, end);
+			continue;
+		}
+
+		pgdir_end = kvm_pgd_addr_end(addr, end);
+		ret = stage2_wp_pgd_range(kvm, addr, pgdir_end, pgd);
+		/* Failed to WP a pgd range, abort */
+		if (ret < 0)
+			goto out;
+		addr = pgdir_end;
+	}
+	ret = 0;
+	/* Flush TLBs; ARMv7 and later use hardware broadcast, not IPIs */
+	kvm_flush_remote_tlbs(kvm);
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return ret;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot,
 			  unsigned long fault_status)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 03a0381..1d11912 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -184,7 +184,10 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 	return called;
 }
 
-void kvm_flush_remote_tlbs(struct kvm *kvm)
+/* Architectures such as ARMv7 and later broadcast TLB invalidations in
+ * hardware and don't use IPIs.
+ */
+void __weak kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	long dirty_count = kvm->tlbs_dirty;
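
For reference, here is a minimal userspace sketch (not part of this patch) of
how a migration manager would exercise the path added above: re-registering a
memslot with the KVM_MEM_LOG_DIRTY_PAGES flag reaches
kvm_arch_prepare_memory_region(), which now calls
kvm_mmu_slot_remove_write_access() to split huge pages and write protect the
slot; later KVM_GET_DIRTY_LOG calls return the pages dirtied since the
previous read. The vm_fd, host_mem and slot geometry below are illustrative
assumptions, and error handling is omitted.

/*
 * Illustrative only: drive the initial write protect pass from userspace.
 * Assumes 'vm_fd' is an open KVM VM fd and 'host_mem' already backs the slot.
 */
#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#define SLOT_ID		0
#define GUEST_BASE	0x40000000UL
#define SLOT_SIZE	(256UL << 20)	/* 256MB memslot, illustrative */
#define PAGE_SZ		4096UL

static void start_dirty_logging(int vm_fd, void *host_mem)
{
	struct kvm_userspace_memory_region mr = {
		.slot		 = SLOT_ID,
		.flags		 = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = GUEST_BASE,
		.memory_size	 = SLOT_SIZE,
		.userspace_addr	 = (unsigned long)host_mem,
	};

	/*
	 * Re-registering the slot with KVM_MEM_LOG_DIRTY_PAGES is what makes
	 * kvm_arch_prepare_memory_region() call
	 * kvm_mmu_slot_remove_write_access(): huge pages are split and every
	 * pte in the slot is write protected.
	 */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mr);
}

static void fetch_dirty_bitmap(int vm_fd)
{
	unsigned long npages = SLOT_SIZE / PAGE_SZ;
	void *bitmap = calloc(1, (npages + 7) / 8);
	struct kvm_dirty_log log = {
		.slot		= SLOT_ID,
		.dirty_bitmap	= bitmap,
	};

	/* Pages written since the last call show up as set bits. */
	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
	/* ...send the pages whose bits are set, then repeat until converged... */
	free(bitmap);
}

This mirrors the pattern a migration manager such as QEMU uses: enable dirty
logging on all RAM slots, then iterate over the dirty bitmap until the
remaining dirty set is small enough to stop the guest and finish the copy.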