From patchwork Tue Jul 30 13:02:09 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2835665
From: Xiao Guangrong
To: gleb@redhat.com
Cc: avi.kivity@gmail.com, mtosatti@redhat.com, pbonzini@redhat.com,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Xiao Guangrong
Subject: [PATCH 11/12] KVM: MMU: locklessly write-protect the page
Date: Tue, 30 Jul 2013 21:02:09 +0800
Message-Id: <1375189330-24066-12-git-send-email-xiaoguangrong@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1375189330-24066-1-git-send-email-xiaoguangrong@linux.vnet.ibm.com>
References: <1375189330-24066-1-git-send-email-xiaoguangrong@linux.vnet.ibm.com>

Currently, when a memslot is marked as dirty-logged or its dirty pages are fetched, we have to write-protect a large amount of guest memory. This is heavy work, and worse, it must be done under mmu-lock, which is also needed by vCPUs to fix their page-table faults and by the mmu-notifier when a host page is being changed.
In a guest that uses many CPUs and much memory, this becomes a scalability issue.

This patch introduces a way to write-protect guest memory without taking mmu-lock. Now that the lockless rmap walk, lockless shadow page table access and lockless spte write-protection are all in place, it is time to implement page write-protection out of mmu-lock.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_host.h |  4 ---
 arch/x86/kvm/mmu.c              | 62 ++++++++++++++++++++++++++++-------------
 arch/x86/kvm/mmu.h              |  6 ++++
 arch/x86/kvm/x86.c              | 19 +++++++++----
 4 files changed, 62 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dc842b6..3ef5645 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -780,10 +780,6 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 		u64 dirty_mask, u64 nx_mask, u64 x_mask);
 
 int kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
-void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
-				     struct kvm_memory_slot *slot,
-				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f3391f..a50eea8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1365,8 +1365,30 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
 	return flush;
 }
 
-/**
- * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
+static void __rmap_write_protect_lockless(u64 *sptep, int level)
+{
+	u64 spte;
+
+retry:
+	spte = mmu_spte_get_lockless(sptep);
+	if (unlikely(!is_last_spte(spte, level) || !is_writable_pte(spte)))
+		return;
+
+	if (likely(cmpxchg64(sptep, spte, spte & ~PT_WRITABLE_MASK) == spte))
+		return;
+
+	goto retry;
+}
+
+static void rmap_write_protect_lockless(unsigned long *rmapp, int level)
+{
+	pte_list_walk_lockless(rmapp, __rmap_write_protect_lockless, level);
+}
+
+/*
+ * kvm_mmu_write_protect_pt_masked_lockless - write protect selected PT level
+ * pages out of mmu-lock.
+ *
  * @kvm: kvm instance
  * @slot: slot to protect
  * @gfn_offset: start of the BITS_PER_LONG pages we care about
@@ -1375,16 +1397,17 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
  */
-void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
-				     struct kvm_memory_slot *slot,
-				     gfn_t gfn_offset, unsigned long mask)
+void
+kvm_mmu_write_protect_pt_masked_lockless(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn_offset, unsigned long mask)
 {
 	unsigned long *rmapp;
 
 	while (mask) {
 		rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 				      PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmapp, false);
+		rmap_write_protect_lockless(rmapp, PT_PAGE_TABLE_LEVEL);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -2661,6 +2684,15 @@ set_pte:
 		++vcpu->kvm->stat.lpages;
 	}
 
+	/*
+	 * We should put the sptep into rmap before dirty log
+	 * otherwise the lockless spte write-protect path will
+	 * clear the dirty bit map but fail to find the spte.
+	 *
+	 * See the comments in kvm_vm_ioctl_get_dirty_log().
+	 */
+	smp_wmb();
+
 	if (pte_access & ACC_WRITE_MASK)
 		mark_page_dirty(vcpu->kvm, gfn);
 done:
@@ -4422,7 +4454,7 @@ int kvm_mmu_setup(struct kvm_vcpu *vcpu)
 	return init_kvm_mmu(vcpu);
 }
 
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
+void kvm_mmu_slot_remove_write_access_lockless(struct kvm *kvm, int slot)
 {
 	struct kvm_memory_slot *memslot;
 	gfn_t last_gfn;
@@ -4431,8 +4463,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 	memslot = id_to_memslot(kvm->memslots, slot);
 	last_gfn = memslot->base_gfn + memslot->npages - 1;
 
-	spin_lock(&kvm->mmu_lock);
-
+	kvm_mmu_rcu_free_page_begin(kvm);
 	for (i = PT_PAGE_TABLE_LEVEL;
 	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
 		unsigned long *rmapp;
@@ -4441,19 +4472,12 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 		rmapp = memslot->arch.rmap[i - PT_PAGE_TABLE_LEVEL];
 		last_index = gfn_to_index(last_gfn, memslot->base_gfn, i);
 
-		for (index = 0; index <= last_index; ++index, ++rmapp) {
-			if (*rmapp)
-				__rmap_write_protect(kvm, rmapp, false);
-
-			if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
-				kvm_flush_remote_tlbs(kvm);
-				cond_resched_lock(&kvm->mmu_lock);
-			}
-		}
+		for (index = 0; index <= last_index; ++index, ++rmapp)
+			rmap_write_protect_lockless(rmapp, i);
 	}
+	kvm_mmu_rcu_free_page_end(kvm);
 
 	kvm_flush_remote_tlbs(kvm);
-	spin_unlock(&kvm->mmu_lock);
 }
 
 #define BATCH_ZAP_PAGES	10
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 85405f1..2a66c57 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -137,4 +137,10 @@ static inline void kvm_mmu_rcu_free_page_end(struct kvm *kvm)
 
 	rcu_read_unlock();
 }
+
+void kvm_mmu_slot_remove_write_access_lockless(struct kvm *kvm, int slot);
+void
+kvm_mmu_write_protect_pt_masked_lockless(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn_offset, unsigned long mask);
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d2caeb9..4983eb3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3531,8 +3531,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
 	memset(dirty_bitmap_buffer, 0, n);
 
-	spin_lock(&kvm->mmu_lock);
-
+	kvm_mmu_rcu_free_page_begin(kvm);
 	for (i = 0; i < n / sizeof(long); i++) {
 		unsigned long mask;
 		gfn_t offset;
@@ -3542,17 +3541,25 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 
 		is_dirty = true;
 
+		/*
+		 * xchg acts as a full barrier that ensures
+		 * clearing dirty bitmap before read rmap.
+		 *
+		 * See the comments in set_spte().
+		 */
 		mask = xchg(&dirty_bitmap[i], 0);
+
 		dirty_bitmap_buffer[i] = mask;
 
 		offset = i * BITS_PER_LONG;
-		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
+		kvm_mmu_write_protect_pt_masked_lockless(kvm, memslot,
+							 offset, mask);
 	}
+	kvm_mmu_rcu_free_page_end(kvm);
+
 	if (is_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
-	spin_unlock(&kvm->mmu_lock);
-
 	r = -EFAULT;
 	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 		goto out;
@@ -7088,7 +7095,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * not be created until the end of the logging.
 	 */
 	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+		kvm_mmu_slot_remove_write_access_lockless(kvm, mem->slot);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
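
For reference, below is a minimal user-space sketch of the cmpxchg retry loop that
__rmap_write_protect_lockless() above relies on, written with C11 atomics. The
PT_PRESENT_MASK / PT_WRITABLE_MASK values and the spte layout are simplified
stand-ins rather than the kernel definitions, and write_protect_lockless() is a
hypothetical illustration, not part of the patch:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PT_PRESENT_MASK  (1ULL << 0)	/* simplified: bit 0 = present  */
#define PT_WRITABLE_MASK (1ULL << 1)	/* simplified: bit 1 = writable */

/* Clear the writable bit of *sptep without any lock, retrying on races. */
static void write_protect_lockless(_Atomic uint64_t *sptep)
{
	uint64_t spte = atomic_load(sptep);

	for (;;) {
		/* Skip entries that are absent or already read-only. */
		if (!(spte & PT_PRESENT_MASK) || !(spte & PT_WRITABLE_MASK))
			return;

		/*
		 * Publish the read-only value only if the entry did not
		 * change under us; on failure 'spte' is reloaded with the
		 * current value and we go around again, like the patch's
		 * cmpxchg64() loop.
		 */
		if (atomic_compare_exchange_weak(sptep, &spte,
						 spte & ~PT_WRITABLE_MASK))
			return;
	}
}

int main(void)
{
	_Atomic uint64_t spte = PT_PRESENT_MASK | PT_WRITABLE_MASK;

	write_protect_lockless(&spte);
	printf("spte after write-protect: %#llx\n",
	       (unsigned long long)atomic_load(&spte));
	return 0;
}

The point this mirrors is that a racing writer only ever makes the compare-and-exchange
fail, so the loop re-reads the fresh value and retries; no lock is needed for the clear
of the writable bit to be safe.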