From patchwork Tue Mar 12 08:45:30 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2253941
Date: Tue, 12 Mar 2013 17:45:30 +0900
From: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
To: mtosatti@redhat.com, gleb@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH 2/2] KVM: x86: Optimize mmio spte zapping when
 creating/moving memslot
Message-Id: <20130312174530.489f793c.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130312174333.7f76148e.yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <20130312174333.7f76148e.yoshikawa_takuya_b1@lab.ntt.co.jp>
X-Mailer: Sylpheed 3.1.0 (GTK+ 2.24.4; x86_64-pc-linux-gnu)
X-Mailing-List: kvm@vger.kernel.org

When we create or move a memory slot, we need to zap mmio sptes.
Currently, zap_all() is used for this, which causes two problems:

  - extra page faults after zapping mmu pages
  - long mmu_lock hold time while zapping mmu pages

For the latter, Marcelo reported a disastrous mmu_lock hold time during
hot-plug, which made the guest unresponsive for a long time.

This patch takes a simple approach to fix these problems: do not zap
mmu pages unless they are marked mmio cached.  On our test box, this
took only 50us for a 4GB guest and we no longer saw millisecond-long
mmu_lock hold times.

Note that we still need zap_all() for other cases, so additional work
is also needed; Xiao's work may be the one.
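[Reader's note: the targeted zap below relies on the per-shadow-page
mmio_cached flag added earlier in this series (patch 1/2).  As a minimal
sketch, assuming the flag lives in struct kvm_mmu_page and is set
whenever an mmio spte is installed, the marking side might look like the
following; the helper name is illustrative and not taken from this
patch:

static void mark_sp_mmio_cached(u64 *sptep)
{
	/*
	 * Illustrative only: look up the shadow page that contains
	 * this spte (page_header() is the existing mmu.c helper for
	 * that) and remember that it holds at least one mmio spte,
	 * so kvm_mmu_zap_mmio_sptes() can skip every other page.
	 */
	struct kvm_mmu_page *sp = page_header(__pa(sptep));

	sp->mmio_cached = true;
}

The real hook in patch 1/2 presumably sits in the path that creates
mmio sptes; its exact placement is not shown here.]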
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/mmu.c              |   18 ++++++++++++++++++
 arch/x86/kvm/x86.c              |    2 +-
 3 files changed, 20 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b84310a..028b03f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -768,6 +768,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
+void kvm_mmu_zap_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index de45ec1..c1a9b7b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4189,6 +4189,24 @@ restart:
 	spin_unlock(&kvm->mmu_lock);
 }
 
+void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
+{
+	struct kvm_mmu_page *sp, *node;
+	LIST_HEAD(invalid_list);
+
+	spin_lock(&kvm->mmu_lock);
+restart:
+	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+		if (!sp->mmio_cached)
+			continue;
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+			goto restart;
+	}
+
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct kvm *kvm;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 35b4912..16b6df2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6969,7 +6969,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * mmio sptes.
 	 */
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
-		kvm_mmu_zap_all(kvm);
+		kvm_mmu_zap_mmio_sptes(kvm);
 		kvm_reload_remote_mmus(kvm);
 	}
 }