From patchwork Wed Jan 23 10:16:54 2013
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2023621
Date: Wed, 23 Jan 2013 19:16:54 +0900
From: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
To: mtosatti@redhat.com, gleb@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH 6/8] KVM: MMU: Introduce free_zapped_mmu_pages() for freeing mmu pages in a list
Message-Id: <20130123191654.63e1e44c.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp>
X-Mailer: Sylpheed 3.1.0 (GTK+ 2.24.4; x86_64-pc-linux-gnu)
X-Mailing-List: kvm@vger.kernel.org

This will be split out from kvm_mmu_commit_zap_page() and moved out of
the protection of the mmu_lock later.

Note: kvm_mmu_isolate_page() is folded into kvm_mmu_free_page() since
it now does nothing but free sp->gfns.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c | 35 +++++++++++++++++------------------
 1 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a72c573..97d372a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1461,27 +1461,32 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
 }
 
 /*
- * Remove the sp from shadow page cache, after call it,
- * we can not find this sp from the cache, and the shadow
- * page table is still valid.
- * It should be under the protection of mmu lock.
+ * Free the shadow page table and the sp, we can do it
+ * out of the protection of mmu lock.
  */
-static void kvm_mmu_isolate_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
+
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
+
+	list_del(&sp->link);
+	free_page((unsigned long)sp->spt);
+	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
 /*
- * Free the shadow page table and the sp, we can do it
- * out of the protection of mmu lock.
+ * Free zapped mmu pages in @invalid_list.
+ * Call this after releasing mmu_lock if possible.
  */
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void free_zapped_mmu_pages(struct kvm *kvm,
+				  struct list_head *invalid_list)
 {
-	list_del(&sp->link);
-	free_page((unsigned long)sp->spt);
-	kmem_cache_free(mmu_page_header_cache, sp);
+	struct kvm_mmu_page *sp, *nsp;
+
+	list_for_each_entry_safe(sp, nsp, invalid_list, link)
+		kvm_mmu_free_page(sp);
 }
 
 static unsigned kvm_page_table_hashfn(gfn_t gfn)
@@ -2133,8 +2138,6 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
-	struct kvm_mmu_page *sp, *nsp;
-
 	if (list_empty(invalid_list))
 		return;
 
@@ -2150,11 +2153,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 */
 	kvm_flush_remote_tlbs(kvm);
 
-	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
-		WARN_ON(!sp->role.invalid || sp->root_count);
-		kvm_mmu_isolate_page(sp);
-		kvm_mmu_free_page(sp);
-	}
+	free_zapped_mmu_pages(kvm, invalid_list);
 }
 
 /*
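
For illustration only (not part of this patch): once the later patch
mentioned in the changelog moves the call out of kvm_mmu_commit_zap_page(),
a zap site would be expected to follow roughly the pattern below. The zap
site itself and the local sp variable are assumptions made for the example;
only the function names come from this file.

	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
	/* collect the pages to be zapped onto invalid_list */
	kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
	/* invalidate the pages and flush TLBs while mmu_lock is still held */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);

	/* the actual freeing no longer needs mmu_lock */
	free_zapped_mmu_pages(kvm, &invalid_list);

The point is to keep the free_page()/kmem_cache_free() work outside the
lock while the list manipulation and TLB flush stay protected.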