From patchwork Wed Jan 23 10:16:17 2013
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2023601
Date: Wed, 23 Jan 2013 19:16:17 +0900
From: Takuya Yoshikawa
To: mtosatti@redhat.com, gleb@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH 5/8] KVM: MMU: Delete hash_link node in kvm_mmu_prepare_zap_page()
Message-Id: <20130123191617.029d218d.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp>

Now that we are using for_each_gfn_indirect_valid_sp_safe, we can safely
delete the hash_link node by correctly updating the pointer to the next
one.  The only case we need to care about is when mmu_zap_unsync_children()
has zapped anything other than the current one.

Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d5bf373..a72c573 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1469,7 +1469,6 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
 static void kvm_mmu_isolate_page(struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
-	hlist_del(&sp->hash_link);
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
 }
@@ -2111,9 +2110,15 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 		unaccount_shadowed(kvm, sp->gfn);
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);
+
+	/* Next entry might be deleted by mmu_zap_unsync_children(). */
+	if (npos && ret)
+		npos->hn = sp->hash_link.next;
+
 	if (!sp->root_count) {
 		/* Count self */
 		ret++;
+		hlist_del(&sp->hash_link);
 		list_move(&sp->link, invalid_list);
 		kvm_mod_used_mmu_pages(kvm, -1);
 	} else {
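
For reference, below is a small stand-alone C sketch of the pattern the patch
depends on: the caller keeps a cursor to the next hash-chain node, and the zap
path re-reads that cursor when the children-zapping step may have unlinked the
node it pointed to.  Everything in the sketch (struct page_node, zap_children(),
prepare_zap(), the npos double pointer) is invented for illustration and is not
the KVM code; only the "remember next, re-sync after zapping children" idea
mirrors the diff above.

/*
 * Stand-alone sketch (not KVM code) of the pattern used above: the walker
 * remembers the next hash-chain node before zapping the current one, and the
 * zap routine re-syncs that cursor whenever it may have unlinked other nodes.
 */
#include <stdio.h>
#include <stdlib.h>

struct page_node {
        int gfn;
        int unsync_children;    /* how many other nodes zapping this one removes */
        struct page_node *next; /* stands in for sp->hash_link */
};

static struct page_node *bucket;        /* one hash bucket, singly linked */

static void unlink_node(struct page_node *victim)
{
        struct page_node **pp = &bucket;

        while (*pp && *pp != victim)
                pp = &(*pp)->next;
        if (*pp)
                *pp = victim->next;     /* like hlist_del(): neighbours stay linked */
}

/* Stand-in for mmu_zap_unsync_children(): may unlink nodes other than @sp. */
static int zap_children(struct page_node *sp)
{
        int zapped = 0;

        while (sp->unsync_children > 0 && bucket) {
                struct page_node *child = (bucket != sp) ? bucket : sp->next;

                if (!child)
                        break;
                unlink_node(child);
                free(child);
                sp->unsync_children--;
                zapped++;
        }
        return zapped;
}

/* Mirrors the shape of kvm_mmu_prepare_zap_page(); @npos is the caller's cursor. */
static int prepare_zap(struct page_node *sp, struct page_node **npos)
{
        int ret = zap_children(sp);

        /* Children were zapped: the prefetched next node may be gone, re-read it. */
        if (npos && ret)
                *npos = sp->next;

        unlink_node(sp);
        free(sp);
        return ret + 1;
}

int main(void)
{
        /* Build a bucket with gfns 0..4; zapping gfn 2 also zaps one child. */
        for (int i = 4; i >= 0; i--) {
                struct page_node *n = calloc(1, sizeof(*n));

                n->gfn = i;
                n->unsync_children = (i == 2);
                n->next = bucket;
                bucket = n;
        }

        /* Safe walk: remember the next node before possibly deleting sp. */
        struct page_node *sp = bucket;

        while (sp) {
                struct page_node *npos = sp->next;

                prepare_zap(sp, &npos); /* may re-sync npos */
                sp = npos;
        }
        printf("bucket empty: %s\n", bucket ? "no" : "yes");
        return 0;
}

Moving hlist_del() into kvm_mmu_prepare_zap_page() works here because
hlist_del(), like unlink_node() in the sketch, relinks the neighbours of the
deleted entry: a prefetched next pointer only goes stale when some other node
is zapped behind the walker's back, which is exactly the case the npos update
handles.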