From patchwork Fri Nov 20 08:48:05 2015
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 7665721
Date: Fri, 20 Nov 2015 17:48:05 +0900
From: Takuya Yoshikawa
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mtosatti@redhat.com,
    guangrong.xiao@linux.intel.com
Subject: [PATCH 09/10] KVM: x86: MMU: Move parent_pte handling from
 kvm_mmu_get_page() to link_shadow_page()
Message-Id: <20151120174805.a91793d1fce2f0a493f1b955@lab.ntt.co.jp>
In-Reply-To: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>
References: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>

Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, a link_shadow_page() call follows to set the parent entry so
that the new mapping points to the returned page table.  Moving the
parent_pte handling into link_shadow_page() allows us to clean up the
code, because parent_pte is passed to kvm_mmu_get_page() only to call
mark_unsync() and mmu_page_add_parent_pte().
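To illustrate (a sketch only, with placeholder variable names, not
part of the diff below): after this change a call site pairs the two
helpers as

	sp = kvm_mmu_get_page(vcpu, gfn, addr, level, direct, access,
			      sptep);
	/* sets the spte, propagates unsync state, and adds sptep as a
	 * parent pte -- all of the parent_pte handling in one place */
	link_shadow_page(vcpu, sptep, sp);

and the parent_pte argument of kvm_mmu_get_page() no longer has any
user inside that function, so a later patch can remove it.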
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c         | 22 ++++++++--------------
 arch/x86/kvm/paging_tmpl.h |  6 ++----
 2 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4e29d9a..b020323 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2107,14 +2107,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		if (sp->unsync_children) {
 			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
 			kvm_mmu_mark_parents_unsync(sp);
-			if (parent_pte)
-				mark_unsync(parent_pte);
 		} else if (sp->unsync) {
 			kvm_mmu_mark_parents_unsync(sp);
-			if (parent_pte)
-				mark_unsync(parent_pte);
 		}
-		mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
 		__clear_sp_write_flooding_count(sp);
 		trace_kvm_mmu_get_page(sp, false);
@@ -2125,8 +2120,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	sp = kvm_mmu_alloc_page(vcpu, direct);
 
-	mmu_page_add_parent_pte(vcpu, sp, parent_pte);
-
 	sp->gfn = gfn;
 	sp->role = role;
 	hlist_add_head(&sp->hash_link,
@@ -2194,7 +2187,8 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 	return __shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+			     struct kvm_mmu_page *sp)
 {
 	u64 spte;
 
@@ -2205,6 +2199,11 @@ static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
 	       shadow_user_mask | shadow_x_mask | shadow_accessed_mask;
 
 	mmu_spte_set(sptep, spte);
+
+	if (sp->unsync_children || sp->unsync)
+		mark_unsync(sptep);
+
+	mmu_page_add_parent_pte(vcpu, sp, sptep);
 }
 
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -2263,11 +2262,6 @@ static void kvm_mmu_page_unlink_children(struct kvm *kvm,
 		mmu_page_zap_pte(kvm, sp, sp->spt + i);
 }
 
-static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
-{
-	mmu_page_remove_parent_pte(sp, parent_pte);
-}
-
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	u64 *sptep;
@@ -2733,7 +2727,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
 					      iterator.level - 1, 1, ACC_ALL,
 					      iterator.sptep);
 
-			link_shadow_page(iterator.sptep, sp);
+			link_shadow_page(vcpu, iterator.sptep, sp);
 		}
 	}
 	return emulate;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 11650ea..0dcf9c8 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			goto out_gpte_changed;
 
 		if (sp)
-			link_shadow_page(it.sptep, sp);
+			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
 	for (;
@@ -618,7 +618,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
 		sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
 				      true, direct_access, it.sptep);
-		link_shadow_page(it.sptep, sp);
+		link_shadow_page(vcpu, it.sptep, sp);
 	}
 
 	clear_sp_write_flooding_count(it.sptep);
@@ -629,8 +629,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	return emulate;
 
 out_gpte_changed:
-	if (sp)
-		kvm_mmu_put_page(sp, it.sptep);
 	kvm_release_pfn_clean(pfn);
 	return 0;
 }