From patchwork Thu Nov 26 12:15:38 2015
Date: Thu, 26 Nov 2015 21:15:38 +0900
From: Takuya Yoshikawa
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mtosatti@redhat.com,
    guangrong.xiao@linux.intel.com
Subject: [PATCH 2/3] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
Message-Id: <20151126211538.76ebbdd477ebdcaef3110384@lab.ntt.co.jp>
In-Reply-To: <20151126211331.92fd33fae0be8225ab64d577@lab.ntt.co.jp>
References: <20151126211331.92fd33fae0be8225ab64d577@lab.ntt.co.jp>

Now that kvm_mmu_get_page() has been changed so that a parent pointer is
never linked into the sp->parent_ptes chain before the entry it points to
is properly set, we can use the for_each_rmap_spte macro instead of
pte_list_walk().  Unlike pte_list_walk(), for_each_rmap_spte expects every
spte it visits to be present, and that is exactly what the earlier change
guarantees.
Signed-off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ec61b22..204c7d4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1007,26 +1007,6 @@ static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 	}
 }
 
-typedef void (*pte_list_walk_fn) (u64 *spte);
-static void pte_list_walk(struct kvm_rmap_head *rmap_head, pte_list_walk_fn fn)
-{
-	struct pte_list_desc *desc;
-	int i;
-
-	if (!rmap_head->val)
-		return;
-
-	if (!(rmap_head->val & 1))
-		return fn((u64 *)rmap_head->val);
-
-	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-	while (desc) {
-		for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i)
-			fn(desc->sptes[i]);
-		desc = desc->more;
-	}
-}
-
 static struct kvm_rmap_head *__gfn_to_rmap(gfn_t gfn, int level,
 					   struct kvm_memory_slot *slot)
 {
@@ -1749,7 +1729,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
-	pte_list_walk(&sp->parent_ptes, mark_unsync);
+	u64 *sptep;
+	struct rmap_iterator iter;
+
+	for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+		mark_unsync(sptep);
+	}
 }
 
 static void mark_unsync(u64 *spte)
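
For anyone reading along who is not familiar with the rmap encoding both
walkers traverse, below is a minimal userspace sketch, not kernel code, of
how a for_each_rmap_spte-style iterator can be built over it.  The encoding
is taken from the pte_list_walk() body removed above: rmap_head->val holds
either a single spte pointer or, with bit 0 set, a pointer to a
pte_list_desc chain.  The PTE_LIST_EXT value and the helper bodies are
assumptions for illustration, not the kernel's actual rmap_get_first() and
rmap_get_next().

/* Minimal userspace model of the rmap encoding -- illustration only. */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

#define PTE_LIST_EXT 3	/* capacity per desc; value assumed for illustration */

/* val is either one spte pointer, or (pte_list_desc pointer | 1) */
struct kvm_rmap_head {
	unsigned long val;
};

struct pte_list_desc {
	u64 *sptes[PTE_LIST_EXT];
	struct pte_list_desc *more;
};

/* iterator state: current desc (NULL in the single-spte case) and slot */
struct rmap_iterator {
	struct pte_list_desc *desc;
	int pos;
};

static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
			   struct rmap_iterator *iter)
{
	if (!rmap_head->val)
		return NULL;

	if (!(rmap_head->val & 1)) {	/* single spte stored inline */
		iter->desc = NULL;
		return (u64 *)rmap_head->val;
	}

	iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
	iter->pos = 0;
	return iter->desc->sptes[0];
}

static u64 *rmap_get_next(struct rmap_iterator *iter)
{
	if (!iter->desc)		/* single-spte case: already done */
		return NULL;

	if (iter->pos < PTE_LIST_EXT - 1 && iter->desc->sptes[iter->pos + 1])
		return iter->desc->sptes[++iter->pos];

	iter->desc = iter->desc->more;	/* move on to the next desc */
	iter->pos = 0;
	return iter->desc ? iter->desc->sptes[0] : NULL;
}

#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)		\
	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);	\
	     _spte_;						\
	     _spte_ = rmap_get_next(_iter_))

int main(void)
{
	u64 a = 1, b = 2, c = 3, d = 4;
	struct pte_list_desc d2 = { { &d }, NULL };
	struct pte_list_desc d1 = { { &a, &b, &c }, &d2 };
	struct kvm_rmap_head head = { (unsigned long)&d1 | 1 };
	struct rmap_iterator iter;
	u64 *sptep;

	for_each_rmap_spte(&head, &iter, sptep)	/* prints 1 2 3 4 */
		printf("%llu\n", (unsigned long long)*sptep);

	return 0;
}

The visible difference from pte_list_walk() is that the iteration state
lives in a caller-owned struct rmap_iterator instead of behind a callback,
which is what lets kvm_mmu_mark_parents_unsync() above become a plain loop.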