From patchwork Thu May 9 06:44:33 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2543321
Date: Thu, 9 May 2013 15:44:33 +0900
From: Takuya Yoshikawa
To: gleb@redhat.com, pbonzini@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
Message-Id: <20130509154433.d8b62a0f.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp>
X-Mailer: Sylpheed 3.1.0 (GTK+ 2.24.4; x86_64-pc-linux-gnu)
X-Mailing-List: kvm@vger.kernel.org

Rather than clearing the ACC_WRITE_MASK bit of pte_access inside the
"if (mmu_need_write_protect())" block just to keep the subsequent if
statement from calling mark_page_dirty(), simply move the call into
the appropriate else branch.

Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 004cc87..08119a8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
 			ret = 1;
-			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
-		}
+		} else
+			mark_page_dirty(vcpu->kvm, gfn);
 	}
 
-	if (pte_access & ACC_WRITE_MASK)
-		mark_page_dirty(vcpu->kvm, gfn);
-
 set_pte:
 	if (mmu_spte_update(sptep, spte))
 		kvm_flush_remote_tlbs(vcpu->kvm);