From patchwork Thu Jun  9 14:01:26 2011
From: Takuya Yoshikawa
Date: Thu, 9 Jun 2011 23:01:26 +0900
To: avi@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp, mingo@elte.hu
Subject: [PATCH 1/4] KVM: MMU: Clean up the error handling of walk_addr_generic()
Message-Id: <20110609230126.145e8029.takuya.yoshikawa@gmail.com>
In-Reply-To: <20110609225949.91cce4a0.takuya.yoshikawa@gmail.com>
References: <20110609225949.91cce4a0.takuya.yoshikawa@gmail.com>
X-Mailer: Sylpheed 3.1.0 (GTK+ 2.24.4; x86_64-pc-linux-gnu)
X-Mailing-List: kvm@vger.kernel.org

Avoid two-step jumps to the error handling part. This eliminates the use of
the variables present and rsvd_fault. We also mark the variables
write/user/fetch_fault with const to show that these do not change in the
function.

These were suggested by Ingo Molnar.
Cc: Ingo Molnar
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/paging_tmpl.h |   64 +++++++++++++++++++------------------------
 1 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6c4dc01..51e5990 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -125,18 +125,17 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	gfn_t table_gfn;
 	unsigned index, pt_access, uninitialized_var(pte_access);
 	gpa_t pte_gpa;
-	bool eperm, present, rsvd_fault;
-	int offset, write_fault, user_fault, fetch_fault;
-
-	write_fault = access & PFERR_WRITE_MASK;
-	user_fault = access & PFERR_USER_MASK;
-	fetch_fault = access & PFERR_FETCH_MASK;
+	bool eperm;
+	int offset;
+	const int write_fault = access & PFERR_WRITE_MASK;
+	const int user_fault = access & PFERR_USER_MASK;
+	const int fetch_fault = access & PFERR_FETCH_MASK;
+	u16 errcode = 0;

 	trace_kvm_mmu_pagetable_walk(addr, write_fault, user_fault,
 				     fetch_fault);
 walk:
-	present = true;
-	eperm = rsvd_fault = false;
+	eperm = false;
 	walker->level = mmu->root_level;
 	pte           = mmu->get_cr3(vcpu);

@@ -145,7 +144,7 @@ walk:
 		pte = kvm_pdptr_read_mmu(vcpu, mmu, (addr >> 30) & 3);
 		trace_kvm_mmu_paging_element(pte, walker->level);
 		if (!is_present_gpte(pte)) {
-			present = false;
+			errcode |= PFERR_PRESENT_MASK;
 			goto error;
 		}
 		--walker->level;
@@ -171,34 +170,34 @@ walk:
 		real_gfn = mmu->translate_gpa(vcpu, gfn_to_gpa(table_gfn),
 					      PFERR_USER_MASK|PFERR_WRITE_MASK);
 		if (unlikely(real_gfn == UNMAPPED_GVA)) {
-			present = false;
-			break;
+			errcode |= PFERR_PRESENT_MASK;
+			goto error;
 		}
 		real_gfn = gpa_to_gfn(real_gfn);

 		host_addr = gfn_to_hva(vcpu->kvm, real_gfn);
 		if (unlikely(kvm_is_error_hva(host_addr))) {
-			present = false;
-			break;
+			errcode |= PFERR_PRESENT_MASK;
+			goto error;
 		}

 		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
 		if (unlikely(__copy_from_user(&pte, ptep_user, sizeof(pte)))) {
-			present = false;
-			break;
+			errcode |= PFERR_PRESENT_MASK;
+			goto error;
 		}

 		trace_kvm_mmu_paging_element(pte, walker->level);

 		if (unlikely(!is_present_gpte(pte))) {
-			present = false;
-			break;
+			errcode |= PFERR_PRESENT_MASK;
+			goto error;
 		}

 		if (unlikely(is_rsvd_bits_set(&vcpu->arch.mmu, pte,
 					      walker->level))) {
-			rsvd_fault = true;
-			break;
+			errcode |= PFERR_RSVD_MASK;
+			goto error;
 		}

 		if (unlikely(write_fault && !is_writable_pte(pte)
@@ -213,16 +212,15 @@ walk:
 			eperm = true;
 #endif

-		if (!eperm && !rsvd_fault
-		    && unlikely(!(pte & PT_ACCESSED_MASK))) {
+		if (!eperm && unlikely(!(pte & PT_ACCESSED_MASK))) {
 			int ret;
 			trace_kvm_mmu_set_accessed_bit(table_gfn, index,
 						       sizeof(pte));
 			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
 						  pte, pte|PT_ACCESSED_MASK);
 			if (unlikely(ret < 0)) {
-				present = false;
-				break;
+				errcode |= PFERR_PRESENT_MASK;
+				goto error;
 			} else if (ret)
 				goto walk;

@@ -270,7 +268,7 @@ walk:
 		--walker->level;
 	}

-	if (unlikely(!present || eperm || rsvd_fault))
+	if (unlikely(eperm))
 		goto error;

 	if (write_fault && unlikely(!is_dirty_gpte(pte))) {
@@ -280,7 +278,7 @@ walk:
 		ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
 					  pte, pte|PT_DIRTY_MASK);
 		if (unlikely(ret < 0)) {
-			present = false;
+			errcode |= PFERR_PRESENT_MASK;
 			goto error;
 		} else if (ret)
 			goto walk;
@@ -297,19 +295,13 @@ walk:
 	return 1;

 error:
-	walker->fault.vector = PF_VECTOR;
-	walker->fault.error_code_valid = true;
-	walker->fault.error_code = 0;
-	if (present)
-		walker->fault.error_code |= PFERR_PRESENT_MASK;
-
-	walker->fault.error_code |= write_fault | user_fault;
-
+	errcode |= write_fault | user_fault;
 	if (fetch_fault && mmu->nx)
-		walker->fault.error_code |= PFERR_FETCH_MASK;
-	if (rsvd_fault)
-		walker->fault.error_code |= PFERR_RSVD_MASK;
+		errcode |= PFERR_FETCH_MASK;

+	walker->fault.vector = PF_VECTOR;
+	walker->fault.error_code_valid = true;
+	walker->fault.error_code = errcode;
 	walker->fault.address = addr;
 	walker->fault.nested_page_fault = mmu != vcpu->arch.walk_mmu;
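[Editor's note] The control-flow change the patch describes can be seen in isolation with a minimal standalone sketch. Everything here is hypothetical (the walk() function and ERR_* bits are made up for illustration, not the kernel's code): instead of setting present/rsvd_fault flags, breaking out, and re-testing the flags before jumping to the error label, each failure site ORs its bit into a single accumulated error code and jumps straight to the error path.

```c
#include <stdint.h>

/* Hypothetical error-code bits, standing in for the PFERR_* masks. */
#define ERR_PRESENT (1u << 0)
#define ERR_RSVD    (1u << 3)

/* One-step error handling: each failure ORs its bit into errcode and
 * jumps directly to the error label -- no intermediate bool flags and
 * no second check after the loop. */
static uint16_t walk(int present, int rsvd_bits_set)
{
	uint16_t errcode = 0;

	if (!present) {
		errcode |= ERR_PRESENT;
		goto error;
	}
	if (rsvd_bits_set) {
		errcode |= ERR_RSVD;
		goto error;
	}
	return 0;		/* success */
error:
	return errcode;		/* caller builds the fault from errcode */
}
```

With the flag variables gone, the inputs that never change inside the walk (write_fault/user_fault/fetch_fault in the patch) can be declared const, which is what the patch does.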