From patchwork Mon Feb 22 08:23:40 2016
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 8373131
From: Takuya Yoshikawa
Subject: [PATCH 1/2] KVM: x86: MMU: Consolidate quickly_check_mmio_pf() and is_mmio_page_fault()
Date: Mon, 22 Feb 2016 17:23:40 +0900
Message-Id: <1456129421-5891-2-git-send-email-yoshikawa_takuya_b1@lab.ntt.co.jp>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1456129421-5891-1-git-send-email-yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <1456129421-5891-1-git-send-email-yoshikawa_takuya_b1@lab.ntt.co.jp>
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Takuya Yoshikawa

These two functions have only slight differences:

  - whether 'addr' is of type u64 or of type gva_t
  - whether they take a 'direct' parameter or not

Concerning the former, quickly_check_mmio_pf()'s u64 is better because
'addr' needs to be able to hold both a guest physical address and a
guest virtual address.  The latter is just a stylistic issue, since we
can always derive the mode from the 'vcpu' as is_mmio_page_fault()
does; this patch keeps the parameter to make the following patch
cleaner.

In addition, the patch renames the function to mmio_info_in_cache() to
make it clear what it actually checks for.
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955d..a28b734 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3273,7 +3273,7 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
 	return __is_rsvd_bits_set(&mmu->shadow_zero_check, spte, level);
 }
 
-static bool quickly_check_mmio_pf(struct kvm_vcpu *vcpu, u64 addr, bool direct)
+static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
 	if (direct)
 		return vcpu_match_mmio_gpa(vcpu, addr);
@@ -3332,7 +3332,7 @@ int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	u64 spte;
 	bool reserved;
 
-	if (quickly_check_mmio_pf(vcpu, addr, direct))
+	if (mmio_info_in_cache(vcpu, addr, direct))
 		return RET_MMIO_PF_EMULATE;
 
 	reserved = walk_shadow_page_get_mmio_spte(vcpu, addr, &spte);
@@ -4354,19 +4354,12 @@ static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
-static bool is_mmio_page_fault(struct kvm_vcpu *vcpu, gva_t addr)
-{
-	if (vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu))
-		return vcpu_match_mmio_gpa(vcpu, addr);
-
-	return vcpu_match_mmio_gva(vcpu, addr);
-}
-
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 		       void *insn, int insn_len)
 {
 	int r, emulation_type = EMULTYPE_RETRY;
 	enum emulation_result er;
+	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
 	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
 	if (r < 0)
@@ -4377,7 +4370,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 		goto out;
 	}
 
-	if (is_mmio_page_fault(vcpu, cr2))
+	if (mmio_info_in_cache(vcpu, cr2, direct))
 		emulation_type = 0;
 
 	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
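
For readers following along outside the kernel tree, here is a minimal standalone C sketch of the control flow the patch ends up with: the caller derives 'direct' from the vcpu (direct map or nested paging) once, and the consolidated helper picks the GPA or GVA cache check accordingly. The struct layout, the match helpers, and main() below are simplified stand-ins invented for illustration, not the real KVM definitions; only the dispatch logic mirrors the diff above.

/*
 * Simplified, self-contained model of mmio_info_in_cache() dispatch.
 * Structures and match helpers are hypothetical stand-ins; the real
 * KVM code also checks an MMIO cache generation number, omitted here.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct kvm_vcpu {
	struct {
		struct {
			bool direct_map;
		} mmu;
		uint64_t mmio_gpa;	/* cached MMIO info from a previous fault */
		uint64_t mmio_gva;
	} arch;
	bool nested;			/* stand-in for mmu_is_nested() */
};

static bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, uint64_t addr)
{
	return vcpu->arch.mmio_gpa == addr;
}

static bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, uint64_t addr)
{
	return vcpu->arch.mmio_gva == addr;
}

/* The consolidated helper: one entry point for both GPA and GVA lookups. */
static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, uint64_t addr, bool direct)
{
	if (direct)
		return vcpu_match_mmio_gpa(vcpu, addr);

	return vcpu_match_mmio_gva(vcpu, addr);
}

int main(void)
{
	struct kvm_vcpu vcpu = {
		.arch = { .mmu = { .direct_map = true }, .mmio_gpa = 0xfee00000 },
	};

	/* The caller computes 'direct' once, as kvm_mmu_page_fault() now does. */
	bool direct = vcpu.arch.mmu.direct_map || vcpu.nested;

	printf("cached MMIO fault? %d\n",
	       mmio_info_in_cache(&vcpu, 0xfee00000, direct));
	return 0;
}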