From patchwork Mon Nov 28 04:18:53 2016
X-Patchwork-Submitter: Kyle Huey
X-Patchwork-Id: 9449073
From: Kyle Huey
To: Paolo Bonzini, Radim Krčmář, Thomas Gleixner, Ingo Molnar,
    "H. Peter Anvin", x86@kernel.org, Joerg Roedel
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/5] KVM: VMX: Reorder some skip_emulated_instruction calls
Date: Sun, 27 Nov 2016 20:18:53 -0800
Message-Id: <20161128041856.11420-3-khuey@kylehuey.com>
X-Mailer: git-send-email 2.10.2
In-Reply-To: <20161128041856.11420-1-khuey@kylehuey.com>
References: <20161128041856.11420-1-khuey@kylehuey.com>
X-Mailing-List: kvm@vger.kernel.org

The functions being moved ahead of skip_emulated_instruction here don't
need updated IPs, and moving skip_emulated_instruction to the end will
make it easier to return its return value.

Signed-off-by: Kyle Huey
---
 arch/x86/kvm/vmx.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e4af9699..f2f9cf5 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5703,18 +5703,18 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 				vcpu->run->exit_reason = KVM_EXIT_SET_TPR;
 				return 0;
 			}
 		}
 		break;
 	case 2: /* clts */
 		handle_clts(vcpu);
 		trace_kvm_cr_write(0, kvm_read_cr0(vcpu));
-		skip_emulated_instruction(vcpu);
 		vmx_fpu_activate(vcpu);
+		skip_emulated_instruction(vcpu);
 		return 1;
 	case 1: /*mov from cr*/
 		switch (cr) {
 		case 3:
 			val = kvm_read_cr3(vcpu);
 			kvm_register_write(vcpu, reg, val);
 			trace_kvm_cr_read(cr, val);
 			skip_emulated_instruction(vcpu);
@@ -6128,18 +6128,18 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 {
 	int ret;
 	gpa_t gpa;
 
 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
 	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
-		skip_emulated_instruction(vcpu);
 		trace_kvm_fast_mmio(gpa);
+		skip_emulated_instruction(vcpu);
 		return 1;
 	}
 
 	ret = handle_mmio_page_fault(vcpu, gpa, true);
 	if (likely(ret == RET_MMIO_PF_EMULATE))
 		return x86_emulate_instruction(vcpu, gpa, 0, NULL, 0) ==
 					      EMULATE_DONE;
@@ -6502,18 +6502,18 @@ static __exit void hardware_unsetup(void)
  * Indicate a busy-waiting vcpu in spinlock. We do not enable the PAUSE
  * exiting, so only get here on cpu with PAUSE-Loop-Exiting.
  */
 static int handle_pause(struct kvm_vcpu *vcpu)
 {
 	if (ple_gap)
 		grow_ple_window(vcpu);
 
-	skip_emulated_instruction(vcpu);
 	kvm_vcpu_on_spin(vcpu);
+	skip_emulated_instruction(vcpu);
 
 	return 1;
 }
 
 static int handle_nop(struct kvm_vcpu *vcpu)
 {
 	skip_emulated_instruction(vcpu);
 	return 1;
@@ -6957,18 +6957,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	vmx->nested.vmcs02_num = 0;
 
 	hrtimer_init(&vmx->nested.preemption_timer, CLOCK_MONOTONIC,
 		     HRTIMER_MODE_REL_PINNED);
 	vmx->nested.preemption_timer.function = vmx_preemption_timer_fn;
 
 	vmx->nested.vmxon = true;
 
-	skip_emulated_instruction(vcpu);
 	nested_vmx_succeed(vcpu);
+	skip_emulated_instruction(vcpu);
 	return 1;
 
 out_shadow_vmcs:
 	kfree(vmx->nested.cached_vmcs12);
 
 out_cached_vmcs12:
 	free_page((unsigned long)vmx->nested.msr_bitmap);
@@ -7078,18 +7078,18 @@ static void free_nested(struct vcpu_vmx *vmx)
 }
 
 /* Emulate the VMXOFF instruction */
 static int handle_vmoff(struct kvm_vcpu *vcpu)
 {
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 	free_nested(to_vmx(vcpu));
-	skip_emulated_instruction(vcpu);
 	nested_vmx_succeed(vcpu);
+	skip_emulated_instruction(vcpu);
 	return 1;
 }
 
 /* Emulate the VMCLEAR instruction */
 static int handle_vmclear(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	gpa_t vmptr;
@@ -7119,18 +7119,18 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 	}
 
 	vmcs12 = kmap(page);
 	vmcs12->launch_state = 0;
 	kunmap(page);
 	nested_release_page(page);
 
 	nested_free_vmcs02(vmx, vmptr);
-	skip_emulated_instruction(vcpu);
 	nested_vmx_succeed(vcpu);
+	skip_emulated_instruction(vcpu);
 	return 1;
 }
 
 static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch);
 
 /* Emulate the VMLAUNCH instruction */
 static int handle_vmlaunch(struct kvm_vcpu *vcpu)
 {
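
For context, a minimal sketch of the direction this reordering points at,
assuming a later change makes the skip helper return an int (that
int-returning variant is an assumption here, not part of this patch): once
skip_emulated_instruction() is the last statement in a handler, the handler
can forward its result instead of hard-coding the return value.

	/*
	 * Sketch only, not part of this patch: assumes a hypothetical
	 * int-returning skip helper (e.g. 1 = keep running the guest,
	 * 0 = exit to userspace, as with other exit handlers).
	 */
	static int handle_pause(struct kvm_vcpu *vcpu)
	{
		if (ple_gap)
			grow_ple_window(vcpu);

		kvm_vcpu_on_spin(vcpu);

		/* The skip is last, so its result can simply be propagated. */
		return skip_emulated_instruction(vcpu);
	}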