From patchwork Mon Dec 9 06:55:51 2013
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 3308781
Date: Mon, 9 Dec 2013 08:55:51 +0200
From: Gleb Natapov
To: Jan Kiszka
Cc: Paolo Bonzini, kvm@vger.kernel.org
Subject: Re: [PATCH] KVM: VMX: shadow VM_(ENTRY|EXIT)_CONTROLS vmcs field
Message-ID: <20131209065550.GB16543@minantech.com>
References: <20131125133713.GB959@redhat.com> <529624BE.9090404@redhat.com> <52A46D5B.8040100@web.de>
In-Reply-To: <52A46D5B.8040100@web.de>
X-Mailing-List: kvm@vger.kernel.org

On Sun, Dec 08, 2013 at 02:00:11PM +0100, Jan Kiszka wrote:
> On 2013-11-27 17:58, Paolo Bonzini wrote:
> > On 25/11/2013 14:37, Gleb Natapov wrote:
> >> The VM_(ENTRY|EXIT)_CONTROLS vmcs fields are read/written on each guest
> >> entry, but most of the time this can be avoided since the values do not
> >> change. Keep a copy of the fields in memory to avoid unnecessary reads
> >> from the vmcs.
> >>
> >> Signed-off-by: Gleb Natapov
> >> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >> index b2fe1c2..29c1c7f 100644
> >> --- a/arch/x86/kvm/vmx.c
> >> +++ b/arch/x86/kvm/vmx.c
> >> @@ -418,6 +418,8 @@ struct vcpu_vmx {
> >> u64 msr_host_kernel_gs_base;
> >> u64 msr_guest_kernel_gs_base;
> >> #endif
> >> + u32 vm_entry_controls_shadow;
> >> + u32 vm_exit_controls_shadow;
> >> /*
> >> * loaded_vmcs points to the VMCS currently used in this vcpu. For a
> >> * non-nested (L1) guest, it always points to vmcs01. For a nested
> >> @@ -1326,6 +1328,62 @@ static void vmcs_set_bits(unsigned long field, u32 mask)
> >> vmcs_writel(field, vmcs_readl(field) | mask);
> >> }
> >>
> >> +static inline void vm_entry_controls_init(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vmcs_write32(VM_ENTRY_CONTROLS, val);
> >> + vmx->vm_entry_controls_shadow = val;
> >> +}
> >> +
> >> +static inline void vm_entry_controls_set(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + if (vmx->vm_entry_controls_shadow != val)
> >> + vm_entry_controls_init(vmx, val);
> >> +}
> >> +
> >> +static inline u32 vm_entry_controls_get(struct vcpu_vmx *vmx)
> >> +{
> >> + return vmx->vm_entry_controls_shadow;
> >> +}
> >> +
> >> +
> >> +static inline void vm_entry_controls_setbit(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vm_entry_controls_set(vmx, vm_entry_controls_get(vmx) | val);
> >> +}
> >> +
> >> +static inline void vm_entry_controls_clearbit(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vm_entry_controls_set(vmx, vm_entry_controls_get(vmx) & ~val);
> >> +}
> >> +
> >> +static inline void vm_exit_controls_init(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vmcs_write32(VM_EXIT_CONTROLS, val);
> >> + vmx->vm_exit_controls_shadow = val;
> >> +}
> >> +
> >> +static inline void vm_exit_controls_set(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + if (vmx->vm_exit_controls_shadow != val)
> >> + vm_exit_controls_init(vmx, val);
> >> +}
> >> +
> >> +static inline u32 vm_exit_controls_get(struct vcpu_vmx *vmx)
> >> +{
> >> + return vmx->vm_exit_controls_shadow;
> >> +}
> >> +
> >> +
> >> +static inline void vm_exit_controls_setbit(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vm_exit_controls_set(vmx, vm_exit_controls_get(vmx) | val);
> >> +}
> >> +
> >> +static inline void vm_exit_controls_clearbit(struct vcpu_vmx *vmx, u32 val)
> >> +{
> >> + vm_exit_controls_set(vmx, vm_exit_controls_get(vmx) & ~val);
> >> +}
> >> +
> >> static void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
> >> {
> >> vmx->segment_cache.bitmask = 0;
> >> @@ -1410,11 +1468,11 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
> >> vmcs_write32(EXCEPTION_BITMAP, eb);
> >> }
> >>
> >> -static void clear_atomic_switch_msr_special(unsigned long entry,
> >> - unsigned long exit)
> >> +static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
> >> + unsigned long entry, unsigned long exit)
> >> {
> >> - vmcs_clear_bits(VM_ENTRY_CONTROLS, entry);
> >> - vmcs_clear_bits(VM_EXIT_CONTROLS, exit);
> >> + vm_entry_controls_clearbit(vmx, entry);
> >> + vm_exit_controls_clearbit(vmx, exit);
> >> }
> >>
> >> static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
> >> @@ -1425,14 +1483,15 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
> >> switch (msr) {
> >> case MSR_EFER:
> >> if (cpu_has_load_ia32_efer) {
> >> - clear_atomic_switch_msr_special(VM_ENTRY_LOAD_IA32_EFER,
> >> + clear_atomic_switch_msr_special(vmx,
> >> + VM_ENTRY_LOAD_IA32_EFER,
> >> VM_EXIT_LOAD_IA32_EFER);
> >> return;
> >> }
> >> break;
> >> case MSR_CORE_PERF_GLOBAL_CTRL:
> >> if (cpu_has_load_perf_global_ctrl) {
> >> - clear_atomic_switch_msr_special(
> >> + clear_atomic_switch_msr_special(vmx,
> >> VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL,
> >> VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
> >> return;
> >> @@ -1453,14 +1512,15 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
> >> vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
> >> }
> >>
> >> -static void add_atomic_switch_msr_special(unsigned long entry,
> >> - unsigned long exit, unsigned long guest_val_vmcs,
> >> - unsigned long host_val_vmcs, u64 guest_val, u64 host_val)
> >> +static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
> >> + unsigned long entry, unsigned long exit,
> >> + unsigned long guest_val_vmcs, unsigned long host_val_vmcs,
> >> + u64 guest_val, u64 host_val)
> >> {
> >> vmcs_write64(guest_val_vmcs, guest_val);
> >> vmcs_write64(host_val_vmcs, host_val);
> >> - vmcs_set_bits(VM_ENTRY_CONTROLS, entry);
> >> - vmcs_set_bits(VM_EXIT_CONTROLS, exit);
> >> + vm_entry_controls_setbit(vmx, entry);
> >> + vm_exit_controls_setbit(vmx, exit);
> >> }
> >>
> >> static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> >> @@ -1472,7 +1532,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> >> switch (msr) {
> >> case MSR_EFER:
> >> if (cpu_has_load_ia32_efer) {
> >> - add_atomic_switch_msr_special(VM_ENTRY_LOAD_IA32_EFER,
> >> + add_atomic_switch_msr_special(vmx,
> >> + VM_ENTRY_LOAD_IA32_EFER,
> >> VM_EXIT_LOAD_IA32_EFER,
> >> GUEST_IA32_EFER,
> >> HOST_IA32_EFER,
> >> @@ -1482,7 +1543,7 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> >> break;
> >> case MSR_CORE_PERF_GLOBAL_CTRL:
> >> if (cpu_has_load_perf_global_ctrl) {
> >> - add_atomic_switch_msr_special(
> >> + add_atomic_switch_msr_special(vmx,
> >> VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL,
> >> VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL,
> >> GUEST_IA32_PERF_GLOBAL_CTRL,
> >> @@ -3182,14 +3243,10 @@ static void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
> >> vmx_load_host_state(to_vmx(vcpu));
> >> vcpu->arch.efer = efer;
> >> if (efer & EFER_LMA) {
> >> - vmcs_write32(VM_ENTRY_CONTROLS,
> >> - vmcs_read32(VM_ENTRY_CONTROLS) |
> >> - VM_ENTRY_IA32E_MODE);
> >> + vm_entry_controls_setbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
> >> msr->data = efer;
> >> } else {
> >> - vmcs_write32(VM_ENTRY_CONTROLS,
> >> - vmcs_read32(VM_ENTRY_CONTROLS) &
> >> - ~VM_ENTRY_IA32E_MODE);
> >> + vm_entry_controls_clearbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
> >>
> >> msr->data = efer & ~EFER_LME;
> >> }
> >> @@ -3217,9 +3274,7 @@ static void enter_lmode(struct kvm_vcpu *vcpu)
> >>
> >> static void exit_lmode(struct kvm_vcpu *vcpu)
> >> {
> >> - vmcs_write32(VM_ENTRY_CONTROLS,
> >> - vmcs_read32(VM_ENTRY_CONTROLS)
> >> - & ~VM_ENTRY_IA32E_MODE);
> >> + vm_entry_controls_clearbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
> >> vmx_set_efer(vcpu, vcpu->arch.efer & ~EFER_LMA);
> >> }
> >>
> >> @@ -4346,10 +4401,11 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
> >> ++vmx->nmsrs;
> >> }
> >>
> >> - vmcs_write32(VM_EXIT_CONTROLS, vmcs_config.vmexit_ctrl);
> >> +
> >> + vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
> >>
> >> /* 22.2.1, 20.8.1 */
> >> - vmcs_write32(VM_ENTRY_CONTROLS, vmcs_config.vmentry_ctrl);
> >> + vm_entry_controls_init(vmx, vmcs_config.vmentry_ctrl);
> >>
> >> vmcs_writel(CR0_GUEST_HOST_MASK, ~0UL);
> >> set_cr4_guest_host_mask(vmx);
> >> @@ -7759,12 +7815,12 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
> >> exit_control = vmcs_config.vmexit_ctrl;
> >> if (vmcs12->pin_based_vm_exec_control & PIN_BASED_VMX_PREEMPTION_TIMER)
> >> exit_control |= VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
> >> - vmcs_write32(VM_EXIT_CONTROLS, exit_control);
> >> + vm_exit_controls_init(vmx, exit_control);
> >>
> >> /* vmcs12's VM_ENTRY_LOAD_IA32_EFER and VM_ENTRY_IA32E_MODE are
> >> * emulated by vmx_set_efer(), below.
> >> */
> >> - vmcs_write32(VM_ENTRY_CONTROLS,
> >> + vm_entry_controls_init(vmx,
> >> (vmcs12->vm_entry_controls & ~VM_ENTRY_LOAD_IA32_EFER &
> >> ~VM_ENTRY_IA32E_MODE) |
> >> (vmcs_config.vmentry_ctrl & ~VM_ENTRY_IA32E_MODE));
> >> @@ -8186,7 +8242,7 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
> >>
> >> vmcs12->vm_entry_controls =
> >> (vmcs12->vm_entry_controls & ~VM_ENTRY_IA32E_MODE) |
> >> - (vmcs_read32(VM_ENTRY_CONTROLS) & VM_ENTRY_IA32E_MODE);
> >> + (vm_entry_controls_get(to_vmx(vcpu)) & VM_ENTRY_IA32E_MODE);
> >>
> >> /* TODO: These cannot have changed unless we have MSR bitmaps and
> >> * the relevant bit asks not to trap the change */
> >> @@ -8390,6 +8446,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
> >> vcpu->cpu = cpu;
> >> put_cpu();
> >>
> >> + vm_entry_controls_init(vmx, vmcs_read32(VM_ENTRY_CONTROLS));
> >> vmx_segment_cache_clear(vmx);
> >>
> >> /* if no vmcs02 cache requested, remove the one we used */
> >> --
> >> Gleb.
> >
> > Applied to kvm/queue, thanks.
>
> Seems to have or cause an issue with nVMX. The (still not merged) vmx
> unit test breaks here after applying this commit:
>
What about this patch?

---
Gleb.

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1d95384..f90320b 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8452,6 +8452,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
 put_cpu();

 vm_entry_controls_init(vmx, vmcs_read32(VM_ENTRY_CONTROLS));
+ vm_exit_controls_init(vmx, vmcs_read32(VM_EXIT_CONTROLS));
 vmx_segment_cache_clear(vmx);

 /* if no vmcs02 cache requested, remove the one we used */
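[Editor's note: the pattern discussed in this thread is a write-through shadow of a VMCS control field: reads come from an in-memory copy, writes reach the VMCS only when the value actually changes, and the shadow must be re-primed from the VMCS whenever the active VMCS is swapped (as nested_vmx_vmexit does when switching from vmcs02 back to vmcs01), otherwise the cache goes stale. Gleb's follow-up hunk applies that re-priming to VM_EXIT_CONTROLS as well, since prepare_vmcs02() initializes both shadows against vmcs02. Below is a minimal, self-contained sketch of the pattern; the struct name, the fake_vmcs array standing in for vmcs_read32()/vmcs_write32(), and the field index are hypothetical illustrations, not the KVM code itself.]

/* ctrl_shadow_sketch.c - standalone illustration of the shadow-controls pattern */
#include <stdint.h>
#include <stdio.h>

#define VM_ENTRY_CONTROLS 0                      /* hypothetical field index */

static uint32_t fake_vmcs[4];                    /* stand-in for the active VMCS */

static uint32_t vmcs_read32(int field) { return fake_vmcs[field]; }
static void vmcs_write32(int field, uint32_t v)
{
	fake_vmcs[field] = v;
	printf("vmwrite field %d = %#x\n", field, v);  /* show when hardware is touched */
}

struct ctrl_shadow { uint32_t cache; };          /* analogous to vm_entry_controls_shadow */

/* Write the VMCS and (re)prime the shadow: used at setup and after a VMCS switch. */
static void ctrl_init(struct ctrl_shadow *s, uint32_t val)
{
	vmcs_write32(VM_ENTRY_CONTROLS, val);
	s->cache = val;
}

/* Write-through only when the value actually changes. */
static void ctrl_set(struct ctrl_shadow *s, uint32_t val)
{
	if (s->cache != val)
		ctrl_init(s, val);
}

static uint32_t ctrl_get(struct ctrl_shadow *s)               { return s->cache; }
static void ctrl_setbit(struct ctrl_shadow *s, uint32_t bit)   { ctrl_set(s, ctrl_get(s) | bit); }
static void ctrl_clearbit(struct ctrl_shadow *s, uint32_t bit) { ctrl_set(s, ctrl_get(s) & ~bit); }

int main(void)
{
	struct ctrl_shadow s;

	ctrl_init(&s, 0x11ff);      /* initial value: one vmwrite */
	ctrl_setbit(&s, 0x200);     /* value changes: one vmwrite */
	ctrl_setbit(&s, 0x200);     /* no change: no vmwrite */
	ctrl_clearbit(&s, 0x200);   /* value changes: one vmwrite */

	/*
	 * Simulate switching to a different VMCS (vmcs02 -> vmcs01): the value
	 * now in hardware need not match the cache, so the shadow must be
	 * re-primed from the newly active VMCS or later ctrl_set() calls may
	 * wrongly skip the write.
	 */
	fake_vmcs[VM_ENTRY_CONTROLS] = 0x11ff;
	ctrl_init(&s, vmcs_read32(VM_ENTRY_CONTROLS));
	return 0;
}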