
don't call adjust_vmx_controls() second time

Message ID 20090827154130.GR30093@redhat.com (mailing list archive)
State New, archived

Commit Message

Gleb Natapov Aug. 27, 2009, 3:41 p.m. UTC
Don't call adjust_vmx_controls() twice for the same control.
The second call restores options that were dropped earlier.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
--
			Gleb.
--
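
To make the failure mode concrete, here is a minimal standalone sketch (not
the kernel code) of the problem.  It assumes adjust_vmx_controls() keeps its
usual shape in vmx.c (OR the required and optional bits together, then mask
the result against what the capability MSR allows); the control names and
the "allowed" mask below are made up for illustration.

#include <stdio.h>

/* CTL_DROPPED_LATER stands in for a control that gets cleared between the
 * two calls, e.g. a CR8-exiting bit dropped once the TPR shadow is in use. */
#define CTL_CR3_EXITING   (1u << 0)	/* stand-in for CR3 load/store exiting */
#define CTL_DROPPED_LATER (1u << 1)

/* Pretend the CPU reports both controls as supported. */
static const unsigned int allowed = CTL_CR3_EXITING | CTL_DROPPED_LATER;

static int adjust_controls(unsigned int min, unsigned int opt,
			   unsigned int *result)
{
	unsigned int ctl = (min | opt) & allowed;

	if ((ctl & min) != min)
		return -1;	/* a required control is unsupported */
	*result = ctl;
	return 0;
}

int main(void)
{
	unsigned int min = CTL_CR3_EXITING | CTL_DROPPED_LATER;
	unsigned int exec_control;

	adjust_controls(min, 0, &exec_control);
	exec_control &= ~CTL_DROPPED_LATER;	/* dropped after the first call */

	/* Old flow: relax min and adjust again.  The dropped bit comes back,
	 * because the recomputation starts from min | opt, not from the
	 * value we already trimmed. */
	min &= ~CTL_CR3_EXITING;
	adjust_controls(min, 0, &exec_control);
	printf("old flow:     %#x (dropped bit restored)\n", exec_control);

	/* Patched flow: clear the unwanted bits in the already-adjusted
	 * value, which is what the patch does for the CR3/invlpg bits. */
	adjust_controls(CTL_CR3_EXITING | CTL_DROPPED_LATER, 0, &exec_control);
	exec_control &= ~CTL_DROPPED_LATER;
	exec_control &= ~CTL_CR3_EXITING;
	printf("patched flow: %#x (stays dropped)\n", exec_control);
	return 0;
}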

Comments

Avi Kivity Aug. 27, 2009, 4:21 p.m. UTC | #1
On 08/27/2009 06:41 PM, Gleb Natapov wrote:
> Don't call adjust_vmx_controls() twice for the same control.
> The second call restores options that were dropped earlier.
>    

Applied, thanks.  Andrew, if you rerun your benchmark atop kvm.git 
'next' branch, I believe you will see dramatically better results.
Andrew Theurer Aug. 27, 2009, 8:42 p.m. UTC | #2
On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
> > Don't call adjust_vmx_controls() twice for the same control.
> > The second call restores options that were dropped earlier.
> >    
> 
> Applied, thanks.  Andrew, if you rerun your benchmark atop kvm.git 
> 'next' branch, I believe you will see dramatically better results.

Yes!  CPU is much lower:
user  nice  system   irq softirq  guest   idle  iowait
5.81  0.00    9.48  0.08    1.04  21.32  57.86    4.41

previous CPU:
user  nice  system   irq  softirq guest   idle  iowait
5.67  0.00   11.64  0.09     1.05 31.90  46.06    3.59

new oprofile:

> samples  %        app name                 symbol name
> 885444   53.2905  kvm-intel.ko             vmx_vcpu_run
> 38090     2.2924  qemu-system-x86_64       cpu_physical_memory_rw
> 34764     2.0923  qemu-system-x86_64       phys_page_find_alloc
> 25278     1.5214  vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
> 18205     1.0957  libc-2.5.so              memcpy
> 14730     0.8865  qemu-system-x86_64       qemu_get_ram_ptr
> 14189     0.8540  kvm.ko                   kvm_arch_vcpu_ioctl_run
> 12380     0.7451  vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
> 12278     0.7390  vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
> 10871     0.6543  qemu-system-x86_64       virtqueue_get_head
> 10814     0.6508  vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
> 9080      0.5465  vmlinux-2.6.31-rc5-autokern1 fget_light
> 9015      0.5426  vmlinux-2.6.31-rc5-autokern1 schedule
> 8557      0.5150  qemu-system-x86_64       virtqueue_avail_bytes
> 7805      0.4697  vmlinux-2.6.31-rc5-autokern1 do_select
> 7173      0.4317  qemu-system-x86_64       lduw_phys
> 7019      0.4224  qemu-system-x86_64       main_loop_wait
> 6979      0.4200  vmlinux-2.6.31-rc5-autokern1 audit_syscall_exit
> 5571      0.3353  vmlinux-2.6.31-rc5-autokern1 kfree
> 5170      0.3112  vmlinux-2.6.31-rc5-autokern1 audit_syscall_entry
> 5086      0.3061  vmlinux-2.6.31-rc5-autokern1 fput
> 4631      0.2787  vmlinux-2.6.31-rc5-autokern1 mwait_idle
> 4584      0.2759  kvm.ko                   kvm_load_guest_fpu
> 4491      0.2703  vmlinux-2.6.31-rc5-autokern1 system_call
> 4461      0.2685  vmlinux-2.6.31-rc5-autokern1 __switch_to
> 4431      0.2667  kvm.ko                   kvm_put_guest_fpu
> 4371      0.2631  vmlinux-2.6.31-rc5-autokern1 __down_read
> 4290      0.2582  qemu-system-x86_64       kvm_run
> 4218      0.2539  vmlinux-2.6.31-rc5-autokern1 getnstimeofday
> 4129      0.2485  libpthread-2.5.so        pthread_mutex_lock
> 4122      0.2481  qemu-system-x86_64       ldl_phys
> 4100      0.2468  vmlinux-2.6.31-rc5-autokern1 do_vfs_ioctl
> 3811      0.2294  kvm.ko                   find_highest_vector
> 3593      0.2162  vmlinux-2.6.31-rc5-autokern1 unroll_tree_refs
> 3560      0.2143  vmlinux-2.6.31-rc5-autokern1 try_to_wake_up
> 3550      0.2137  vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
> 3506      0.2110  kvm-intel.ko             vmcs_writel
> 3487      0.2099  vmlinux-2.6.31-rc5-autokern1 task_rq_lock
> 3434      0.2067  vmlinux-2.6.31-rc5-autokern1 __up_read
> 3368      0.2027  librt-2.5.so             clock_gettime
> 3339      0.2010  qemu-system-x86_64       virtqueue_num_heads
> 

Thanks very much for the fix!

-Andrew

Avi Kivity Aug. 30, 2009, 8:59 a.m. UTC | #3
On 08/27/2009 11:42 PM, Andrew Theurer wrote:
> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>    
>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>      
>>> Don't call adjust_vmx_controls() twice for the same control.
>>> The second call restores options that were dropped earlier.
>>>
>>>        
>> Applied, thanks.  Andrew, if you rerun your benchmark atop kvm.git
>> 'next' branch, I believe you will see dramatically better results.
>>      
> Yes!  CPU is much lower:
> user  nice  system   irq softirq  guest   idle  iowait
> 5.81  0.00    9.48  0.08    1.04  21.32  57.86    4.41
>
> previous CPU:
> user  nice  system   irq  softirq guest   idle  iowait
> 5.67  0.00   11.64  0.09     1.05 31.90  46.06    3.59
>
>    

How does it compare to the other hypervisor now?

> new oprofile:
>
>    
>> samples  %        app name                 symbol name
>> 885444   53.2905  kvm-intel.ko             vmx_vcpu_run
>>      

guest mode = good

>> 38090     2.2924  qemu-system-x86_64       cpu_physical_memory_rw
>> 34764     2.0923  qemu-system-x86_64       phys_page_find_alloc
>> 14730     0.8865  qemu-system-x86_64       qemu_get_ram_ptr
>> 10814     0.6508  vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
>> 10871     0.6543  qemu-system-x86_64       virtqueue_get_head
>> 8557      0.5150  qemu-system-x86_64       virtqueue_avail_bytes
>> 7173      0.4317  qemu-system-x86_64       lduw_phys
>> 4122      0.2481  qemu-system-x86_64       ldl_phys
>> 3339      0.2010  qemu-system-x86_64       virtqueue_num_heads
>> 4129      0.2485  libpthread-2.5.so        pthread_mutex_lock
>>
>>      

virtio and related qemu overhead: 8.2%.
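(That is roughly the sum of the lines quoted above: 2.29 + 2.09 + 0.89 + 0.65 + 0.65 + 0.52 + 0.43 + 0.25 + 0.20 + 0.25 ≈ 8.2.)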

>> 25278     1.5214  vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>> 12278     0.7390  vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>      

This will be reduced if we move virtio to kernel context.

>> 12380     0.7451  vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
>> 3550      0.2137  vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
>>      

A lot less than before, but still annoying.

>> 4631      0.2787  vmlinux-2.6.31-rc5-autokern1 mwait_idle
>>      

idle=halt may improve this, mwait is slow.
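
For reference, idle=halt is a host kernel boot parameter (it forces the idle
loop to use hlt rather than mwait), so it goes on the kernel command line.
Assuming a grub-legacy setup, with the kernel image path and root device
below purely illustrative, the entry would look something like:

kernel /boot/vmlinuz-2.6.31-rc5 ro root=/dev/sda1 idle=halt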
Andrew Theurer Aug. 31, 2009, 1:05 p.m. UTC | #4
Avi Kivity wrote:
> On 08/27/2009 11:42 PM, Andrew Theurer wrote:
>> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>>   
>>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>>     
>>>> Don't call adjust_vmx_controls() twice for the same control.
>>>> The second call restores options that were dropped earlier.
>>>>
>>>>        
>>> Applied, thanks.  Andrew, if you rerun your benchmark atop kvm.git
>>> 'next' branch, I believe you will see dramatically better results.
>>>      
>> Yes!  CPU is much lower:
>> user  nice  system   irq softirq  guest   idle  iowait
>> 5.81  0.00    9.48  0.08    1.04  21.32  57.86    4.41
>>
>> previous CPU:
>> user  nice  system   irq  softirq guest   idle  iowait
>> 5.67  0.00   11.64  0.09     1.05 31.90  46.06    3.59
>>
>>    
> 
> How does it compare to the other hypervisor now?

My original results for the other hypervisor were a little inaccurate.  They 
mistakenly used 2 vcpu guests. New runs with 1 vcpu guests (as used in 
kvm) have slightly lower CPU utilization.  Anyway, here's the breakdown:

                               CPU %    % more CPU than other hypervisor
kvm-master/qemu-kvm-87:        50.15     78%
kvm-next/qemu-kvm-87:          37.73     34%

> 
>> new oprofile:
>>
>>   
>>> samples  %        app name                 symbol name
>>> 885444   53.2905  kvm-intel.ko             vmx_vcpu_run
>>>      
> 
> guest mode = good
> 
>>> 38090     2.2924  qemu-system-x86_64       cpu_physical_memory_rw
>>> 34764     2.0923  qemu-system-x86_64       phys_page_find_alloc
>>> 14730     0.8865  qemu-system-x86_64       qemu_get_ram_ptr
>>> 10814     0.6508  vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
>>> 10871     0.6543  qemu-system-x86_64       virtqueue_get_head
>>> 8557      0.5150  qemu-system-x86_64       virtqueue_avail_bytes
>>> 7173      0.4317  qemu-system-x86_64       lduw_phys
>>> 4122      0.2481  qemu-system-x86_64       ldl_phys
>>> 3339      0.2010  qemu-system-x86_64       virtqueue_num_heads
>>> 4129      0.2485  libpthread-2.5.so        pthread_mutex_lock
>>>
>>>      
> 
> virtio and related qemu overhead: 8.2%.
> 
>>> 25278     1.5214  vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>>> 12278     0.7390  vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>>      
> 
> This will be reduced if we move virtio to kernel context.

Are there plans to move that into the kernel for disk, too?

>>> 12380     0.7451  vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
>>> 3550      0.2137  vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
>>>      
> 
> A lot less than before, but still annoying.
> 
>>> 4631      0.2787  vmlinux-2.6.31-rc5-autokern1 mwait_idle

>>>      
> 
> idle=halt may improve this, mwait is slow.

I can try idle=halt on the host.  I actually assumed it would be using 
that, but I'll check.

Thanks,

-Andrew


Avi Kivity Aug. 31, 2009, 1:52 p.m. UTC | #5
On 08/31/2009 04:05 PM, Andrew Theurer wrote:
>> How does it compare to the other hypervisor now?
>
>
> My original results for the other hypervisor were a little inaccurate.  
> They mistakenly used 2 vcpu guests. New runs with 1 vcpu guests (as 
> used in kvm) have slightly lower CPU utilization.  Anyway, here's the 
> breakdown:
>
>                               CPU %    % more CPU than other hypervisor
> kvm-master/qemu-kvm-87:        50.15     78%
> kvm-next/qemu-kvm-87:          37.73     34%
>

Much better, though still a lot of work to do.

>>>> 25278     1.5214  vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>>>> 12278     0.7390  vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>
>> This will be reduced if we move virtio to kernel context.
>
> Are there plans to move that into the kernel for disk, too?

We don't know if disk or net contributed to this.  If it turns out that 
vhost-blk makes sense, we'll do it.

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6b57eed..78101dd 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1262,12 +1262,9 @@  static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 	if (_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_EPT) {
 		/* CR3 accesses and invlpg don't need to cause VM Exits when EPT
 		   enabled */
-		min &= ~(CPU_BASED_CR3_LOAD_EXITING |
-			 CPU_BASED_CR3_STORE_EXITING |
-			 CPU_BASED_INVLPG_EXITING);
-		if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PROCBASED_CTLS,
-					&_cpu_based_exec_control) < 0)
-			return -EIO;
+		_cpu_based_exec_control &= ~(CPU_BASED_CR3_LOAD_EXITING |
+					     CPU_BASED_CR3_STORE_EXITING |
+					     CPU_BASED_INVLPG_EXITING);
 		rdmsr(MSR_IA32_VMX_EPT_VPID_CAP,
 		      vmx_capability.ept, vmx_capability.vpid);
 	}