| Message ID | Y3e7UW0WNV2AZmsZ@p183 (mailing list archive) |
|---|---|
| State | New, archived |
| Series | kvm, vmx: don't use "unsigned long" in vmx_vcpu_enter_exit() |
Nit (because I really suck at case-insensitive searches), please capitalize "KVM: VMX:" in the shortlog.

On Fri, Nov 18, 2022, Alexey Dobriyan wrote:
> __vmx_vcpu_run_flags() returns "unsigned int" and uses only 2 bits of it
> so using "unsigned long" is very much pointless.

And __vmx_vcpu_run() and vmx_spec_ctrl_restore_host() take an "unsigned int"
as well, i.e. actually relying on an "unsigned long" value won't actually work.

On a related topic, this code in __vmx_vcpu_run() is unnecessarily fragile as
it relies on VMX_RUN_VMRESUME being in bits 0-7.

	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
	mov %_ASM_ARG3, %bl

	...

	/* Check if vmlaunch or vmresume is needed */
	testb $VMX_RUN_VMRESUME, %bl

The "byte" logic is another holdover, from when "flags" was just "launched"
and was passed in as a boolean. I'll send a proper patch to do:

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 0b5db4de4d09..5bd39f63497d 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -69,8 +69,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	 */
 	push %_ASM_ARG2

-	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
-	mov %_ASM_ARG3B, %bl
+	/* Copy @flags to EBX, _ASM_ARG3 is volatile. */
+	mov %_ASM_ARG3L, %ebx

 	lea (%_ASM_SP), %_ASM_ARG2
 	call vmx_update_host_rsp
@@ -106,7 +106,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov (%_ASM_SP), %_ASM_AX

 	/* Check if vmlaunch or vmresume is needed */
-	testb $VMX_RUN_VMRESUME, %bl
+	test $VMX_RUN_VMRESUME, %ebx

 	/* Load guest registers. Don't clobber flags. */
 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
@@ -128,7 +128,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX

-	/* Check EFLAGS.ZF from 'testb' above */
+	/* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
 	jz .Lvmlaunch

 	/*

> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> ---

Reviewed-by: Sean Christopherson <seanjc@google.com>
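To make the byte-test fragility concrete, here is a minimal standalone C sketch (not KVM code: the two real VMX_RUN_* values mirror arch/x86/kvm/vmx/run_flags.h as I understand it, and the "future" flag is purely hypothetical). A check that only examines the low byte of the flags word, as `testb ..., %bl` does, cannot see a flag that lands above bit 7, while a full-width test can:

```c
#include <stdio.h>

/* Flag values as in arch/x86/kvm/vmx/run_flags.h (only bits 0 and 1 are used today). */
#define VMX_RUN_VMRESUME	(1u << 0)
#define VMX_RUN_SAVE_SPEC_CTRL	(1u << 1)
/* Hypothetical future flag above bit 7, purely for illustration. */
#define VMX_RUN_FAKE_FUTURE	(1u << 8)

/* Models "testb $flag, %bl": only the low 8 bits of @flags are visible. */
static unsigned int test_byte(unsigned int flags, unsigned int flag)
{
	return (flags & 0xff) & flag;
}

/* Models "test $flag, %ebx": all 32 bits of @flags are visible. */
static unsigned int test_dword(unsigned int flags, unsigned int flag)
{
	return flags & flag;
}

int main(void)
{
	unsigned int flags = VMX_RUN_VMRESUME | VMX_RUN_FAKE_FUTURE;

	/* Bit 0 is seen either way... */
	printf("byte test,  VMRESUME:    %d\n", !!test_byte(flags, VMX_RUN_VMRESUME));
	/* ...but only the full-width test sees a flag above bit 7. */
	printf("byte test,  future flag: %d\n", !!test_byte(flags, VMX_RUN_FAKE_FUTURE));
	printf("dword test, future flag: %d\n", !!test_dword(flags, VMX_RUN_FAKE_FUTURE));
	return 0;
}
```

Whether a flag above bit 7 would be missed silently or simply fail to assemble, the byte-wide move and test bake in an assumption about where the flags live; the full-width versions in the follow-up patch above do not.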
On Fri, 18 Nov 2022 20:05:21 +0300, Alexey Dobriyan wrote:
> __vmx_vcpu_run_flags() returns "unsigned int" and uses only 2 bits of it
> so using "unsigned long" is very much pointless.
>

Applied to kvm-x86 vmx, thanks!

[1/1] kvm, vmx: don't use "unsigned long" in vmx_vcpu_enter_exit()
      https://github.com/kvm-x86/linux/commit/59fc307f5922

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7067,7 +7067,7 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)

 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_vmx *vmx,
-					unsigned long flags)
+					unsigned int flags)
 {
 	guest_state_enter_irqoff();
__vmx_vcpu_run_flags() returns "unsigned int" and uses only 2 bits of it
so using "unsigned long" is very much pointless.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
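For readers following along outside the kernel tree, here is a small standalone sketch of the shape of the call chain after this change (the struct and its conditions are simplified stand-ins, not the real vcpu_vmx fields): the flags builder can only ever set bits 0-1 of an unsigned int, and the consumers now take unsigned int all the way down, so the old "unsigned long" parameter merely widened the value for no benefit.

```c
#include <stdbool.h>
#include <stdio.h>

#define VMX_RUN_VMRESUME	(1u << 0)
#define VMX_RUN_SAVE_SPEC_CTRL	(1u << 1)

/* Stand-in for struct vcpu_vmx: just the two facts the flags depend on. */
struct fake_vmx {
	bool launched;			/* has VMLAUNCH already succeeded once? */
	bool spec_ctrl_passthrough;	/* does SPEC_CTRL need to be saved on exit? */
};

/* Models __vmx_vcpu_run_flags(): only bits 0-1 can ever be set. */
static unsigned int run_flags(const struct fake_vmx *vmx)
{
	unsigned int flags = 0;

	if (vmx->launched)
		flags |= VMX_RUN_VMRESUME;
	if (vmx->spec_ctrl_passthrough)
		flags |= VMX_RUN_SAVE_SPEC_CTRL;
	return flags;
}

/* Models vmx_vcpu_enter_exit()/__vmx_vcpu_run(): "unsigned int" end to end. */
static void enter_exit(unsigned int flags)
{
	printf("vmresume=%d save_spec_ctrl=%d\n",
	       !!(flags & VMX_RUN_VMRESUME),
	       !!(flags & VMX_RUN_SAVE_SPEC_CTRL));
}

int main(void)
{
	struct fake_vmx vmx = { .launched = true, .spec_ctrl_passthrough = false };

	enter_exit(run_flags(&vmx));
	return 0;
}
```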