| Message ID | Y3abPTOAxbLOpnVN@p183 (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | vmx: use mov instead of lea in __vmx_vcpu_run() |
"KVM: VMX:" for the shortlog please.

On Thu, Nov 17, 2022, Alexey Dobriyan wrote:
> "mov rsi, rsp" is equivalent to "lea rsi, [rsp]" but 1 byte shorter.

Eww, Intel syntax ;-)

> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> ---
>
>  arch/x86/kvm/vmx/vmenter.S | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -72,7 +72,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
>  	mov %_ASM_ARG3B, %bl
>
> -	lea (%_ASM_SP), %_ASM_ARG2
> +	mov %_ASM_SP, %_ASM_ARG2

I don't have a strong preference. It's probably worth converting, e.g. move
elimination on modern CPUs might shave a whole uop. I'm pretty sure LEA is a
holdover from when this code pre-calculated RSP before a series of pushes.

>  	call vmx_update_host_rsp
>
>  	ALTERNATIVE "jmp .Lspec_ctrl_done", "", X86_FEATURE_MSR_SPEC_CTRL
"mov rsi, rsp" is equivalent to "lea rsi, [rsp]" but 1 byte shorter. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> --- arch/x86/kvm/vmx/vmenter.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)