Message ID | 1649244302-6777-1-git-send-email-lirongqing@baidu.com (mailing list archive)
---|---
State | New, archived |
Series | [v2] KVM: VMX: optimize pi_wakeup_handler
On Wed, Apr 06, 2022, Li RongQing wrote:
> pi_wakeup_handler() is used to wake up sleeping vCPUs when a posted
> interrupt arrives. It iterates the per-CPU wakeup list with
> list_for_each_entry(), passing per_cpu() directly as the list head,
> which causes per_cpu() to be evaluated at least twice whenever there
> is a sleeping vCPU.
>
> Optimize pi_wakeup_handler() by reading the per-CPU list head once,
> and do the same for the per-CPU spinlock.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---

Reviewed-by: Sean Christopherson <seanjc@google.com>
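The "at least twice" claim in the commit message follows from how
list_for_each_entry() is defined: the head argument appears both in the
loop initializer and in the termination test, so an expression passed as
head is re-evaluated on every pass. Roughly (simplified from
include/linux/list.h; the exact definition varies by kernel version):

    for (pos = list_first_entry(head, typeof(*pos), member);
         !list_entry_is_head(pos, head, member);
         pos = list_next_entry(pos, member))

With &per_cpu(wakeup_vcpus_on_cpu, cpu) as head, the per-CPU address is
recomputed each time the loop condition is checked, which is the
redundancy the patch removes by caching the pointer up front.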
> >
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
>
> Reviewed-by: Sean Christopherson <seanjc@google.com>

Ping

Thanks

-Li
On 4/6/22 13:25, Li RongQing wrote:
> pi_wakeup_handler() is used to wake up sleeping vCPUs when a posted
> interrupt arrives. It iterates the per-CPU wakeup list with
> list_for_each_entry(), passing per_cpu() directly as the list head,
> which causes per_cpu() to be evaluated at least twice whenever there
> is a sleeping vCPU.
>
> Optimize pi_wakeup_handler() by reading the per-CPU list head once,
> and do the same for the per-CPU spinlock.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
> diff v1: move reading the per-cpu variable out of spinlock protection
>
>  arch/x86/kvm/vmx/posted_intr.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
> index 5fdabf3..c5c1d31 100644
> --- a/arch/x86/kvm/vmx/posted_intr.c
> +++ b/arch/x86/kvm/vmx/posted_intr.c
> @@ -215,16 +215,17 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
>  void pi_wakeup_handler(void)
>  {
>  	int cpu = smp_processor_id();
> +	struct list_head *wakeup_list = &per_cpu(wakeup_vcpus_on_cpu, cpu);
> +	raw_spinlock_t *spinlock = &per_cpu(wakeup_vcpus_on_cpu_lock, cpu);
>  	struct vcpu_vmx *vmx;
>
> -	raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
> -	list_for_each_entry(vmx, &per_cpu(wakeup_vcpus_on_cpu, cpu),
> -			    pi_wakeup_list) {
> +	raw_spin_lock(spinlock);
> +	list_for_each_entry(vmx, wakeup_list, pi_wakeup_list) {
>
>  		if (pi_test_on(&vmx->pi_desc))
>  			kvm_vcpu_wake_up(&vmx->vcpu);
>  	}
> -	raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
> +	raw_spin_unlock(spinlock);
>  }
>
>  void __init pi_init_cpu(int cpu)

Queued, thanks.

Paolo
pi_wakeup_handler() is used to wake up sleeping vCPUs when a posted
interrupt arrives. It iterates the per-CPU wakeup list with
list_for_each_entry(), passing per_cpu() directly as the list head,
which causes per_cpu() to be evaluated at least twice whenever there
is a sleeping vCPU.

Optimize pi_wakeup_handler() by reading the per-CPU list head once,
and do the same for the per-CPU spinlock.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
diff v1: move reading the per-cpu variable out of spinlock protection

 arch/x86/kvm/vmx/posted_intr.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 5fdabf3..c5c1d31 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -215,16 +215,17 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 void pi_wakeup_handler(void)
 {
 	int cpu = smp_processor_id();
+	struct list_head *wakeup_list = &per_cpu(wakeup_vcpus_on_cpu, cpu);
+	raw_spinlock_t *spinlock = &per_cpu(wakeup_vcpus_on_cpu_lock, cpu);
 	struct vcpu_vmx *vmx;

-	raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
-	list_for_each_entry(vmx, &per_cpu(wakeup_vcpus_on_cpu, cpu),
-			    pi_wakeup_list) {
+	raw_spin_lock(spinlock);
+	list_for_each_entry(vmx, wakeup_list, pi_wakeup_list) {

 		if (pi_test_on(&vmx->pi_desc))
 			kvm_vcpu_wake_up(&vmx->vcpu);
 	}
-	raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
+	raw_spin_unlock(spinlock);
 }

 void __init pi_init_cpu(int cpu)
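Outside the kernel tree, the change amounts to resolving a repeated
per-CPU lookup once into a local pointer and reusing it. The snippet
below is a minimal userspace sketch of that pattern, not code from this
patch: the wakeup_queue struct, the per_cpu_lookup() helper, and the
NR_CPUS constant are illustrative stand-ins for the kernel's per-CPU
machinery.

#include <stdio.h>

#define NR_CPUS 4

/* Illustrative stand-in for a per-CPU wakeup list. */
struct wakeup_queue {
        int pending;
};

static struct wakeup_queue queues[NR_CPUS];

/* Stand-in for the kernel's per_cpu() lookup: resolves one CPU's slot. */
static struct wakeup_queue *per_cpu_lookup(int cpu)
{
        return &queues[cpu];
}

static void wakeup_handler(int cpu)
{
        /*
         * Resolve the per-CPU slot once, up front, rather than calling
         * per_cpu_lookup() again for every access; the same idea as
         * caching &per_cpu(...) in wakeup_list/spinlock in the patch.
         */
        struct wakeup_queue *q = per_cpu_lookup(cpu);

        if (q->pending) {
                printf("cpu %d: waking a pending vCPU\n", cpu);
                q->pending = 0;
        }
}

int main(void)
{
        queues[2].pending = 1;

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                wakeup_handler(cpu);

        return 0;
}

The payoff is small but concrete: per_cpu() recomputes a per-CPU address
on every evaluation, and list_for_each_entry() re-evaluates its head
argument on each iteration, so caching both pointers up front removes
redundant address arithmetic from the code that runs under the raw
spinlock.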