| Message ID | 1620466310-8428-2-git-send-email-wanpengli@tencent.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/3] KVM: PPC: Book3S HV: exit halt polling on need_resched() as well |
On Sat, May 08, 2021, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@tencent.com>
>
> In case of undercommitted scenarios, a vCPU can get scheduled easily;
> kvm_vcpu_yield_to adds extra overhead, and we can observe a lot of races
> between vcpu->ready being true and the yield failing because p->state is
> TASK_RUNNING. Let's bail out in such scenarios by checking the length
> of the current CPU's runqueue.
>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
>  arch/x86/kvm/x86.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5bd550e..c0244a6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8358,6 +8358,9 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
>  	struct kvm_vcpu *target = NULL;
>  	struct kvm_apic_map *map;
>
> +	if (single_task_running())
> +		goto no_yield;
> +

Hmm, could we push the result of kvm_sched_yield() down into the guest?
Currently the guest bails after the first attempt, which is perfect for this
scenario, but it seems like it would make sense to keep trying to yield if
there are multiple preempted vCPUs and the "problem" was with the target.
E.g.

	/*
	 * Make sure other vCPUs get a chance to run if they need to.  Yield at
	 * most once, and stop trying to yield if the VMM says yielding isn't
	 * going to happen.
	 */
	for_each_cpu(cpu, mask) {
		if (vcpu_is_preempted(cpu)) {
			r = kvm_hypercall1(KVM_HC_SCHED_YIELD,
					   per_cpu(x86_cpu_to_apicid, cpu));
			if (r != -EBUSY)
				break;
		}
	}

Unrelated to this patch, but it's the first time I've really looked at the
guest side of directed yield...

Wouldn't it also make sense for the guest side to hook .send_call_func_single_ipi?

> 	vcpu->stat.directed_yield_attempted++;

Shouldn't directed_yield_attempted be incremented in this case?  It doesn't seem
fundamentally different than the case where the target was scheduled in between
the guest's check and the host's processing of the yield request.  In both
instances, the guest did indeed attempt to yield.

> 	rcu_read_lock();
> --
> 2.7.4
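For context on "the guest bails after the first attempt": the guest side of directed
yield hooks .send_call_func_ipi in arch/x86/kernel/kvm.c. A rough sketch of that hook
(paraphrased, not a verbatim copy of the kernel source):

	static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
	{
		int cpu;

		/* Send the real cross-call IPI first. */
		native_send_call_func_ipi(mask);

		/* Make sure other vCPUs get a chance to run if they need to. */
		for_each_cpu(cpu, mask) {
			if (vcpu_is_preempted(cpu)) {
				/* Yield at most once, to the first preempted vCPU found. */
				kvm_hypercall1(KVM_HC_SCHED_YIELD,
					       per_cpu(x86_cpu_to_apicid, cpu));
				break;
			}
		}
	}

The loop suggested above would replace the single "yield once and stop" attempt with
"keep trying until the host says yielding won't happen".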
On Wed, 12 May 2021 at 05:44, Sean Christopherson <seanjc@google.com> wrote:
>
> On Sat, May 08, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > In case of undercommitted scenarios, a vCPU can get scheduled easily;
> > kvm_vcpu_yield_to adds extra overhead, and we can observe a lot of races
> > between vcpu->ready being true and the yield failing because p->state is
> > TASK_RUNNING. Let's bail out in such scenarios by checking the length
> > of the current CPU's runqueue.
> >
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > ---
> >  arch/x86/kvm/x86.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 5bd550e..c0244a6 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -8358,6 +8358,9 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
> >  	struct kvm_vcpu *target = NULL;
> >  	struct kvm_apic_map *map;
> >
> > +	if (single_task_running())
> > +		goto no_yield;
> > +
>
> Hmm, could we push the result of kvm_sched_yield() down into the guest?
> Currently the guest bails after the first attempt, which is perfect for this
> scenario, but it seems like it would make sense to keep trying to yield if
> there are multiple preempted vCPUs and

That can race if the sender keeps yielding when there are multiple preempted
vCPUs: the vCPU we intend to yield to may have already finished handling the
IPI and been preempted again by the time the yielding sender is scheduled back
in and checks the next preempted candidate.

> the "problem" was with the target.  E.g.

At the beginning of kvm_sched_yield() we just look at the run queue length of
the source; it should be treated as a hint of under-commit, not a guarantee of
accuracy.

>
> 	/*
> 	 * Make sure other vCPUs get a chance to run if they need to.  Yield at
> 	 * most once, and stop trying to yield if the VMM says yielding isn't
> 	 * going to happen.
> 	 */
> 	for_each_cpu(cpu, mask) {
> 		if (vcpu_is_preempted(cpu)) {
> 			r = kvm_hypercall1(KVM_HC_SCHED_YIELD,
> 					   per_cpu(x86_cpu_to_apicid, cpu));
> 			if (r != -EBUSY)
> 				break;
> 		}
> 	}
>
> Unrelated to this patch, but it's the first time I've really looked at the
> guest side of directed yield...
>
> Wouldn't it also make sense for the guest side to hook .send_call_func_single_ipi?

The reschedule IPI goes through the .smp_send_reschedule hook, and there is a
lot of research aimed at accelerating idle vCPU reactivation. My original goal
was to boost synchronization primitives; I believe we need a lot of
benchmarking to weigh inter-VM fairness against the performance benefit before
hooking .send_call_func_single_ipi and .smp_send_reschedule.

> > 	vcpu->stat.directed_yield_attempted++;
>
> Shouldn't directed_yield_attempted be incremented in this case?  It doesn't seem
> fundamentally different than the case where the target was scheduled in between
> the guest's check and the host's processing of the yield request.  In both
> instances, the guest did indeed attempt to yield.

Yes, it should be treated as attempted. I put the new check above the counting
because this patch improves the success ratio in under-committed scenarios, and
counting only the real attempts makes it easy to see how much failure remains.
I can move the check after the counting in the next version.

    Wanpeng
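The "run queue length of the source" hint above is the single_task_running() check in
the patch. For reference, that helper is roughly the following (a sketch; see
kernel/sched/core.c for the authoritative version):

	/* True when this CPU's runqueue holds only the currently running task. */
	bool single_task_running(void)
	{
		return raw_rq()->nr_running == 1;
	}

So the bail-out only says "this pCPU had nothing else runnable at that instant"; it is
a heuristic for under-commit, not a host-wide guarantee.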
On Wed, May 12, 2021, Wanpeng Li wrote:
> On Wed, 12 May 2021 at 05:44, Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Sat, May 08, 2021, Wanpeng Li wrote:
> > > From: Wanpeng Li <wanpengli@tencent.com>
> > >
> > > In case of undercommitted scenarios, a vCPU can get scheduled easily;
> > > kvm_vcpu_yield_to adds extra overhead, and we can observe a lot of races
> > > between vcpu->ready being true and the yield failing because p->state is
> > > TASK_RUNNING. Let's bail out in such scenarios by checking the length
> > > of the current CPU's runqueue.
> > >
> > > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > > ---
> > >  arch/x86/kvm/x86.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index 5bd550e..c0244a6 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -8358,6 +8358,9 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
> > >  	struct kvm_vcpu *target = NULL;
> > >  	struct kvm_apic_map *map;
> > >
> > > +	if (single_task_running())
> > > +		goto no_yield;
> > > +
> >
> > Hmm, could we push the result of kvm_sched_yield() down into the guest?
> > Currently the guest bails after the first attempt, which is perfect for this
> > scenario, but it seems like it would make sense to keep trying to yield if
> > there are multiple preempted vCPUs and
>
> That can race if the sender keeps yielding when there are multiple preempted
> vCPUs: the vCPU we intend to yield to may have already finished handling the
> IPI and been preempted again by the time the yielding sender is scheduled back
> in and checks the next preempted candidate.

Ah, right, don't want to penalize the happy case.

> > Unrelated to this patch, but it's the first time I've really looked at the
> > guest side of directed yield...
> >
> > Wouldn't it also make sense for the guest side to hook .send_call_func_single_ipi?
>
> The reschedule IPI goes through the .smp_send_reschedule hook, and there is a
> lot of research aimed at accelerating idle vCPU reactivation. My original goal
> was to boost synchronization primitives; I believe we need a lot of
> benchmarking to weigh inter-VM fairness against the performance benefit before
> hooking .send_call_func_single_ipi and .smp_send_reschedule.

I was thinking of the 2 vCPU case.  If the VM has 2 vCPUs, then this

	/*
	 * Choose the most efficient way to send an IPI. Note that the
	 * number of CPUs might be zero due to concurrent changes to the
	 * provided mask.
	 */
	if (nr_cpus == 1)
		send_call_function_single_ipi(last_cpu);
	else if (likely(nr_cpus > 1))
		arch_send_call_function_ipi_mask(cfd->cpumask_ipi);

means .send_call_func_single_ipi() will always be used to send an IPI to the
other vCPU, and thus 2 vCPU VMs will never utilize PV yield.
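A hypothetical sketch of what hooking .send_call_func_single_ipi on the guest side
could look like, mirroring the existing multi-target hook; the function below does not
exist in the kernel and is purely illustrative:

	/*
	 * Hypothetical: let the single-target IPI path (always taken by 2 vCPU
	 * VMs, per the smp.c snippet above) also attempt a PV yield.
	 */
	static void kvm_smp_send_call_func_single_ipi(int cpu)
	{
		native_send_call_func_single_ipi(cpu);

		/* If the target vCPU is preempted, ask the host to boost it. */
		if (vcpu_is_preempted(cpu))
			kvm_hypercall1(KVM_HC_SCHED_YIELD,
				       per_cpu(x86_cpu_to_apicid, cpu));
	}

Whether the extra hypercall pays off here is exactly the fairness-versus-benefit
benchmarking question raised above.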
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bd550e..c0244a6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8358,6 +8358,9 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
 	struct kvm_vcpu *target = NULL;
 	struct kvm_apic_map *map;
 
+	if (single_task_running())
+		goto no_yield;
+
 	vcpu->stat.directed_yield_attempted++;
 
 	rcu_read_lock();
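Given the agreement above that a bail-out should still count as an attempt, a next
version would presumably reorder the hunk along these lines (a sketch of the intended
ordering, not the actual follow-up patch):

	static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
	{
		struct kvm_vcpu *target = NULL;
		struct kvm_apic_map *map;

		vcpu->stat.directed_yield_attempted++;

		/* Bail if this pCPU has nothing else to run; still counted as an attempt. */
		if (single_task_running())
			goto no_yield;

		rcu_read_lock();
		...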