Message ID: 1621260028-6467-1-git-send-email-wanpengli@tencent.com (mailing list archive)
State: New, archived
Series: [v3,1/5] KVM: exit halt polling on need_resched() for both book3s and generic halt-polling
On Mon, May 17, 2021 at 7:01 AM Wanpeng Li <kernellwp@gmail.com> wrote:
>
> From: Wanpeng Li <wanpengli@tencent.com>
>
> Inspired by commit 262de4102c7bb8 ("kvm: exit halt polling on need_resched()
> as well"), CFS_BANDWIDTH throttling will use resched_task() when there is
> just one task, to get that task to block. This was likely allowing VMs to
> overrun their quota when halt polling. Because PPC implements its own
> arch-specific halt-polling logic, the need_resched() check should be added
> there as well. This patch adds a helper function shared between the book3s
> and generic halt-polling loops.
>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Venkatesh Srinivas <venkateshs@chromium.org>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: David Matlack <dmatlack@google.com>
> Cc: Paul Mackerras <paulus@ozlabs.org>
> Cc: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>

Reviewed-by: David Matlack <dmatlack@google.com>

> ---
> v2 -> v3:
>  * add a helper function
> v1 -> v2:
>  * update patch description
>
>  arch/powerpc/kvm/book3s_hv.c | 2 +-
>  include/linux/kvm_host.h     | 2 ++
>  virt/kvm/kvm_main.c          | 9 +++++++--
>  3 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 28a80d240b76..360165df345b 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -3936,7 +3936,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
>  			break;
>  		}
>  		cur = ktime_get();
> -	} while (single_task_running() && ktime_before(cur, stop));
> +	} while (kvm_vcpu_can_block(cur, stop));
>
>  	spin_lock(&vc->lock);
>  	vc->vcore_state = VCORE_INACTIVE;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 2f34487e21f2..bf4fd60c4699 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1583,4 +1583,6 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
>  /* Max number of entries allowed for each kvm dirty ring */
>  #define KVM_DIRTY_RING_MAX_ENTRIES 65536
>
> +bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop);
> +
>  #endif
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 6b4feb92dc79..c81080667fd1 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2945,6 +2945,12 @@ update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns, bool waited)
>  		vcpu->stat.halt_poll_success_ns += poll_ns;
>  }
>
> +
> +bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop)

nit: kvm_vcpu_can_poll() would be a more accurate name for this function.

> +{
> +	return single_task_running() && !need_resched() && ktime_before(cur, stop);
> +}
> +
>  /*
>   * The vCPU has executed a HLT instruction with in-kernel mode enabled.
>   */
> @@ -2973,8 +2979,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  			goto out;
>  		}
>  		poll_end = cur = ktime_get();
> -	} while (single_task_running() && !need_resched() &&
> -		 ktime_before(cur, stop));
> +	} while (kvm_vcpu_can_block(cur, stop));
>  	}
>
>  	prepare_to_rcuwait(&vcpu->wait);
> --
> 2.25.1
>
On Mon, May 17, 2021 at 7:01 AM Wanpeng Li <kernellwp@gmail.com> wrote:
>
> From: Wanpeng Li <wanpengli@tencent.com>
>
> Inspired by commit 262de4102c7bb8 ("kvm: exit halt polling on need_resched()
> as well"), CFS_BANDWIDTH throttling will use resched_task() when there is
> just one task, to get that task to block. This was likely allowing VMs to
> overrun their quota when halt polling. Because PPC implements its own
> arch-specific halt-polling logic, the need_resched() check should be added
> there as well. This patch adds a helper function shared between the book3s
> and generic halt-polling loops.
>
> [...]

Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
On Tue, 18 May 2021 at 00:35, David Matlack <dmatlack@google.com> wrote:
>
> On Mon, May 17, 2021 at 7:01 AM Wanpeng Li <kernellwp@gmail.com> wrote:
> >
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > [...]
> >
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>
> Reviewed-by: David Matlack <dmatlack@google.com>
>
> > [...]
> >
> > +bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop)
>
> nit: kvm_vcpu_can_poll() would be a more accurate name for this function.

Do it in v4. :)

    Wanpeng
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 28a80d240b76..360165df345b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3936,7 +3936,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 			break;
 		}
 		cur = ktime_get();
-	} while (single_task_running() && ktime_before(cur, stop));
+	} while (kvm_vcpu_can_block(cur, stop));
 
 	spin_lock(&vc->lock);
 	vc->vcore_state = VCORE_INACTIVE;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2f34487e21f2..bf4fd60c4699 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1583,4 +1583,6 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES 65536
 
+bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop);
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6b4feb92dc79..c81080667fd1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2945,6 +2945,12 @@ update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns, bool waited)
 		vcpu->stat.halt_poll_success_ns += poll_ns;
 }
 
+
+bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop)
+{
+	return single_task_running() && !need_resched() && ktime_before(cur, stop);
+}
+
 /*
  * The vCPU has executed a HLT instruction with in-kernel mode enabled.
  */
@@ -2973,8 +2979,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 			goto out;
 		}
 		poll_end = cur = ktime_get();
-	} while (single_task_running() && !need_resched() &&
-		 ktime_before(cur, stop));
+	} while (kvm_vcpu_can_block(cur, stop));
 	}
 
 	prepare_to_rcuwait(&vcpu->wait);
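For readers skimming the thread, the behavior of the new helper can be illustrated outside the kernel. The sketch below takes only the body of kvm_vcpu_can_block() from the patch; the stub implementations of single_task_running(), need_resched(), ktime_before(), and the ktime_t typedef are hypothetical scaffolding added here so the predicate can be exercised in userspace, not the kernel's real definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the kernel's ktime_t (a signed 64-bit nanosecond count). */
typedef int64_t ktime_t;

/* Hypothetical knobs modeling the scheduler state the real functions query. */
static bool single_task_running_stub = true;
static bool need_resched_stub = false;

static bool single_task_running(void) { return single_task_running_stub; }
static bool need_resched(void) { return need_resched_stub; }
static bool ktime_before(ktime_t a, ktime_t b) { return a < b; }

/*
 * The shared predicate from the patch: halt polling may continue only while
 * this CPU is running a single task, no reschedule is pending, and the
 * polling deadline has not yet been reached.
 */
static bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop)
{
	return single_task_running() && !need_resched() && ktime_before(cur, stop);
}
```

The series' behavioral change is the need_resched() term: when CFS bandwidth throttling calls resched_task() on a CPU running only the vCPU task, the pending reschedule now ends polling immediately instead of letting the vCPU spin until the halt_poll_ns deadline, and the book3s loop picks up the same check by calling the helper.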