[v5,0/4] KVM: LAPIC: Implement Exitless Timer

Message ID 1561110002-4438-1-git-send-email-wanpengli@tencent.com (mailing list archive)

Message

Wanpeng Li June 21, 2019, 9:39 a.m. UTC
Dedicated instances are currently disturbed by unnecessary jitter because
the emulated lapic timers fire on the same pCPUs on which the vCPUs reside.
Unlike ARM, Intel provides no hardware virtual timer for the guest, so both
programming the timer in the guest and the emulated timer firing incur
vmexits. This patchset avoids the vmexits incurred when the emulated timer
fires in the dedicated-instance scenario.

When nohz_full is enabled in the dedicated-instance scenario, unpinned
timers are moved to the nearest busy housekeeping CPU, after commit
9642d18eee2cd ("nohz: Affine unpinned timers to housekeepers") and commit
444969223c8 ("sched/nohz: Fix affine unpinned timers mess"). However,
KVM always pins the lapic timer to the pCPU on which the vCPU resides; the
reason is explained in commit 61abdbe0 ("kvm: x86: make lapic hrtimer
pinned"). These emulated timers can actually be offloaded to the
housekeeping CPUs, since APICv has become common in recent years: once the
emulated timer fires, the guest timer interrupt is injected via posted
interrupt, which is delivered by a housekeeping CPU.

The host admin should fine-tune the setup, e.g. in the dedicated-instance
scenario have nohz_full cover the pCPUs on which the vCPUs reside, keep
several spare pCPUs for busy housekeeping, and disable mwait/hlt/pause
vmexits so that vCPUs stay in non-root mode; a ~3% redis performance
benefit can then be observed on a Skylake server.
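As a rough sketch, such a host setup might look like the following (the CPU
numbers are illustrative; kvm.pi_inject_timer is the module parameter added
by this series, and QEMU's -overcommit cpu-pm=on is one way to disable
mwait/hlt/pause exits via KVM_CAP_X86_DISABLE_EXITS):

```shell
# Host kernel command line (illustrative CPU numbers): reserve pCPUs 1-7
# for vCPUs, leaving CPU 0 as a busy housekeeping CPU:
#   nohz_full=1-7 rcu_nocbs=1-7

# Enable posted-interrupt timer injection (module parameter from this series)
echo 1 > /sys/module/kvm/parameters/pi_inject_timer

# Start the guest with mwait/hlt/pause exits disabled so vCPUs stay in
# non-root mode
qemu-system-x86_64 -enable-kvm -smp 7 -overcommit cpu-pm=on ...
```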

w/o patchset:

            VM-EXIT  Samples  Samples%  Time%   Min Time  Max Time   Avg time

EXTERNAL_INTERRUPT    42916    49.43%   39.30%   0.47us   106.09us   0.71us ( +-   1.09% )

w/ patchset:

            VM-EXIT  Samples  Samples%  Time%   Min Time  Max Time   Avg time

EXTERNAL_INTERRUPT    6871     9.29%     2.96%   0.44us    57.88us   0.72us ( +-   4.02% )
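The tables above match the output format of perf kvm stat; assuming that is
how they were gathered, they can be reproduced roughly with:

```shell
# Record VM-exit events from a running guest (replace $QEMU_PID with the
# QEMU/KVM process id) for a 30-second window
perf kvm stat record -p $QEMU_PID sleep 30

# Report samples, sample%, time% and min/max/avg latency per exit reason
perf kvm stat report --event=vmexit
```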

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>

v4 -> v5:
 * update patch description in patch 1/4
 * feed latest apic->lapic_timer.expired_tscdeadline to kvm_wait_lapic_expire()
 * squash advance timer handling to patch 2/4

v3 -> v4:
 * drop HRTIMER_MODE_ABS_PINNED, add a kick after setting the pending timer
 * don't inject an already-expired timer via posted interrupt

v2 -> v3:
 * disarm the vmx preemption timer when posted_interrupt_inject_timer_enabled()
 * check kvm_hlt_in_guest instead

v1 -> v2:
 * check vcpu_halt_in_guest
 * move module parameter from kvm-intel to kvm
 * add housekeeping_enabled
 * rename apic_timer_expired_pi to kvm_apic_inject_pending_timer_irqs


Wanpeng Li (4):
  KVM: LAPIC: Make lapic timer unpinned
  KVM: LAPIC: Inject timer interrupt via posted interrupt
  KVM: LAPIC: Ignore timer migration when lapic timer is injected by pi
  KVM: LAPIC: Don't inject already-expired timer via posted interrupt

 arch/x86/kvm/lapic.c            | 68 +++++++++++++++++++++++++++--------------
 arch/x86/kvm/lapic.h            |  3 +-
 arch/x86/kvm/svm.c              |  2 +-
 arch/x86/kvm/vmx/vmx.c          |  5 +--
 arch/x86/kvm/x86.c              | 11 ++++---
 arch/x86/kvm/x86.h              |  2 ++
 include/linux/sched/isolation.h |  2 ++
 kernel/sched/isolation.c        |  6 ++++
 8 files changed, 67 insertions(+), 32 deletions(-)

Comments

Paolo Bonzini July 2, 2019, 4:38 p.m. UTC | #1
On 21/06/19 11:39, Wanpeng Li wrote:
> Dedicated instances are currently disturbed by unnecessary jitter due
> [...]

Marcelo,

does this patch work for you or can you still see the oops?

Thanks,

Paolo

Marcelo Tosatti July 2, 2019, 10:23 p.m. UTC | #2
On Tue, Jul 02, 2019 at 06:38:56PM +0200, Paolo Bonzini wrote:
> On 21/06/19 11:39, Wanpeng Li wrote:
> > Dedicated instances are currently disturbed by unnecessary jitter due
> > [...]
> 
> Marcelo,
> 
> does this patch work for you or can you still see the oops?

Hi Paolo,

No more oopses with kvm/queue. Can you include:

Index: kvm/arch/x86/kvm/lapic.c
===================================================================
--- kvm.orig/arch/x86/kvm/lapic.c
+++ kvm/arch/x86/kvm/lapic.c
@@ -124,8 +124,7 @@ static inline u32 kvm_x2apic_id(struct k
 
 bool posted_interrupt_inject_timer(struct kvm_vcpu *vcpu)
 {
-	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu) &&
-		kvm_hlt_in_guest(vcpu->kvm);
+	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu);
 }
 EXPORT_SYMBOL_GPL(posted_interrupt_inject_timer);
 
However, for some reason (the hrtimer subsystem's responsibility) with cyclictest -i 200
on the guest, the timer runs on the local CPU:

       CPU 1/KVM-9454  [003] d..2   881.674196: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674200: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d.h.   881.674387: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   881.674393: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674395: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674399: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d.h.   881.674586: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   881.674593: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674595: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674599: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d.h.   881.674787: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   881.674793: get_nohz_timer_target: get_nohz_timer_target 3->0
       CPU 1/KVM-9454  [003] d..2   881.674795: get_nohz_timer_target: get_nohz_timer_target 3->0

But on boot:

       CPU 1/KVM-9454  [003] d..2   578.625394: get_nohz_timer_target: get_nohz_timer_target 3->0
          <idle>-0     [000] d.h1   578.626390: apic_timer_fn <-__hrtimer_run_queues
          <idle>-0     [000] d.h1   578.626394: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   578.626401: get_nohz_timer_target: get_nohz_timer_target 3->0
          <idle>-0     [000] d.h1   578.628397: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   578.628407: get_nohz_timer_target: get_nohz_timer_target 3->0
          <idle>-0     [000] d.h1   578.631403: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   578.631413: get_nohz_timer_target: get_nohz_timer_target 3->0
          <idle>-0     [000] d.h1   578.635409: apic_timer_fn <-__hrtimer_run_queues
       CPU 1/KVM-9454  [003] d..2   578.635419: get_nohz_timer_target: get_nohz_timer_target 3->0
          <idle>-0     [000] d.h1   578.640415: apic_timer_fn <-__hrtimer_run_queues

Thanks.
Wanpeng Li July 3, 2019, 12:47 a.m. UTC | #3
On Wed, 3 Jul 2019 at 06:23, Marcelo Tosatti <mtosatti@redhat.com> wrote:
>
> On Tue, Jul 02, 2019 at 06:38:56PM +0200, Paolo Bonzini wrote:
> > On 21/06/19 11:39, Wanpeng Li wrote:
> > > Dedicated instances are currently disturbed by unnecessary jitter due
> > > [...]
> >
> > Marcelo,
> >
> > does this patch work for you or can you still see the oops?
>
> Hi Paolo,
>
> No more oopses with kvm/queue. Can you include:

Cool, thanks for confirming, Marcelo!

>
> [...]
>
> However, for some reason (hrtimer subsystems responsability) with cyclictest -i 200
> on the guest, the timer runs on the local CPU:
>
>        CPU 1/KVM-9454  [003] d..2   881.674196: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674200: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d.h.   881.674387: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   881.674393: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674395: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674399: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d.h.   881.674586: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   881.674593: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674595: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674599: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d.h.   881.674787: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   881.674793: get_nohz_timer_target: get_nohz_timer_target 3->0
>        CPU 1/KVM-9454  [003] d..2   881.674795: get_nohz_timer_target: get_nohz_timer_target 3->0
>
> But on boot:
>
>        CPU 1/KVM-9454  [003] d..2   578.625394: get_nohz_timer_target: get_nohz_timer_target 3->0
>           <idle>-0     [000] d.h1   578.626390: apic_timer_fn <-__hrtimer_run_queues
>           <idle>-0     [000] d.h1   578.626394: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   578.626401: get_nohz_timer_target: get_nohz_timer_target 3->0
>           <idle>-0     [000] d.h1   578.628397: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   578.628407: get_nohz_timer_target: get_nohz_timer_target 3->0
>           <idle>-0     [000] d.h1   578.631403: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   578.631413: get_nohz_timer_target: get_nohz_timer_target 3->0
>           <idle>-0     [000] d.h1   578.635409: apic_timer_fn <-__hrtimer_run_queues
>        CPU 1/KVM-9454  [003] d..2   578.635419: get_nohz_timer_target: get_nohz_timer_target 3->0
>           <idle>-0     [000] d.h1   578.640415: apic_timer_fn <-__hrtimer_run_queues

You have an idle housekeeping cpu (cpu 0); however, most housekeeping
cpus will be busy in a production environment, to avoid wasting money.
get_nohz_timer_target() will find a busy housekeeping cpu, but the timer
migration will fail if the timer is the first expiring timer on the new
target (see the comment above switch_hrtimer_base()). Please try
taskset -c 0 stress --cpu 1 on your host; you can then observe (through
/proc/timer_list) apic_timer_fn running on cpu 0 most of the time and
only sporadically on the local cpu.
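The experiment described above can be scripted as follows (the grep/watch
invocation for inspecting /proc/timer_list is illustrative):

```shell
# Keep housekeeping CPU 0 busy so get_nohz_timer_target() selects it as
# a busy target for the unpinned lapic hrtimer
taskset -c 0 stress --cpu 1 &

# Periodically dump the hrtimer queues; note which "cpu:" section the
# apic_timer_fn entry appears in (expected: cpu 0 most of the time)
watch -n 1 'grep -e "^cpu:" -e apic_timer_fn /proc/timer_list'
```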

Regards,
Wanpeng Li
Wanpeng Li July 3, 2019, 1:01 a.m. UTC | #4
On Wed, 3 Jul 2019 at 08:47, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> [...]
>
> Please try taskset -c 0 stress --cpu 1 on your
> host, you can observe (through /proc/timer_list) apic_timer_fn running
> on cpu 0 most of the time and sporadically on the local cpu.

Or, if you have a somewhat bigger VM or multiple VMs, the apic_timer_fn
calls from all the virtual lapics will keep a housekeeping cpu busy. :)

Regards,
Wanpeng Li