Message ID | 20200207163410.31276-1-maz@kernel.org
---|---
State | New, archived
Series | KVM: Disable preemption in kvm_get_running_vcpu()
On Fri, Feb 07, 2020 at 04:34:10PM +0000, Marc Zyngier wrote:
> Accessing a per-cpu variable only makes sense when preemption is
> disabled (and the kernel does check this when the right debug options
> are switched on).
>
> For kvm_get_running_vcpu(), it is fine to return the value after
> re-enabling preemption, as the preempt notifiers will make sure that
> this is kept consistent across task migration (the comment above the
> function hints at it, but lacks the crucial preemption management).
>
> While we're at it, move the comment from the ARM code, which explains
> why the whole thing works.
>
> Fixes: 7495e22bb165 ("KVM: Move running VCPU from ARM to common code").
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Zenghui Yu <yuzenghui@huawei.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/318984f6-bc36-33a3-abc6-bf2295974b06@huawei.com

Reviewed-by: Peter Xu <peterx@redhat.com>
On 2020/2/8 0:34, Marc Zyngier wrote:
> Accessing a per-cpu variable only makes sense when preemption is
> disabled (and the kernel does check this when the right debug options
> are switched on).
>
> For kvm_get_running_vcpu(), it is fine to return the value after
> re-enabling preemption, as the preempt notifiers will make sure that
> this is kept consistent across task migration (the comment above the
> function hints at it, but lacks the crucial preemption management).
>
> While we're at it, move the comment from the ARM code, which explains
> why the whole thing works.
>
> Fixes: 7495e22bb165 ("KVM: Move running VCPU from ARM to common code").
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Zenghui Yu <yuzenghui@huawei.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/318984f6-bc36-33a3-abc6-bf2295974b06@huawei.com

Tested-by: Zenghui Yu <yuzenghui@huawei.com>

on top of v5.6-rc1. Thanks
On 07/02/20 17:34, Marc Zyngier wrote:
> Accessing a per-cpu variable only makes sense when preemption is
> disabled (and the kernel does check this when the right debug options
> are switched on).
>
> For kvm_get_running_vcpu(), it is fine to return the value after
> re-enabling preemption, as the preempt notifiers will make sure that
> this is kept consistent across task migration (the comment above the
> function hints at it, but lacks the crucial preemption management).
>
> While we're at it, move the comment from the ARM code, which explains
> why the whole thing works.
>
> Fixes: 7495e22bb165 ("KVM: Move running VCPU from ARM to common code").
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Zenghui Yu <yuzenghui@huawei.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/318984f6-bc36-33a3-abc6-bf2295974b06@huawei.com

Queued, thanks.

Paolo

> ---
>  virt/kvm/arm/vgic/vgic-mmio.c | 12 ------------
>  virt/kvm/kvm_main.c           | 16 +++++++++++++---
>  2 files changed, 13 insertions(+), 15 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index d656ebd5f9d4..97fb2a40e6ba 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -179,18 +179,6 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
>  	return value;
>  }
>
> -/*
> - * This function will return the VCPU that performed the MMIO access and
> - * trapped from within the VM, and will return NULL if this is a userspace
> - * access.
> - *
> - * We can disable preemption locally around accessing the per-CPU variable,
> - * and use the resolved vcpu pointer after enabling preemption again, because
> - * even if the current thread is migrated to another CPU, reading the per-CPU
> - * value later will give us the same value as we update the per-CPU variable
> - * in the preempt notifier handlers.
> - */
> -
>  /* Must be called with irq->irq_lock held */
>  static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>  				 bool is_uaccess)
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 67ae2d5c37b2..70f03ce0e5c1 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4409,12 +4409,22 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>
>  /**
>   * kvm_get_running_vcpu - get the vcpu running on the current CPU.
> - * Thanks to preempt notifiers, this can also be called from
> - * preemptible context.
> + *
> + * We can disable preemption locally around accessing the per-CPU variable,
> + * and use the resolved vcpu pointer after enabling preemption again,
> + * because even if the current thread is migrated to another CPU, reading
> + * the per-CPU value later will give us the same value as we update the
> + * per-CPU variable in the preempt notifier handlers.
>   */
>  struct kvm_vcpu *kvm_get_running_vcpu(void)
>  {
> -	return __this_cpu_read(kvm_running_vcpu);
> +	struct kvm_vcpu *vcpu;
> +
> +	preempt_disable();
> +	vcpu = __this_cpu_read(kvm_running_vcpu);
> +	preempt_enable();
> +
> +	return vcpu;
>  }
>
>  /**
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index d656ebd5f9d4..97fb2a40e6ba 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -179,18 +179,6 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	return value;
 }
 
-/*
- * This function will return the VCPU that performed the MMIO access and
- * trapped from within the VM, and will return NULL if this is a userspace
- * access.
- *
- * We can disable preemption locally around accessing the per-CPU variable,
- * and use the resolved vcpu pointer after enabling preemption again, because
- * even if the current thread is migrated to another CPU, reading the per-CPU
- * value later will give us the same value as we update the per-CPU variable
- * in the preempt notifier handlers.
- */
-
 /* Must be called with irq->irq_lock held */
 static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				 bool is_uaccess)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 67ae2d5c37b2..70f03ce0e5c1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4409,12 +4409,22 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 
 /**
  * kvm_get_running_vcpu - get the vcpu running on the current CPU.
- * Thanks to preempt notifiers, this can also be called from
- * preemptible context.
+ *
+ * We can disable preemption locally around accessing the per-CPU variable,
+ * and use the resolved vcpu pointer after enabling preemption again,
+ * because even if the current thread is migrated to another CPU, reading
+ * the per-CPU value later will give us the same value as we update the
+ * per-CPU variable in the preempt notifier handlers.
  */
 struct kvm_vcpu *kvm_get_running_vcpu(void)
 {
-	return __this_cpu_read(kvm_running_vcpu);
+	struct kvm_vcpu *vcpu;
+
+	preempt_disable();
+	vcpu = __this_cpu_read(kvm_running_vcpu);
+	preempt_enable();
+
+	return vcpu;
 }
 
 /**
Accessing a per-cpu variable only makes sense when preemption is
disabled (and the kernel does check this when the right debug options
are switched on).

For kvm_get_running_vcpu(), it is fine to return the value after
re-enabling preemption, as the preempt notifiers will make sure that
this is kept consistent across task migration (the comment above the
function hints at it, but lacks the crucial preemption management).

While we're at it, move the comment from the ARM code, which explains
why the whole thing works.

Fixes: 7495e22bb165 ("KVM: Move running VCPU from ARM to common code").
Cc: Peter Xu <peterx@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/318984f6-bc36-33a3-abc6-bf2295974b06@huawei.com
---
 virt/kvm/arm/vgic/vgic-mmio.c | 12 ------------
 virt/kvm/kvm_main.c           | 16 +++++++++++++---
 2 files changed, 13 insertions(+), 15 deletions(-)