| Message ID | CAPm50aK9oe-m5QWfrFjzGx_vvNveA+U6-Fs3KD5+Zq5RZ+UhDg@mail.gmail.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | kvm: x86: Reduce unnecessary function call |
On Fri, Oct 07, 2022, Hao Peng wrote:
> From: Peng Hao <flyingpeng@tencent.com>
>
> kvm->lock is taken immediately before mutex_is_locked(kvm->lock) is
> evaluated, so there is no need to call mutex_is_locked().
>
> Signed-off-by: Peng Hao <flyingpeng@tencent.com>
> ---
>  arch/x86/kvm/pmu.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 02f9e4f245bd..8a7dbe2c469a 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -601,8 +601,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
>  	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
>
>  	mutex_lock(&kvm->lock);
> -	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
> -				     mutex_is_locked(&kvm->lock));
> +	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter, 1);

I'd prefer to keep the mutex_is_locked() call, even though it's quite silly,
as it self-documents what is being used to protect writes to pmu_event_filter.

The third parameter is evaluated iff CONFIG_PROVE_RCU=y, which is the complete
opposite of performance sensitive, so in practice there's no real downside to
the somewhat superfluous call.

>  	mutex_unlock(&kvm->lock);
>
>  	synchronize_srcu_expedited(&kvm->srcu);
> --
> 2.27.0
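For context, the point about the third parameter follows from how rcu_replace_pointer() is defined. A paraphrase of the macro from include/linux/rcupdate.h (exact text may vary by kernel version): the condition `c` is forwarded to rcu_dereference_protected(), whose RCU_LOCKDEP_WARN() check is only compiled in when CONFIG_PROVE_RCU=y, so on production configs the expression vanishes entirely.

```c
/*
 * Paraphrased from include/linux/rcupdate.h: swap in @ptr, return the
 * old value. 'c' is the lockdep assertion that writers hold the lock
 * protecting @rcu_ptr; it is evaluated only under CONFIG_PROVE_RCU=y.
 */
#define rcu_replace_pointer(rcu_ptr, ptr, c)				\
({									\
	typeof(ptr) __tmp = rcu_dereference_protected((rcu_ptr), (c));	\
	rcu_assign_pointer((rcu_ptr), (ptr));				\
	__tmp;								\
})
```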
```diff
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 02f9e4f245bd..8a7dbe2c469a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -601,8 +601,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);

 	mutex_lock(&kvm->lock);
-	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
-				     mutex_is_locked(&kvm->lock));
+	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter, 1);
 	mutex_unlock(&kvm->lock);
```
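A minimal sketch of the self-documenting writer pattern under discussion, using a hypothetical `foo` structure rather than the real KVM types (the names `foo`, `foo_filter`, and `foo_set_filter` are illustrative only). It uses lockdep_is_held() as the condition, which is the more common idiom; mutex_is_locked(), as in the KVM code above, documents the same locking rule:

```c
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>

struct foo_filter;

struct foo {
	struct mutex lock;		/* protects writers of @filter */
	struct foo_filter __rcu *filter;
};

/*
 * Publish @new and return the old filter. The caller must only free
 * the returned pointer after a grace period (e.g. synchronize_rcu()),
 * since readers may still hold references to it.
 */
static struct foo_filter *foo_set_filter(struct foo *f,
					 struct foo_filter *new)
{
	struct foo_filter *old;

	mutex_lock(&f->lock);
	/*
	 * The third argument documents that f->lock protects writers
	 * and, under CONFIG_PROVE_RCU=y, makes lockdep complain if the
	 * update ever happens without the lock held.
	 */
	old = rcu_replace_pointer(f->filter, new, lockdep_is_held(&f->lock));
	mutex_unlock(&f->lock);

	return old;
}
```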