Message ID | 56F3DDF402000078000E008B@prv-mh.provo.novell.com (mailing list archive)
---|---
State | New, archived
On Thu, 2016-03-24 at 05:30 -0600, Jan Beulich wrote:
> Recursive locks know their current owner, and since we use the function
> solely to determine whether a particular lock is being held by the
> current CPU (which so far has been an imprecise check), make it actually
> check the owner for recursively acquired locks.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

Regards,
Dario
On 24/03/16 11:30, Jan Beulich wrote:
> Recursive locks know their current owner, and since we use the function
> solely to determine whether a particular lock is being held by the
> current CPU (which so far has been an imprecise check), make it actually
> check the owner for recursively acquired locks.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
On 24/03/16 11:30, Jan Beulich wrote:
> Recursive locks know their current owner, and since we use the function
> solely to determine whether a particular lock is being held by the
> current CPU (which so far has been an imprecise check), make it actually
> check the owner for recursively acquired locks.

What's the expected behaviour of _spin_is_locked() if the lock is held
by another CPU?

Before, it could return true if the lock was held by another CPU; now it
will always return false in that case.

David

> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -188,7 +188,9 @@ void _spin_unlock_irqrestore(spinlock_t
>  int _spin_is_locked(spinlock_t *lock)
>  {
>      check_lock(&lock->debug);
> -    return lock->tickets.head != lock->tickets.tail;
> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
> +           ? lock->tickets.head != lock->tickets.tail
> +           : lock->recurse_cpu == smp_processor_id();
>  }
>
>  int _spin_trylock(spinlock_t *lock)
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>>> On 24.03.16 at 16:55, <david.vrabel@citrix.com> wrote:
> On 24/03/16 11:30, Jan Beulich wrote:
>> Recursive locks know their current owner, and since we use the function
>> solely to determine whether a particular lock is being held by the
>> current CPU (which so far has been an imprecise check), make it actually
>> check the owner for recursively acquired locks.
>
> What's the expected behaviour of _spin_is_locked() if the lock is held
> by another CPU?
>
> Before, it could return true if the lock was held by another CPU; now it
> will always return false in that case.

Correct - hence the reference to this only being used for a limited
set of cases (read: ASSERT()s and the like).

Jan
On March 24, 2016 7:31pm, <jbeulich@suse.com> wrote:
> Recursive locks know their current owner, and since we use the function
> solely to determine whether a particular lock is being held by the
> current CPU (which so far has been an imprecise check), make it actually
> check the owner for recursively acquired locks.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -188,7 +188,9 @@ void _spin_unlock_irqrestore(spinlock_t
>  int _spin_is_locked(spinlock_t *lock)
>  {
>      check_lock(&lock->debug);
> -    return lock->tickets.head != lock->tickets.tail;
> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
> +           ? lock->tickets.head != lock->tickets.tail
> +           : lock->recurse_cpu == smp_processor_id();
>  }
>
>  int _spin_trylock(spinlock_t *lock)

Reviewed-by: Quan Xu <quan.xu@intel.com>

Thanks for this enhancement. I had not considered this case in my
previous patch series.

Quan
On Thu, Mar 24, 2016 at 4:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 24.03.16 at 16:55, <david.vrabel@citrix.com> wrote:
>> On 24/03/16 11:30, Jan Beulich wrote:
>>> Recursive locks know their current owner, and since we use the function
>>> solely to determine whether a particular lock is being held by the
>>> current CPU (which so far has been an imprecise check), make it actually
>>> check the owner for recursively acquired locks.
>>
>> What's the expected behaviour of _spin_is_locked() if the lock is held
>> by another CPU?
>>
>> Before, it could return true if the lock was held by another CPU; now it
>> will always return false in that case.
>
> Correct - hence the reference to this only being used for a limited
> set of cases (read: ASSERT()s and the like).

A bunch of the mm locks add "_by_me" at the end of the function name.
Did spin_is_locked() used to have that as well?

In any case I suppose "spin_is_locked_by_someone()" is really pretty
useless information.

 -George
>>> On 29.03.16 at 12:36, <George.Dunlap@eu.citrix.com> wrote:
> On Thu, Mar 24, 2016 at 4:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 24.03.16 at 16:55, <david.vrabel@citrix.com> wrote:
>>> On 24/03/16 11:30, Jan Beulich wrote:
>>>> Recursive locks know their current owner, and since we use the function
>>>> solely to determine whether a particular lock is being held by the
>>>> current CPU (which so far has been an imprecise check), make it actually
>>>> check the owner for recursively acquired locks.
>>>
>>> What's the expected behaviour of _spin_is_locked() if the lock is held
>>> by another CPU?
>>>
>>> Before, it could return true if the lock was held by another CPU; now it
>>> will always return false in that case.
>>
>> Correct - hence the reference to this only being used for a limited
>> set of cases (read: ASSERT()s and the like).
>
> A bunch of the mm locks add "_by_me" at the end of the function name.
> Did spin_is_locked() used to have that as well?

I don't think it did. And I think the only other readily visible use case
of spin_is_locked() (testing whether a spin lock could be acquired without
delay) is bogus anyway, as it would better be performed by doing a
try-lock.

> In any case I suppose "spin_is_locked_by_someone()" is really pretty
> useless information.

Indeed, hence I didn't want to alter the function's name.

Jan
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -188,7 +188,9 @@ void _spin_unlock_irqrestore(spinlock_t
 int _spin_is_locked(spinlock_t *lock)
 {
     check_lock(&lock->debug);
-    return lock->tickets.head != lock->tickets.tail;
+    return lock->recurse_cpu == SPINLOCK_NO_CPU
+           ? lock->tickets.head != lock->tickets.tail
+           : lock->recurse_cpu == smp_processor_id();
 }

 int _spin_trylock(spinlock_t *lock)