Message ID: 1547166387-19785-4-git-send-email-vgupta@synopsys.com (mailing list archive)
State: New, archived
Series: Replace opencoded set_mask_bits
On 1/10/19 4:26 PM, Vineet Gupta wrote:
> | > Also, set_mask_bits is used in fs quite a bit and we can possibly come up
> | > with a generic llsc based implementation (w/o the cmpxchg loop)
> |
> | May I also suggest changing the return value of set_mask_bits() to old.
> |
> | You can compute the new value given old, but you cannot compute the old
> | value given new, therefore old is the better return value. Also, no
> | current user seems to use the return value, so changing it is without
> | risk.
>
> Link: http://lkml.kernel.org/g/20150807110955.GH16853@twins.programming.kicks-ass.net
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Cc: Miklos Szeredi <mszeredi@redhat.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Jani Nikula <jani.nikula@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
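The rationale in the quoted commit message can be seen in a small userspace model of set_mask_bits() (a sketch only: the `_model` names are made up here, and GCC/Clang `__atomic` builtins stand in for the kernel's cmpxchg — this is not the kernel macro itself). Returning old lets the caller derive new; the reverse is impossible because the bits under the mask are destroyed:

```c
#include <stdbool.h>

/* Userspace model of set_mask_bits(ptr, mask, bits) returning the *old*
 * value, mirroring the patch under discussion. */
static unsigned long set_mask_bits_model(unsigned long *ptr,
                                         unsigned long mask,
                                         unsigned long bits)
{
    unsigned long old = __atomic_load_n(ptr, __ATOMIC_RELAXED);
    unsigned long new;

    do {
        new = (old & ~mask) | bits;
        /* on failure the builtin refreshes 'old' with the current value,
         * so the loop recomputes 'new' from fresh data */
    } while (!__atomic_compare_exchange_n(ptr, &old, new, false,
                                          __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
    return old;
}

/* Given 'old', a caller can always reconstruct 'new'... */
static unsigned long new_from_old(unsigned long old, unsigned long mask,
                                  unsigned long bits)
{
    return (old & ~mask) | bits;
}
/* ...but given only 'new', (old & mask) was overwritten by 'bits' and
 * cannot be recovered -- hence old is the better return value. */
```

This is why a caller holding the return value loses nothing by the change, while callers of the old API could never learn what the masked bits used to be.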
On Thu, Jan 10, 2019 at 04:26:27PM -0800, Vineet Gupta wrote:
> @@ -246,7 +246,7 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
>  		new__ = (old__ & ~mask__) | bits__;		\
>  	} while (cmpxchg(ptr, old__, new__) != old__);		\

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 705f7c442691..2060d26a35f5 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -241,10 +241,10 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
 	const typeof(*(ptr)) mask__ = (mask), bits__ = (bits);	\
 	typeof(*(ptr)) old__, new__;				\
 								\
+	old__ = READ_ONCE(*(ptr));				\
 	do {							\
-		old__ = READ_ONCE(*(ptr));			\
 		new__ = (old__ & ~mask__) | bits__;		\
-	} while (cmpxchg(ptr, old__, new__) != old__);		\
+	} while (!try_cmpxchg(ptr, &old__, new__));		\
 								\
 	new__;							\
 })

While there you probably want something like the above... although, looking at it now, we seem to have 'forgotten' to add try_cmpxchg to the generic code :/
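For readers unfamiliar with the API being suggested: try_cmpxchg() differs from cmpxchg() in that it returns success/failure and, on failure, hands the observed value back through its second argument, so the caller need not re-read. A rough userspace sketch of that contract (the `_model` names are illustrative, not kernel code; the `__atomic` builtin stands in for a real cmpxchg):

```c
#include <stdbool.h>

/* cmpxchg() semantics: returns the value actually found at *ptr. */
static unsigned long cmpxchg_model(unsigned long *ptr, unsigned long old,
                                   unsigned long new)
{
    /* the builtin writes the observed value into 'old' on failure;
     * on success 'old' already equals what was observed */
    __atomic_compare_exchange_n(ptr, &old, new, false,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return old;
}

/* try_cmpxchg() semantics sketched on top of plain cmpxchg():
 * boolean result, and the current value handed back via '*old'. */
static bool try_cmpxchg_model(unsigned long *ptr, unsigned long *old,
                              unsigned long new)
{
    unsigned long seen = cmpxchg_model(ptr, *old, new);
    bool success = (seen == *old);

    *old = seen;    /* caller's expected value is refreshed for free */
    return success;
}
```

The refreshed `*old` is exactly what lets the READ_ONCE() be hoisted out of the retry loop in the diff above.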
On 1/11/19 1:24 AM, Peter Zijlstra wrote:
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index 705f7c442691..2060d26a35f5 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -241,10 +241,10 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
>  	const typeof(*(ptr)) mask__ = (mask), bits__ = (bits);	\
>  	typeof(*(ptr)) old__, new__;				\
>  								\
> +	old__ = READ_ONCE(*(ptr));				\
>  	do {							\
> -		old__ = READ_ONCE(*(ptr));			\
>  		new__ = (old__ & ~mask__) | bits__;		\
> -	} while (cmpxchg(ptr, old__, new__) != old__);		\
> +	} while (!try_cmpxchg(ptr, &old__, new__));		\
>  								\
>  	new__;							\
> })
>
> While there you probably want something like the above...

As a separate change perhaps, so that a revert (unlikely as it might be) could be done with less pain.

> although, looking at it now, we seem to have 'forgotten' to add try_cmpxchg to the
> generic code :/

So it _has_ to be a separate change ;-)

But can we even provide a sane generic try_cmpxchg? The asm-generic cmpxchg relies on local irq save etc., so it is clearly only there to keep a new arch from failing to compile. atomic*_cmpxchg() is a different story, since atomics have to be provided by the arch anyway.

Anyhow, what is more interesting is the try_cmpxchg API itself. Commit a9ebf306f52c756 introduced the use of try_cmpxchg(), which indeed makes the looping "nicer" to read and gives obvious code-gen improvements, turning

	for (;;) {
		new = val $op $imm;
		old = cmpxchg(ptr, val, new);
		if (old == val)
			break;
		val = old;
	}

into

	do {
	} while (!try_cmpxchg(ptr, &val, val $op $imm));

But on pure LL/SC retry based arches, we still end up with generated code having 2 loops; we discussed something similar a while back, see [1]. The first loop is inside the inline asm, retrying the LL/SC; the outer one comes from the C code above. The explicit return value of try_cmpxchg() means setting up a register with the boolean status of the cmpxchg (AFAIKR ARMv7 already does that, but ARC e.g. uses a CPU flag, thus requiring an additional insn or two). We could arguably remove the inline asm loop and retry the LL/SC from the outer loop, but it seems cleaner to keep the retry where it belongs. Also, under the hood, try_cmpxchg() would end up re-reading the location, for the issue fixed by commit 44fe84459faf1a.

Heck, it would all be simpler if we could express this w/o the use of cmpxchg:

	try_some_op(ptr, &val, val $op $imm);

(P.S. the horrible API name is for indicative purposes only.) This would remove the outer loop completely and also avoid any re-reads due to the semantics of cmpxchg etc.

[1] https://www.spinics.net/lists/kernel/msg2029217.html
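The two loop shapes being compared can be modeled in userspace C (a sketch only: function names are made up, and the `__atomic` builtin plays the role of try_cmpxchg, so the real single-insn/flag code-gen differences discussed above obviously don't show here). The cmpxchg style compares the returned value and reloads by hand; the try_cmpxchg style tests a boolean and gets the fresh value back for free:

```c
#include <stdbool.h>

/* The for(;;) cmpxchg pattern from the mail, modeled literally. */
static unsigned long fetch_or_cmpxchg(unsigned long *ptr, unsigned long imm)
{
    unsigned long val = __atomic_load_n(ptr, __ATOMIC_RELAXED);

    for (;;) {
        unsigned long new = val | imm;      /* val $op $imm */
        unsigned long expected = val;

        /* cmpxchg(ptr, val, new): the builtin leaves the observed
         * value in 'expected', i.e. cmpxchg's return value */
        __atomic_compare_exchange_n(ptr, &expected, new, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        if (expected == val)                /* old == val -> done */
            break;
        val = expected;                     /* val = old; retry */
    }
    return val;                             /* returns old */
}

/* The try_cmpxchg pattern: the builtin already has try_cmpxchg
 * semantics, so the retry collapses to a single loop. */
static unsigned long fetch_or_try_cmpxchg(unsigned long *ptr,
                                          unsigned long imm)
{
    unsigned long val = __atomic_load_n(ptr, __ATOMIC_RELAXED);

    while (!__atomic_compare_exchange_n(ptr, &val, val | imm, false,
                                        __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
        ;   /* on failure 'val' now holds the fresh value */
    return val;                             /* returns old */
}
```

Both produce the same result; the difference is purely in how much comparison and reload bookkeeping the caller writes out, which is the code-gen point made above.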
On Thu, Jan 10, 2019 at 04:26:27PM -0800, Vineet Gupta wrote:
> | > Also, set_mask_bits is used in fs quite a bit and we can possibly come up
> | > with a generic llsc based implementation (w/o the cmpxchg loop)
> |
> | May I also suggest changing the return value of set_mask_bits() to old.
> |
> | You can compute the new value given old, but you cannot compute the old
> | value given new, therefore old is the better return value. Also, no
> | current user seems to use the return value, so changing it is without
> | risk.
>
> Link: http://lkml.kernel.org/g/20150807110955.GH16853@twins.programming.kicks-ass.net
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Cc: Miklos Szeredi <mszeredi@redhat.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Jani Nikula <jani.nikula@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
> ---
>  include/linux/bitops.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index 705f7c442691..602af23b98c7 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -246,7 +246,7 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
>  		new__ = (old__ & ~mask__) | bits__;		\
>  	} while (cmpxchg(ptr, old__, new__) != old__);		\
>  								\
> -	new__;							\
> +	old__;							\
>  })
>  #endif

Acked-by: Will Deacon <will.deacon@arm.com>

May also explain why no in-tree users appear to use the return value!

Will