Message ID | 1362455801.8941.24.camel@hastur.hellion.org.uk (mailing list archive) |
---|---|
State | New, archived |
On Tue, Mar 05, 2013 at 03:56:41AM +0000, Ian Campbell wrote:
> > > diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
> > > index 94b4e90..5c27696 100644
> > > --- a/arch/arm/include/asm/xen/events.h
> > > +++ b/arch/arm/include/asm/xen/events.h
> > > @@ -15,4 +15,26 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
> > >  	return raw_irqs_disabled_flags(regs->ARM_cpsr);
> > >  }
> > >
> > > +/*
> > > + * We cannot use xchg because it does not support 8-byte
> > > + * values. However it is safe to use {ldr,str}exd directly because all
> > > + * platforms which Xen can run on support those instructions.
> >
> > Why does atomic64_cmpxchg not work here?
>
> Just that we don't want/need the cmp aspect, we don't mind if an extra
> bit gets set as we read the value, so long as we atomically read and set
> to zero.
>
> > > + */
> > > +static inline xen_ulong_t xchg_xen_ulong(xen_ulong_t *ptr, xen_ulong_t val)
> > > +{
> > > +	xen_ulong_t oldval;
> > > +	unsigned int tmp;
> > > +
> > > +	wmb();
> >
> > Based on atomic64_cmpxchg implementation, you could use smp_mb here
> > which avoids an outer cache flush.
>
> Good point.
>
> > > +	asm volatile("@ xchg_xen_ulong\n"
> > > +		"1: ldrexd %0, %H0, [%3]\n"
> > > +		" strexd %1, %2, %H2, [%3]\n"
> > > +		" teq %1, #0\n"
> > > +		" bne 1b"
> > > +		: "=&r" (oldval), "=&r" (tmp)
> > > +		: "r" (val), "r" (ptr)
> > > +		: "memory", "cc");
> >
> > And a smp_mb is needed here.
>
> I think for the specific caller which we have here it isn't strictly
> necessary, but for generic correctness I think you are right.
>
> Thanks for reviewing.
>
> Konrad, IIRC you have already picked this up (and sent to Linus?) so an

Yes.

> incremental fix is required?

See below. Why don't I wait a bit until you are back from conferences and
can post a nice series that fixes the smp_wmb() and also the atomic one
and has been run-time tested with Xen on ARM.
diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
index 5c27696..0e1f59e 100644
--- a/arch/arm/include/asm/xen/events.h
+++ b/arch/arm/include/asm/xen/events.h
@@ -25,7 +25,7 @@ static inline xen_ulong_t xchg_xen_ulong(xen_ulong_t *ptr, xen_ulong_t val)
 	xen_ulong_t oldval;
 	unsigned int tmp;
 
-	wmb();
+	smp_wmb();
 	asm volatile("@ xchg_xen_ulong\n"
 		"1: ldrexd %0, %H0, [%3]\n"
 		" strexd %1, %2, %H2, [%3]\n"
@@ -34,6 +34,7 @@ static inline xen_ulong_t xchg_xen_ulong(xen_ulong_t *ptr, xen_ulong_t val)
 		: "=&r" (oldval), "=&r" (tmp)
 		: "r" (val), "r" (ptr)
 		: "memory", "cc");
+	smp_wmb();
 	return oldval;
 }
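
The review above asks why atomic64_cmpxchg() is not used. For comparison only, here is a minimal sketch, not from the thread and not the code that was merged, of how the same read-and-clear could be built on the generic atomic64 API, assuming the word were held in an atomic64_t. The helper name xchg64_via_cmpxchg is hypothetical; the retry-on-compare-failure loop is exactly the "cmp aspect" that the ldrexd/strexd version does not need.

#include <linux/atomic.h>

/* Hypothetical alternative, for illustration only: emulate an 8-byte xchg
 * with atomic64_cmpxchg().  The outer loop must retry whenever another CPU
 * changes the value between the read and the cmpxchg; the ldrexd/strexd
 * version only needs its store-exclusive to succeed, regardless of what
 * value was read.
 */
static inline long long xchg64_via_cmpxchg(atomic64_t *ptr, long long new)
{
	long long old;

	do {
		old = atomic64_read(ptr);
	} while (atomic64_cmpxchg(ptr, old, new) != old);

	return old;
}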
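
On "the specific caller which we have here": the caller being discussed is the event-channel upcall path, which only wants to atomically fetch the pending-selector word and zero it; a bit set concurrently by another CPU is either captured in the returned value or left in memory for the next upcall. A simplified, hypothetical sketch of that usage pattern (the drain_pending_selector helper and its loop body are illustrative, not verbatim kernel code):

#include <linux/bitops.h>		/* __ffs64() */
#include <xen/interface/xen.h>		/* struct vcpu_info */
#include <asm/xen/events.h>		/* xchg_xen_ulong() */

/* Illustration of why only atomicity of the read-and-clear matters here:
 * grab the whole pending-selector word and clear it in one atomic step,
 * then walk whichever bits were captured.
 */
static void drain_pending_selector(struct vcpu_info *vcpu_info)
{
	xen_ulong_t pending;

	pending = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0);
	while (pending) {
		unsigned long bit = __ffs64(pending);	/* lowest set bit */

		pending &= pending - 1;			/* clear that bit */
		/* ... scan the event-channel word selected by 'bit' ... */
	}
}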