Message ID | 49FA1AA1.3060104@gmx.de (mailing list archive)
---|---
State | Accepted |
Delegated to: | kyle mcmartin |
On Thu, Apr 30, 2009 at 11:39:45PM +0200, Helge Deller wrote:
> This "fix ldcw assembler" patch below is really an old one now, but sadly it never got applied.
>
> I just tried it again. Without this patch I always see login problems when ssh-ing into
> my parisc box. The very first time, sshd just drops the connection (Connection closed by remote host).
> With this patch I didn't face this problem again.
>
> I'm wondering whether other userspace problems suddenly go away as well,
> e.g. the uid/gid issues others are seeing:
> http://marc.info/?l=linux-parisc&m=121114269417948&w=2
>
> Kyle, please apply.

gotcha.
On Thu, Apr 30, 2009 at 5:39 PM, Helge Deller <deller@gmx.de> wrote:
> diff --git a/arch/parisc/include/asm/system.h b/arch/parisc/include/asm/system.h
> index ee80c92..d91357b 100644
> --- a/arch/parisc/include/asm/system.h
> +++ b/arch/parisc/include/asm/system.h
> @@ -168,8 +168,8 @@ static inline void set_eiem(unsigned long val)
>  /* LDCW, the only atomic read-write operation PA-RISC has. *sigh*. */
>  #define __ldcw(a) ({						\
>  	unsigned __ret;						\
> -	__asm__ __volatile__(__LDCW " 0(%1),%0"			\
> -		: "=r" (__ret) : "r" (a));			\
> +	__asm__ __volatile__(__LDCW " 0(%2),%0"			\
> +		: "=r" (__ret), "+m" (*(a)) : "r" (a));		\
>  	__ret;							\
>  })

Historical note... We clobber all of memory in userspace, like this:

~~~
#define __ldcw(a)						\
({								\
	unsigned int __ret;					\
	__asm__ __volatile__("ldcw 0(%1),%0"			\
		: "=r" (__ret) : "r" (a) : "memory");		\
	__ret;							\
})
~~~

I wonder if I should change that to match the kernel? This is currently
used in the Linuxthreads->NPTL compat code, and in the old Linuxthreads
code.

Cheers,
Carlos.
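[Editor's note: for readers weighing Carlos's question, here is a minimal side-by-side sketch of what each constraint style actually tells GCC. The macro names are invented for illustration; neither is a real glibc or kernel identifier.]

~~~
/* Userspace style: the "memory" clobber makes GCC assume the asm may
 * read or write any memory, so every value cached in a register is
 * discarded across the asm. Safe, but a big hammer. */
#define __ldcw_clobber(a)					\
({								\
	unsigned int __ret;					\
	__asm__ __volatile__("ldcw 0(%1),%0"			\
		: "=r" (__ret) : "r" (a) : "memory");		\
	__ret;							\
})

/* Kernel style after Helge's patch: the "+m" (*(a)) operand says
 * precisely that the word at *a is both read and written, forcing
 * reloads of *a without invalidating unrelated cached values. */
#define __ldcw_operand(a)					\
({								\
	unsigned int __ret;					\
	__asm__ __volatile__("ldcw 0(%2),%0"			\
		: "=r" (__ret), "+m" (*(a)) : "r" (a));		\
	__ret;							\
})
~~~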
On Fri, 01 May 2009, Carlos O'Donell wrote:
> Historical note...
>
> We clobber all of memory in userspace, like this:
> ~~~
> #define __ldcw(a)						\
> ({								\
> 	unsigned int __ret;					\
> 	__asm__ __volatile__("ldcw 0(%1),%0"			\
> 		: "=r" (__ret) : "r" (a) : "memory");		\
> 	__ret;							\
> })
> ~~~
> I wonder if I should change that to match the kernel?

The above is perfectly safe. I believe the kernel provides a memory
barrier when necessary. There's a discussion somewhere in the mail
archives.

Dave
On Fri, May 01, 2009 at 05:37:18PM -0400, John David Anglin wrote:
> On Fri, 01 May 2009, Carlos O'Donell wrote:
> > Historical note...
> >
> > We clobber all of memory in userspace, like this:
> > [...]
> > I wonder if I should change that to match the kernel?
>
> The above is perfectly safe. I believe the kernel provides a memory
> barrier when necessary. There's a discussion somewhere in the mail
> archives.

I, er, don't think we do, not for the spinlock primitives at least, as
far as I can tell...

regards, Kyle
On Fri, 2009-05-01 at 17:46 -0400, Kyle McMartin wrote:
> On Fri, May 01, 2009 at 05:37:18PM -0400, John David Anglin wrote:
> > The above is perfectly safe. I believe the kernel provides a memory
> > barrier when necessary. There's a discussion somewhere in the mail
> > archives.
>
> I, er, don't think we do, not for the spinlock primitives at least, as
> far as I can tell...

Yes we do ... look in asm/spinlock.h.

It's all the mb() statements that are scattered through our _raw_spin_
ops.

The original problem was that the spinlocks were leaky as compiler
barriers and caused infrequent but hard-to-debug issues on SMP. The
barriers are likely overkill (since we have two in each), but at least
they prevent problems.

James
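[Editor's note: for reference, a simplified sketch of the pattern James is pointing at. The helper names (__ldcw_align, cpu_relax) and the exact loop shape follow the era's arch/parisc code from memory, so treat the details as approximate.]

~~~
/* Sketch of a _raw_spin_lock-style acquire with the mb() pairs James
 * describes: one barrier on entry, one after the lock is taken. */
static inline void sketch_spin_lock(raw_spinlock_t *x)
{
	volatile unsigned int *a;

	mb();				/* barrier before touching the lock */
	a = __ldcw_align(x);		/* ldcw needs a 16-byte-aligned word */
	while (__ldcw(a) == 0)		/* ldcw returns 0 if already held */
		while (*a == 0)		/* spin on plain loads until it frees */
			cpu_relax();
	mb();				/* barrier before the critical section */
}
~~~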
On Fri, May 01, 2009 at 10:03:12PM +0000, James Bottomley wrote:
> Yes we do ... look in asm/spinlock.h.
>
> It's all the mb() statements that are scattered through our _raw_spin_
> ops.
>
> The original problem was that the spinlocks were leaky as compiler
> barriers and caused infrequent but hard-to-debug issues on SMP. The
> barriers are likely overkill (since we have two in each), but at least
> they prevent problems.

Yeah, I was looking at the lack of a barrier between the ldcw and the
test of *a == 0. I guess this would be fixed by Helge's patch.

--Kyle
On Fri, 2009-05-01 at 18:25 -0400, Kyle McMartin wrote:
> Yeah, I was looking at the lack of a barrier between the ldcw and the
> test of *a == 0. I guess this would be fixed by Helge's patch.

OK, now I'm confused. Barriers are used to inform the compiler about
interlocks it isn't aware of (like when an asm changes a variable). The
ldcw and the *a == 0 test both mention a, which is a sufficient
interlock for the compiler to get it right without any barrier.

Added to which, *a is declared volatile, which is enough of a cautionary
note to make the compiler behave in a very straightforward fashion.

James
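[Editor's note: a contrived pair of store wrappers may make James's distinction concrete; the PA-RISC stw store is used purely for illustration.]

~~~
/* The asm changes exactly *p and says so via an operand: the operand
 * itself is the interlock, so no "memory" clobber is needed. */
static inline void set_word(unsigned int *p, unsigned int v)
{
	__asm__ __volatile__("stw %1,0(%2)"
		: "+m" (*p) : "r" (v), "r" (p));
}

/* Here the asm writes through a pointer that only appears as an
 * address in a register; GCC cannot tell which memory changes, so a
 * "memory" clobber is the only safe description of the side effect. */
static inline void set_word_opaque(unsigned int *p, unsigned int v)
{
	__asm__ __volatile__("stw %0,0(%1)"
		: : "r" (v), "r" (p) : "memory");
}
~~~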
On Fri, May 01, 2009 at 05:36:14PM -0500, James Bottomley wrote:
> OK, now I'm confused. Barriers are used to inform the compiler about
> interlocks it isn't aware of (like when an asm changes a variable). The
> ldcw and the *a == 0 test both mention a, which is a sufficient
> interlock for the compiler to get it right without any barrier.
>
> Added to which, *a is declared volatile, which is enough of a cautionary
> note to make the compiler behave in a very straightforward fashion.

I guess... I don't understand the gcc clobber semantics on asm()
anymore; I'm just thinking of that stupid bug with ip_fast_csum we saw
on parisc a few years ago... Don't mind me.

--Kyle
~~~
diff --git a/arch/parisc/include/asm/system.h b/arch/parisc/include/asm/system.h
index ee80c92..d91357b 100644
--- a/arch/parisc/include/asm/system.h
+++ b/arch/parisc/include/asm/system.h
@@ -168,8 +168,8 @@ static inline void set_eiem(unsigned long val)
 /* LDCW, the only atomic read-write operation PA-RISC has. *sigh*. */
 #define __ldcw(a) ({						\
 	unsigned __ret;						\
-	__asm__ __volatile__(__LDCW " 0(%1),%0"			\
-		: "=r" (__ret) : "r" (a));			\
+	__asm__ __volatile__(__LDCW " 0(%2),%0"			\
+		: "=r" (__ret), "+m" (*(a)) : "r" (a));		\
 	__ret;							\
 })
~~~
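[Editor's note: for anyone wanting to poke at the patched macro outside the kernel, a hedged standalone sketch. PA-RISC only; the literal ldcw mnemonic stands in for the kernel's __LDCW, the lock variable is invented, and the 16-byte alignment the kernel gets from __ldcw_align() is requested with an attribute.]

~~~
#include <stdio.h>

/* Patched kernel-style macro from Helge's diff. The "+m" (*(a))
 * output tells GCC that the asm writes the lock word, so any cached
 * copy of *a is dead after every __ldcw() -- the property the thread
 * is discussing. */
#define __ldcw(a)						\
({								\
	unsigned int __ret;					\
	__asm__ __volatile__("ldcw 0(%2),%0"			\
		: "=r" (__ret), "+m" (*(a)) : "r" (a));		\
	__ret;							\
})

/* ldcw needs a 16-byte-aligned word. Convention: 1 = free, 0 = held. */
static unsigned int lock __attribute__((aligned(16))) = 1;

int main(void)
{
	printf("first:  %u\n", __ldcw(&lock));	/* 1: we took the lock */
	printf("second: %u\n", __ldcw(&lock));	/* 0: already held */
	return 0;
}
~~~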