From patchwork Wed Jun 10 16:15:42 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Aoi Shinkai
X-Patchwork-Id: 29321
Message-ID: <4A2FDC2E.3020405@gmail.com>
Date: Thu, 11 Jun 2009 01:15:42 +0900
From: Aoi Shinkai
To: Matt Fleming
CC: linux-sh@vger.kernel.org, Paul Mundt
Subject: Re: [PATCH] sh: Fix sh4a llsc operation
References: <4A2FD353.5080201@gmail.com> <20090610155452.GA5554@console-pimps.org>
In-Reply-To: <20090610155452.GA5554@console-pimps.org>
X-Mailing-List: linux-sh@vger.kernel.org

Hi Matt. Thank you for your comment.

> That looks like a pasting error to me, "lock" should be "x".
>

Oh! sorry.

Signed-off-by: Aoi Shinkai
---

diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index 4b00b78..18cca1f 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -104,4 +104,29 @@ static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
 	: "t");
 }
 
+#define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
+
+/**
+ * atomic_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
+	int c, old;
+	c = atomic_read(v);
+	for (;;) {
+		if (unlikely(c == (u)))
+			break;
+		old = atomic_cmpxchg((v), c, c + (a));
+		if (likely(old == c))
+			break;
+		c = old;
+	}
+	return c != (u);
+}
 #endif /* __ASM_SH_ATOMIC_LLSC_H */
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 6327ffb..978b58e 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -45,7 +45,7 @@
 #define atomic_inc(v)  atomic_add(1,(v))
 #define atomic_dec(v)  atomic_sub(1,(v))
 
-#ifndef CONFIG_GUSA_RB
+#if !defined(CONFIG_GUSA_RB) && !defined(CONFIG_CPU_SH4A)
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
@@ -73,7 +73,7 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 	return ret != u;
 }
 
-#endif
+#endif /* !CONFIG_GUSA_RB && !CONFIG_CPU_SH4A */
 
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
diff --git a/arch/sh/include/asm/cmpxchg-llsc.h b/arch/sh/include/asm/cmpxchg-llsc.h
index 0fac3da..4713666 100644
--- a/arch/sh/include/asm/cmpxchg-llsc.h
+++ b/arch/sh/include/asm/cmpxchg-llsc.h
@@ -55,7 +55,7 @@ __cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
 		"mov	%0, %1	\n\t"
 		"cmp/eq	%1, %3	\n\t"
 		"bf	2f	\n\t"
-		"mov	%3, %0	\n\t"
+		"mov	%4, %0	\n\t"
 		"2:		\n\t"
 		"movco.l	%0, @%2	\n\t"
 		"bf	1b	\n\t"
diff --git a/arch/sh/include/asm/spinlock.h b/arch/sh/include/asm/spinlock.h
index 6028356..69f4dc7 100644
--- a/arch/sh/include/asm/spinlock.h
+++ b/arch/sh/include/asm/spinlock.h
@@ -26,7 +26,7 @@
 #define __raw_spin_is_locked(x)		((x)->lock <= 0)
 #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
 #define __raw_spin_unlock_wait(x) \
-	do { cpu_relax(); } while ((x)->lock)
+	do { while (__raw_spin_is_locked(x)) cpu_relax(); } while (0)
 
 /*
  * Simple spin lock operations.  There are two variants, one clears IRQ's
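
For anyone reading along: in the cmpxchg-llsc.h hunk, operand %3 is `old` and
%4 is `new` (per the `__cmpxchg_u32(m, old, new)` argument order visible in the
hunk header), so the pre-fix code wrote `old` back via movco.l instead of
`new`, making the compare-and-swap a no-op. The `atomic_add_unless()` added for
SH4A is the standard cmpxchg retry loop. Below is a minimal user-space sketch
of that same loop, redone with C11 atomics in place of the kernel's
`atomic_cmpxchg()`; the `add_unless` name and the refcount scenario are
illustrative only, not part of the patch.

	/*
	 * Sketch (not from the patch): the cmpxchg retry loop that the
	 * new SH4A atomic_add_unless() implements, using C11 atomics so
	 * it can run in user space.  Build with: cc -std=c11 sketch.c
	 */
	#include <stdatomic.h>
	#include <stdio.h>

	/* Add a to *v unless *v == u; return nonzero if the add happened. */
	static int add_unless(atomic_int *v, int a, int u)
	{
		int c = atomic_load(v);

		for (;;) {
			if (c == u)
				break;	/* forbidden value: leave *v alone */
			/*
			 * CAS: succeeds only if *v still equals c; on failure
			 * c is reloaded with the current value and we retry,
			 * as the kernel loop does with old = atomic_cmpxchg().
			 */
			if (atomic_compare_exchange_weak(v, &c, c + a))
				break;
		}
		return c != u;
	}

	int main(void)
	{
		atomic_int refcount = 1;

		/* atomic_inc_not_zero(): take a reference only if one exists */
		if (add_unless(&refcount, 1, 0))
			printf("got ref, refcount now %d\n",
			       atomic_load(&refcount));
		return 0;
	}

As in the kernel version, the function returns nonzero exactly when the add
took place, which is the property atomic_inc_not_zero() relies on.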