From patchwork Thu Nov 6 12:23:59 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 5241541
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com,
    Steve Capper <steve.capper@linaro.org>, ard.biesheuvel@linaro.org
Subject: [PATCH V2] arm64: percpu: Implement this_cpu operations
Date: Thu, 6 Nov 2014 12:23:59 +0000
Message-Id: <1415276639-7275-1-git-send-email-steve.capper@linaro.org>
In-Reply-To: <20141106115208.GA6258@linaro.org>
References: <20141106115208.GA6258@linaro.org>

The generic this_cpu operations disable interrupts to ensure that the
requested operation is protected from pre-emption. For arm64, this is
overkill and can hurt throughput and latency.

This patch provides arm64 specific implementations for the this_cpu
operations. Rather than disable interrupts, we use the exclusive
monitor or atomic operations as appropriate.

The following operations are implemented: add, add_return, and, or,
read, write, xchg. We also wire up a cmpxchg implementation from
cmpxchg.h.

Testing was performed using the percpu_test module and hackbench on a
Juno board running 3.18-rc3.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
Changed in V2: corrected address constraint for __percpu_read and
__percpu_write.
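As context for review, here is a sketch of the sort of caller these
operations serve. It is illustrative only: DEFINE_PER_CPU and the
this_cpu_* helpers are the real generic kernel API, but the variable
and function names are made up and are not taken from this patch or
from percpu_test.

/* Illustrative user of the generic this_cpu API that this patch
 * accelerates on arm64; sample_hits and record_sample are
 * hypothetical names. */
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, sample_hits);

static void record_sample(void)
{
	/* With this patch, this_cpu_add() becomes an ldxr/stxr
	 * exclusive-monitor loop instead of the generic fallback,
	 * which brackets a plain add with local_irq_save()/restore(). */
	this_cpu_add(sample_hits, 1);

	if (this_cpu_read(sample_hits) >= 1024)
		this_cpu_write(sample_hits, 0);
}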
---
 arch/arm64/include/asm/cmpxchg.h |   6 +-
 arch/arm64/include/asm/percpu.h  | 231 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 235 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index 3e02245..3e51f49 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -237,8 +237,10 @@ static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
 	__ret; \
 })
 
-#define this_cpu_cmpxchg_8(ptr, o, n) \
-	cmpxchg(raw_cpu_ptr(&(ptr)), o, n);
+#define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg(raw_cpu_ptr(&(ptr)), o, n)
 
 #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
 	cmpxchg_double(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \
diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 5279e57..30b84f2 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -44,6 +44,237 @@ static inline unsigned long __my_cpu_offset(void)
 
 #endif /* CONFIG_SMP */
 
+#define PERCPU_OP(op, asm_op)					\
+static inline unsigned long __percpu_##op(void *ptr,		\
+			unsigned long val, int size)		\
+{								\
+	unsigned long loop, ret;				\
+								\
+	switch (size) {						\
+	case 1:							\
+		do {						\
+			asm ("//__per_cpu_" #op "_1\n"		\
+			"ldxrb %w[ret], %[ptr]\n"		\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"	\
+			"stxrb %w[loop], %w[ret], %[ptr]\n"	\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u8 *)ptr)		\
+			: [val] "Ir" (val));			\
+		} while (loop);					\
+		break;						\
+	case 2:							\
+		do {						\
+			asm ("//__per_cpu_" #op "_2\n"		\
+			"ldxrh %w[ret], %[ptr]\n"		\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"	\
+			"stxrh %w[loop], %w[ret], %[ptr]\n"	\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u16 *)ptr)		\
+			: [val] "Ir" (val));			\
+		} while (loop);					\
+		break;						\
+	case 4:							\
+		do {						\
+			asm ("//__per_cpu_" #op "_4\n"		\
+			"ldxr %w[ret], %[ptr]\n"		\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"	\
+			"stxr %w[loop], %w[ret], %[ptr]\n"	\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u32 *)ptr)		\
+			: [val] "Ir" (val));			\
+		} while (loop);					\
+		break;						\
+	case 8:							\
+		do {						\
+			asm ("//__per_cpu_" #op "_8\n"		\
+			"ldxr %[ret], %[ptr]\n"			\
+			#asm_op " %[ret], %[ret], %[val]\n"	\
+			"stxr %w[loop], %[ret], %[ptr]\n"	\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u64 *)ptr)		\
+			: [val] "Ir" (val));			\
+		} while (loop);					\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+								\
+	return ret;						\
+}
+
+PERCPU_OP(add, add)
+PERCPU_OP(and, and)
+PERCPU_OP(or, orr)
+#undef PERCPU_OP
+
+static inline unsigned long __percpu_read(void *ptr, int size)
+{
+	unsigned long ret;
+
+	switch (size) {
+	case 1:
+		asm ("//__per_cpu_read_1\n"
+		"ldrb %w[ret], %[ptr]\n"
+		: [ret] "=&r"(ret) : [ptr] "Q"(*(u8 *)ptr));
+		break;
+	case 2:
+		asm ("//__per_cpu_read_2\n"
+		"ldrh %w[ret], %[ptr]\n"
+		: [ret] "=&r"(ret) : [ptr] "Q"(*(u16 *)ptr));
+		break;
+	case 4:
+		asm ("//__per_cpu_read_4\n"
+		"ldr %w[ret], %[ptr]\n"
+		: [ret] "=&r"(ret) : [ptr] "Q"(*(u32 *)ptr));
+		break;
+	case 8:
+		asm ("//__per_cpu_read_8\n"
+		"ldr %[ret], %[ptr]\n"
+		: [ret] "=&r"(ret) : [ptr] "Q"(*(u64 *)ptr));
+		break;
+	default:
+		BUILD_BUG();
+	}
+
+	return ret;
+}
+
+static inline void __percpu_write(void *ptr, unsigned long val, int size)
+{
+	switch (size) {
+	case 1:
+		asm ("//__per_cpu_write_1\n"
+		"strb %w[val], %[ptr]\n"
+		: [ptr] "=Q"(*(u8 *)ptr) : [val] "r"(val));
+		break;
+	case 2:
+		asm ("//__per_cpu_write_2\n"
+		"strh %w[val], %[ptr]\n"
+		: [ptr] "=Q"(*(u16 *)ptr) : [val] "r"(val));
+		break;
+	case 4:
+		asm ("//__per_cpu_write_4\n"
+		"str %w[val], %[ptr]\n"
+		: [ptr] "=Q"(*(u32 *)ptr) : [val] "r"(val));
+		break;
+	case 8:
+		asm ("//__per_cpu_write_8\n"
+		"str %[val], %[ptr]\n"
+		: [ptr] "=Q"(*(u64 *)ptr) : [val] "r"(val));
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
+						int size)
+{
+	unsigned long ret, loop;
+
+	switch (size) {
+	case 1:
+		do {
+			asm ("//__percpu_xchg_1\n"
+			"ldxrb %w[ret], %[ptr]\n"
+			"stxrb %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u8 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 2:
+		do {
+			asm ("//__percpu_xchg_2\n"
+			"ldxrh %w[ret], %[ptr]\n"
+			"stxrh %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u16 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 4:
+		do {
+			asm ("//__percpu_xchg_4\n"
+			"ldxr %w[ret], %[ptr]\n"
+			"stxr %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u32 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 8:
+		do {
+			asm ("//__percpu_xchg_8\n"
+			"ldxr %[ret], %[ptr]\n"
+			"stxr %w[loop], %[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u64 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	default:
+		BUILD_BUG();
+	}
+
+	return ret;
+}
+
+#define _percpu_add(pcp, val) \
+	__percpu_add(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_add_return(pcp, val) (typeof(pcp)) (_percpu_add(pcp, val))
+
+#define _percpu_and(pcp, val) \
+	__percpu_and(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_or(pcp, val) \
+	__percpu_or(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_read(pcp) (typeof(pcp)) \
+	(__percpu_read(raw_cpu_ptr(&(pcp)), sizeof(pcp)))
+
+#define _percpu_write(pcp, val) \
+	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp))
+
+#define _percpu_xchg(pcp, val) (typeof(pcp)) \
+	(__percpu_xchg(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp)))
+
+#define this_cpu_add_1(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_2(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_4(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_8(pcp, val) _percpu_add(pcp, val)
+
+#define this_cpu_add_return_1(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_2(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_4(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_8(pcp, val) _percpu_add_return(pcp, val)
+
+#define this_cpu_and_1(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_2(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_4(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_8(pcp, val) _percpu_and(pcp, val)
+
+#define this_cpu_or_1(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_2(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_4(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_8(pcp, val) _percpu_or(pcp, val)
+
+#define this_cpu_read_1(pcp) _percpu_read(pcp)
+#define this_cpu_read_2(pcp) _percpu_read(pcp)
+#define this_cpu_read_4(pcp) _percpu_read(pcp)
+#define this_cpu_read_8(pcp) _percpu_read(pcp)
+
+#define this_cpu_write_1(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_2(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_4(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_8(pcp, val) _percpu_write(pcp, val)
+
+#define this_cpu_xchg_1(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_2(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_4(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_8(pcp, val) _percpu_xchg(pcp, val)
+
 #include <asm-generic/percpu.h>
 
 #endif /* __ASM_PERCPU_H */
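
Footnote for anyone wanting to experiment with the retry pattern
outside the kernel: the loops above are plain load-exclusive /
store-exclusive sequences, and a rough user-space analogue can be
built from GCC's __atomic builtins, since a weak compare-exchange
loop typically compiles down to an ldxr/stxr pair on AArch64. A
minimal sketch (mine, not part of the patch) for the 8-byte
add_return case:

/* User-space analogue of the 8-byte __percpu_add loop: load, modify,
 * then retry until the store-exclusive (here: weak CAS) succeeds. */
#include <stdint.h>
#include <stdio.h>

static uint64_t add_return(uint64_t *ptr, uint64_t val)
{
	uint64_t old, newval;

	do {
		old = __atomic_load_n(ptr, __ATOMIC_RELAXED);
		newval = old + val;
	} while (!__atomic_compare_exchange_n(ptr, &old, newval,
				1 /* weak */,
				__ATOMIC_RELAXED, __ATOMIC_RELAXED));
	return newval;
}

int main(void)
{
	uint64_t counter = 0;

	printf("%llu\n", (unsigned long long)add_return(&counter, 5));
	return 0;
}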