From patchwork Tue Jun  7 14:42:43 2016
X-Patchwork-Submitter: Steve Wise
X-Patchwork-Id: 9161577
Message-Id: <4891ced7a08a643b9daf697a45e198f9284674d1.1465310573.git.root@r9.asicdesigners.com>
From: Steve Wise
Date: Tue, 7 Jun 2016 07:42:43 -0700
Subject: [PATCH 1/2] libibverbs: add ARM64 memory barrier macros
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org

The default generic memory barriers are not correct for ARM64, which
results in data corruption.  The correct macros are based on the ARM
Compiler Toolchain Assembler Reference documentation.

Signed-off-by: Steve Wise
---
 include/infiniband/arch.h | 8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/include/infiniband/arch.h b/include/infiniband/arch.h
index bc1738a..c31dd0a 100644
--- a/include/infiniband/arch.h
+++ b/include/infiniband/arch.h
@@ -122,6 +122,14 @@ static inline uint64_t ntohll(uint64_t x) { return x; }
 #define wmb() mb()       /* for s390x */
 #define wc_wmb() wmb()   /* for s390x */
 
+#elif defined(__aarch64__)
+
+/* Perhaps dmb would be sufficient? Let us be conservative for now. */
+#define mb()  { asm volatile("dsb sy" ::: "memory"); }
+#define rmb() { asm volatile("dsb ld" ::: "memory"); }
+#define wmb() { asm volatile("dsb st" ::: "memory"); }
+#define wc_wmb() wmb()
+
 #else
 
 #warning No architecture specific defines found.  Using generic implementation.
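
[Editor's illustration, not part of the patch: a minimal, hypothetical sketch of
the descriptor/doorbell sequence these macros are meant to order.  The structure
and function names (struct hypo_qp, hypo_post_send) are made up for this example
and are not libibverbs or provider APIs; only wmb() comes from the patched
include/infiniband/arch.h.]

/* Sketch of why wmb() must be a real store barrier on ARM64, per the
 * commit message above.  All names here are hypothetical. */
#include <stdint.h>
#include <infiniband/arch.h>   /* wmb() as defined in the patched header */

struct hypo_qp {
	uint64_t          *wqe;       /* work queue entry in host memory */
	volatile uint32_t *doorbell;  /* device doorbell register (MMIO) */
};

static void hypo_post_send(struct hypo_qp *qp, uint64_t desc)
{
	/* 1. Write the descriptor the hardware will fetch. */
	qp->wqe[0] = desc;

	/* 2. Order the descriptor store before the doorbell store.
	 *    On ARM64, "dsb st" ensures the store in step 1 completes
	 *    before the store in step 3; the generic fallback described
	 *    in the commit message does not guarantee this ordering. */
	wmb();

	/* 3. Ring the doorbell; the device may now read the descriptor. */
	*qp->doorbell = 1;
}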