From patchwork Fri May 20 18:56:12 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steve Wise
X-Patchwork-Id: 9130033
From: Steve Wise
Date: Fri, 20 May 2016 11:56:12 -0700
Subject: [PATCH 1/2] libibverbs: add ARM64 memory barrier macros
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org
Message-Id: <20160520200055.7DB95E0B9D@smtp.ogc.us>
X-Mailing-List: linux-rdma@vger.kernel.org

The default generic barriers are not correct for ARM64, which results in
data corruption.  The correct macros are based on the ARM Compiler
Toolchain Assembler Reference documentation.

Signed-off-by: Steve Wise
---
 include/infiniband/arch.h | 8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/include/infiniband/arch.h b/include/infiniband/arch.h
index bc1738a..3808bb2 100644
--- a/include/infiniband/arch.h
+++ b/include/infiniband/arch.h
@@ -122,6 +122,14 @@ static inline uint64_t ntohll(uint64_t x) { return x; }
 #define wmb() mb()     /* for s390x */
 #define wc_wmb() wmb() /* for s390x */
+#elif defined(__aarch64__)
+
+/* Perhaps dmb would be sufficient? Let us be conservative for now. */
+#define mb() asm volatile("dsb sy" ::: "memory")
+#define rmb() asm volatile("dsb ld" ::: "memory")
+#define wmb() asm volatile("dsb st" ::: "memory")
+#define wc_wmb() wmb()
+
 #else
 #warning No architecture specific defines found. Using generic implementation.
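
For reviewers who want the usage pattern in front of them, here is a
minimal, illustrative sketch (not part of the patch; the fake_qp and
fake_post_send names and fields are invented, only wmb() comes from
<infiniband/arch.h>) of how a verbs provider relies on wmb() when posting
a work request and ringing a doorbell.  Without an ARM64-specific wmb(),
the CPU may reorder the WQE stores after the MMIO doorbell write, letting
the HCA DMA a stale WQE -- the data corruption the changelog refers to.

    /*
     * Illustrative only -- not from this patch.  The structure and
     * function names below are hypothetical; only wmb() is real.
     */
    #include <stdint.h>
    #include <infiniband/arch.h>

    struct fake_qp {
            uint64_t          *wqe;    /* next WQE slot in host memory    */
            volatile uint32_t *db;     /* doorbell register (mapped MMIO) */
            uint32_t           db_val; /* value that rings the doorbell   */
    };

    static void fake_post_send(struct fake_qp *qp, uint64_t wqe_word)
    {
            *qp->wqe = wqe_word;   /* 1. build the WQE in host memory    */
            wmb();                 /* 2. order the WQE stores before the
                                    *    doorbell write                  */
            *qp->db = qp->db_val;  /* 3. ring the doorbell so the device
                                    *    fetches the WQE just written    */
    }

As for "dsb" versus the "dmb" mentioned in the code comment: per the ARM
documentation, dsb additionally waits for the prior accesses to complete
rather than only ordering them, so it is the more conservative choice.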