From patchwork Sat Apr 16 22:23:32 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sinan Kaya
X-Patchwork-Id: 8862131
From: Sinan Kaya
To: linux-rdma@vger.kernel.org, timur@codeaurora.org, cov@codeaurora.org
Cc: Sinan Kaya, Yishai Hadas, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2] net: ethernet: mellanox: correct page conversion
Date: Sat, 16 Apr 2016 18:23:32 -0400
Message-Id: <1460845412-13120-1-git-send-email-okaya@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1
List-ID: linux-rdma@vger.kernel.org

The current code assumes that the address returned by dma_alloc_coherent()
is a logical address. This is not true on ARM/ARM64 systems.

This patch replaces dma_alloc_coherent() with the dma_map_page() API. The
returned pages can later be virtually mapped from the CPU side with the
vmap() API.

Signed-off-by: Sinan Kaya
---
 drivers/net/ethernet/mellanox/mlx4/alloc.c | 37 ++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c
index 0c51c69..22a7ae7 100644
--- a/drivers/net/ethernet/mellanox/mlx4/alloc.c
+++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c
@@ -618,13 +618,26 @@ int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
 			return -ENOMEM;
 
 		for (i = 0; i < buf->nbufs; ++i) {
-			buf->page_list[i].buf =
-				dma_alloc_coherent(&dev->persist->pdev->dev,
-						   PAGE_SIZE,
-						   &t, gfp);
-			if (!buf->page_list[i].buf)
+			struct page *page;
+
+			page = alloc_page(GFP_KERNEL);
+			if (!page)
 				goto err_free;
 
+			t = dma_map_page(&dev->persist->pdev->dev, page, 0,
+					 PAGE_SIZE, DMA_BIDIRECTIONAL);
+
+			if (dma_mapping_error(&dev->persist->pdev->dev, t)) {
+				__free_page(page);
+				goto err_free;
+			}
+
+			buf->page_list[i].buf = page_address(page);
+			if (!buf->page_list[i].buf) {
+				__free_page(page);
+				goto err_free;
+			}
+
 			buf->page_list[i].map = t;
 
 			memset(buf->page_list[i].buf, 0, PAGE_SIZE);
@@ -666,11 +679,15 @@ void mlx4_buf_free(struct mlx4_dev *dev, int size, struct mlx4_buf *buf)
 			vunmap(buf->direct.buf);
 
 		for (i = 0; i < buf->nbufs; ++i)
-			if (buf->page_list[i].buf)
-				dma_free_coherent(&dev->persist->pdev->dev,
-						  PAGE_SIZE,
-						  buf->page_list[i].buf,
-						  buf->page_list[i].map);
+			if (buf->page_list[i].buf) {
+				struct page *page;
+
+				page = virt_to_page(buf->page_list[i].buf);
+				dma_unmap_page(&dev->persist->pdev->dev,
+					       buf->page_list[i].map,
+					       PAGE_SIZE, DMA_BIDIRECTIONAL);
+				__free_page(page);
+			}
 		kfree(buf->page_list);
 	}
 }