From patchwork Mon Feb 11 15:25:00 2019
X-Patchwork-Submitter: Shiraz Saleem
X-Patchwork-Id: 10806119
From: Shiraz Saleem
To: dledford@redhat.com, jgg@ziepe.ca
Cc: linux-rdma@vger.kernel.org, "Shiraz, Saleem", Lijun Ou, "Wei Hu(Xavier)"
Subject: [rdma-next 04/12] RDMA/hns: Use for_each_sg_dma_page iterator on umem SGL
Date: Mon, 11 Feb 2019 09:25:00 -0600
Message-Id: <20190211152508.25040-5-shiraz.saleem@intel.com>
X-Mailer: git-send-email 2.8.3
In-Reply-To: <20190211152508.25040-1-shiraz.saleem@intel.com>
References: <20190211152508.25040-1-shiraz.saleem@intel.com>

From: "Shiraz, Saleem"

Use the for_each_sg_dma_page iterator variant to walk the umem
DMA-mapped SGL and get the page DMA address. This avoids the extra
loop over the pages of each SGE that is needed when the for_each_sg
iterator is used.

Additionally, purge umem->page_shift usage in the driver, as it is only
relevant for ODP MRs. Use the system page size and shift instead.
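For reference, the conversion this series applies can be reduced to the
sketch below. The helper walk_umem_pages() is hypothetical and only
illustrates the iterator usage: for_each_sg_dma_page() yields one
DMA-mapped, PAGE_SIZE page per iteration, so the extra inner loop over
each SGE's sg_dma_len() that for_each_sg() required disappears.

/*
 * Illustrative sketch only, not part of this patch. walk_umem_pages()
 * is a hypothetical helper; for_each_sg_dma_page() and
 * sg_page_iter_dma_address() are the scatterlist page iterator
 * helpers this series converts the driver to.
 */
#include <linux/scatterlist.h>
#include <rdma/ib_umem.h>

static void walk_umem_pages(struct ib_umem *umem, u64 *pages)
{
	struct sg_dma_page_iter sg_iter;
	int i = 0;

	/* One iteration per DMA-mapped PAGE_SIZE page, independent of SGE length */
	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0)
		pages[i++] = sg_page_iter_dma_address(&sg_iter);
}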
Cc: Lijun Ou
Cc: "Wei Hu(Xavier)"
Signed-off-by: Shiraz, Saleem
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |  7 +--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 25 ++++-----
 drivers/infiniband/hw/hns/hns_roce_mr.c    | 88 +++++++++++++-----------------
 drivers/infiniband/hw/hns/hns_roce_qp.c    |  5 +-
 4 files changed, 54 insertions(+), 71 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index fa08c22..e700333 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -1866,9 +1866,8 @@ static int hns_roce_v1_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 				  unsigned long mtpt_idx)
 {
 	struct hns_roce_v1_mpt_entry *mpt_entry;
-	struct scatterlist *sg;
+	struct sg_dma_page_iter sg_iter;
 	u64 *pages;
-	int entry;
 	int i;
 
 	/* MPT filled into mailbox buf */
@@ -1923,8 +1922,8 @@ static int hns_roce_v1_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 		return -ENOMEM;
 
 	i = 0;
-	for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
-		pages[i] = ((u64)sg_dma_address(sg)) >> 12;
+	for_each_sg_dma_page(mr->umem->sg_head.sgl, &sg_iter, mr->umem->nmap, 0) {
+		pages[i] = ((u64)sg_page_iter_dma_address(&sg_iter)) >> 12;
 
 		/* Directly record to MTPT table firstly 7 entry */
 		if (i >= HNS_ROCE_MAX_INNER_MTPT_NUM)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 19fefff..c648ee8 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2084,12 +2084,10 @@ static int hns_roce_v2_set_mac(struct hns_roce_dev *hr_dev, u8 phy_port,
 static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry,
 			struct hns_roce_mr *mr)
 {
-	struct scatterlist *sg;
+	struct sg_dma_page_iter sg_iter;
 	u64 page_addr;
 	u64 *pages;
-	int i, j;
-	int len;
-	int entry;
+	int i;
 
 	mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
 	mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
@@ -2102,17 +2100,14 @@ static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry,
 		return -ENOMEM;
 
 	i = 0;
-	for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
-		for (j = 0; j < len; ++j) {
-			page_addr = sg_dma_address(sg) +
-				    (j << mr->umem->page_shift);
-			pages[i] = page_addr >> 6;
-			/* Record the first 2 entry directly to MTPT table */
-			if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
-				goto found;
-			i++;
-		}
+	for_each_sg_dma_page(mr->umem->sg_head.sgl, &sg_iter, mr->umem->nmap, 0) {
+		page_addr = sg_page_iter_dma_address(&sg_iter);
+		pages[i] = page_addr >> 6;
+
+		/* Record the first 2 entry directly to MTPT table */
+		if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
+			goto found;
+		i++;
 	}
 found:
 	mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index da4fffe..c89948d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -976,12 +976,11 @@ int hns_roce_ib_umem_write_mtt(struct hns_roce_dev *hr_dev,
 			       struct hns_roce_mtt *mtt, struct ib_umem *umem)
 {
 	struct device *dev = hr_dev->dev;
-	struct scatterlist *sg;
+	struct sg_dma_page_iter sg_iter;
 	unsigned int order;
-	int i, k, entry;
 	int npage = 0;
 	int ret = 0;
-	int len;
+	int i;
 	u64 page_addr;
 	u64 *pages;
 	u32 bt_page_size;
@@ -1014,29 +1013,25 @@ int hns_roce_ib_umem_write_mtt(struct hns_roce_dev *hr_dev,
 
 	i = n = 0;
 
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
-		for (k = 0; k < len; ++k) {
-			page_addr =
-				sg_dma_address(sg) + (k << umem->page_shift);
-			if (!(npage % (1 << (mtt->page_shift - PAGE_SHIFT)))) {
-				if (page_addr & ((1 << mtt->page_shift) - 1)) {
-					dev_err(dev, "page_addr 0x%llx is not page_shift %d alignment!\n",
-						page_addr, mtt->page_shift);
-					ret = -EINVAL;
-					goto out;
-				}
-				pages[i++] = page_addr;
-			}
-			npage++;
-			if (i == bt_page_size / sizeof(u64)) {
-				ret = hns_roce_write_mtt(hr_dev, mtt, n, i,
-							 pages);
-				if (ret)
-					goto out;
-				n += i;
-				i = 0;
+	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
+		page_addr = sg_page_iter_dma_address(&sg_iter);
+		if (!(npage % (1 << (mtt->page_shift - PAGE_SHIFT)))) {
+			if (page_addr & ((1 << mtt->page_shift) - 1)) {
+				dev_err(dev, "page_addr 0x%llx is not page_shift %d alignment!\n",
+					page_addr, mtt->page_shift);
+				ret = -EINVAL;
+				goto out;
 			}
+			pages[i++] = page_addr;
+		}
+		npage++;
+		if (i == bt_page_size / sizeof(u64)) {
+			ret = hns_roce_write_mtt(hr_dev, mtt, n, i,
+						 pages);
+			if (ret)
+				goto out;
+			n += i;
+			i = 0;
 		}
 	}
@@ -1052,10 +1047,8 @@ static int hns_roce_ib_umem_write_mr(struct hns_roce_dev *hr_dev,
 				     struct hns_roce_mr *mr,
 				     struct ib_umem *umem)
 {
-	struct scatterlist *sg;
-	int i = 0, j = 0, k;
-	int entry;
-	int len;
+	struct sg_dma_page_iter sg_iter;
+	int i = 0, j = 0;
 	u64 page_addr;
 	u32 pbl_bt_sz;
 
@@ -1063,27 +1056,22 @@ static int hns_roce_ib_umem_write_mr(struct hns_roce_dev *hr_dev,
 		return 0;
 
 	pbl_bt_sz = 1 << (hr_dev->caps.pbl_ba_pg_sz + PAGE_SHIFT);
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
-		for (k = 0; k < len; ++k) {
-			page_addr = sg_dma_address(sg) +
-				    (k << umem->page_shift);
-
-			if (!hr_dev->caps.pbl_hop_num) {
-				mr->pbl_buf[i++] = page_addr >> 12;
-			} else if (hr_dev->caps.pbl_hop_num == 1) {
-				mr->pbl_buf[i++] = page_addr;
-			} else {
-				if (hr_dev->caps.pbl_hop_num == 2)
-					mr->pbl_bt_l1[i][j] = page_addr;
-				else if (hr_dev->caps.pbl_hop_num == 3)
-					mr->pbl_bt_l2[i][j] = page_addr;
-
-				j++;
-				if (j >= (pbl_bt_sz / 8)) {
-					i++;
-					j = 0;
-				}
+	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
+		page_addr = sg_page_iter_dma_address(&sg_iter);
+		if (!hr_dev->caps.pbl_hop_num) {
+			mr->pbl_buf[i++] = page_addr >> 12;
+		} else if (hr_dev->caps.pbl_hop_num == 1) {
+			mr->pbl_buf[i++] = page_addr;
+		} else {
+			if (hr_dev->caps.pbl_hop_num == 2)
+				mr->pbl_bt_l1[i][j] = page_addr;
+			else if (hr_dev->caps.pbl_hop_num == 3)
+				mr->pbl_bt_l2[i][j] = page_addr;
+
+			j++;
+			if (j >= (pbl_bt_sz / 8)) {
+				i++;
+				j = 0;
 			}
 		}
 	}
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 73066bf..4fce2a9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -640,18 +640,19 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 		}
 
 		hr_qp->mtt.mtt_type = MTT_TYPE_WQE;
+		page_shift = PAGE_SHIFT;
 		if (hr_dev->caps.mtt_buf_pg_sz) {
 			npages = (ib_umem_page_count(hr_qp->umem) +
 				  (1 << hr_dev->caps.mtt_buf_pg_sz) - 1) /
 				  (1 << hr_dev->caps.mtt_buf_pg_sz);
-			page_shift = PAGE_SHIFT + hr_dev->caps.mtt_buf_pg_sz;
+			page_shift += hr_dev->caps.mtt_buf_pg_sz;
 			ret = hns_roce_mtt_init(hr_dev, npages,
 						page_shift,
 						&hr_qp->mtt);
 		} else {
 			ret = hns_roce_mtt_init(hr_dev,
 						ib_umem_page_count(hr_qp->umem),
-						hr_qp->umem->page_shift,
+						page_shift,
 						&hr_qp->mtt);
 		}
 		if (ret) {
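The hns_roce_qp.c hunk above is the page_shift half of the change:
rather than reading hr_qp->umem->page_shift, the WQE MTT page shift is
now always derived from the system PAGE_SHIFT, widened by the device's
mtt_buf_pg_sz capability when that is set. A condensed sketch of that
computation (the helper name is hypothetical, used only for
illustration):

/*
 * Illustrative sketch, not part of the patch: compute the MTT page
 * shift from the system page shift plus the optional device
 * mtt_buf_pg_sz capability, instead of umem->page_shift.
 */
static u32 wqe_mtt_page_shift(u32 mtt_buf_pg_sz)
{
	u32 page_shift = PAGE_SHIFT;

	/* mtt_buf_pg_sz widens the MTT page size beyond PAGE_SIZE */
	if (mtt_buf_pg_sz)
		page_shift += mtt_buf_pg_sz;

	return page_shift;
}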