From patchwork Mon Dec 24 22:32:26 2018
X-Patchwork-Submitter: Shiraz Saleem
X-Patchwork-Id: 10742483
From: Shiraz Saleem
To: dledford@redhat.com, jgg@ziepe.ca
Cc: linux-rdma@vger.kernel.org, Shiraz Saleem
Subject: [PATCH rdma-next 5/6] RDMA/bnxt_re: Use umem APIs to retrieve optimal HW address
Date: Mon, 24 Dec 2018 16:32:26 -0600
Message-Id: <20181224223227.18016-6-shiraz.saleem@intel.com>
In-Reply-To: <20181224223227.18016-1-shiraz.saleem@intel.com>
References: <20181224223227.18016-1-shiraz.saleem@intel.com>
List-ID: linux-rdma@vger.kernel.org

Call the core helpers to retrieve the optimal HW-aligned address to use
for the MR, within a supported bnxt_re page size.

Remove the check of the umem->hugetlb flag, as it is no longer required.
The core helpers return the 2M-aligned address if the MR is backed by 2M
huge pages.

Signed-off-by: Shiraz Saleem
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index fa0cb7c..6692435 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -3535,17 +3535,13 @@ static int fill_umem_pbl_tbl(struct ib_umem *umem, u64 *pbl_tbl_orig,
 			    int page_shift)
 {
 	u64 *pbl_tbl = pbl_tbl_orig;
-	u64 paddr;
-	u64 page_mask = (1ULL << page_shift) - 1;
-	struct sg_page_iter sg_iter;
+	u64 page_size = BIT_ULL(page_shift);
+	struct sg_phys_iter sg_phys_iter;
+
+	for (ib_umem_start_phys_iter(umem, &sg_phys_iter, page_size);
+	     ib_umem_next_phys_iter(umem, &sg_phys_iter);)
+		*pbl_tbl++ = sg_phys_iter.phyaddr;
 
-	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
-		paddr = sg_page_iter_dma_address(&sg_iter);
-		if (pbl_tbl == pbl_tbl_orig)
-			*pbl_tbl++ = paddr & ~page_mask;
-		else if ((paddr & page_mask) == 0)
-			*pbl_tbl++ = paddr;
-	}
 	return pbl_tbl - pbl_tbl_orig;
 }
 
@@ -3608,7 +3604,9 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 		goto free_umem;
 	}
 
-	page_shift = umem->page_shift;
+	page_shift = __ffs(ib_umem_find_single_pg_size(umem,
+				BNXT_RE_PAGE_SIZE_4K | BNXT_RE_PAGE_SIZE_2M,
+				virt_addr));
 
 	if (!bnxt_re_page_size_ok(page_shift)) {
 		dev_err(rdev_to_dev(rdev), "umem page size unsupported!");
@@ -3616,17 +3614,13 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 		goto fail;
 	}
 
-	if (!umem->hugetlb && length > BNXT_RE_MAX_MR_SIZE_LOW) {
+	if (page_shift == BNXT_RE_PAGE_SHIFT_4K &&
+	    length > BNXT_RE_MAX_MR_SIZE_LOW) {
 		dev_err(rdev_to_dev(rdev), "Requested MR Sz:%llu Max sup:%llu",
 			length, (u64)BNXT_RE_MAX_MR_SIZE_LOW);
 		rc = -EINVAL;
 		goto fail;
 	}
-	if (umem->hugetlb && length > BNXT_RE_PAGE_SIZE_2M) {
-		page_shift = BNXT_RE_PAGE_SHIFT_2M;
-		dev_warn(rdev_to_dev(rdev), "umem hugetlb set page_size %x",
-			 1 << page_shift);
-	}
 
 	/* Map umem buf ptrs to the PBL */
 	umem_pgs = fill_umem_pbl_tbl(umem, pbl_tbl, page_shift);
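
For reviewers unfamiliar with the new iterator, the aggregation that the simplified fill_umem_pbl_tbl() relies on can be sketched in user space as follows. This is only an illustrative sketch under simplifying assumptions: fill_pbl_sketch() is a made-up name operating on a flat, physically contiguous range, whereas the real ib_umem_start_phys_iter()/ib_umem_next_phys_iter() helpers walk the umem scatterlist and handle discontiguous chunks.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: emit one PBL entry per aligned block of 'page_size' bytes
 * covering the range [start, start + len).  The first entry is the
 * aligned-down base of the block containing 'start', matching what the
 * driver previously did by hand with page_mask. */
static size_t fill_pbl_sketch(uint64_t start, uint64_t len,
			      uint64_t page_size, uint64_t *pbl_tbl)
{
	uint64_t *p = pbl_tbl;
	uint64_t addr = start & ~(page_size - 1);	/* align down */
	uint64_t end = start + len;

	for (; addr < end; addr += page_size)
		*p++ = addr;

	return (size_t)(p - pbl_tbl);
}
```

As in the removed driver code, the first entry may point below 'start'; the HW consumes the offset into the first block separately, so only aligned block addresses go into the PBL.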