From patchwork Tue Feb 19 14:57:39 2019
X-Patchwork-Submitter: Shiraz Saleem
X-Patchwork-Id: 10819991
From: Shiraz Saleem
To: dledford@redhat.com, jgg@ziepe.ca
Cc: linux-rdma@vger.kernel.org, "Saleem, Shiraz"
Subject: [PATCH rdma-next v1 0/6] Add APIs to get contiguous memory blocks aligned to a HW supported page size
Date: Tue, 19 Feb 2019 08:57:39 -0600
Message-Id: <20190219145745.13476-1-shiraz.saleem@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: "Saleem, Shiraz"

This patch set aims to allow drivers that support multiple page sizes to
leverage the core umem APIs to obtain suitable HW DMA addresses for the
MR, aligned to a supported page size. The APIs accommodate HW that
supports a single page size or mixed page sizes in an MR. The motivation
for this work comes from the discussion in [1].

The first patch modifies the current memory registration API
ib_umem_get() to combine contiguous regions into SGEs and add them to
the scatter table.

The second patch introduces a new core API that allows drivers to find
the best supported page size to use for this MR, from a bitmap of HW
supported page sizes.

The third patch introduces new core APIs that iterate through the SG
list and return contiguous memory blocks aligned to a HW supported page
size.

The fourth and fifth patches remove the dependency of the i40iw and
bnxt_re drivers on the hugetlb flag. The new core APIs are called in
these drivers to get huge page size aligned addresses if the MR is
backed by huge pages.

The sixth patch removes the hugetlb flag from IB core.
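For illustration, the combining idea in patch #1 looks roughly like the
following standalone C sketch. This is not the actual
ib_umem_add_sg_table() implementation (which builds the umem scatterlist
while iterating a limited sized page list); the helper name and types
here are hypothetical:

#define PAGE_SIZE 4096UL

struct sge { unsigned long addr, len; };

/*
 * Fold physically adjacent PAGE_SIZE pages into a single (addr, len)
 * entry instead of emitting one entry per page.  Returns the number of
 * combined entries written to 'out' (at most 'npages').
 */
static int combine_pages(const unsigned long *page_addr, int npages,
                         struct sge *out)
{
        int i, nsge = 0;

        for (i = 0; i < npages; i++) {
                if (nsge && out[nsge - 1].addr + out[nsge - 1].len ==
                    page_addr[i]) {
                        /* Contiguous with the previous entry: extend it. */
                        out[nsge - 1].len += PAGE_SIZE;
                } else {
                        /* Gap in physical addresses: start a new entry. */
                        out[nsge].addr = page_addr[i];
                        out[nsge].len = PAGE_SIZE;
                        nsge++;
                }
        }
        return nsge;
}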
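The page size selection in patch #2 rests on a bitmask trick that can be
sketched in standalone C as below. This is not the actual
ib_umem_find_single_pg_size() code; the helper, types, and example
values are illustrative only. Any bit position where the virtual and
physical addresses disagree, or where an interior block boundary falls,
rules out all larger page sizes; per the changelog below, only the
VA/PA consistency of the first SGE's start (not its absolute alignment)
matters, and the end of the last SGE is ignored:

#include <stdio.h>

struct blk { unsigned long phys, len; };

/* 'hw_pgsz_bitmap' has bit N set if a 2^N-byte page is supported. */
static unsigned long best_pg_size(unsigned long hw_pgsz_bitmap,
                                  unsigned long va, const struct blk *b,
                                  int n)
{
        unsigned long mask = 0, low;
        int i;

        for (i = 0; i < n; i++) {
                mask |= b[i].phys ^ va;  /* VA/PA low bits must agree */
                va += b[i].len;
                if (i != n - 1)
                        mask |= va;      /* interior boundary must align */
        }
        if (mask) {
                low = mask & -mask;      /* lowest differing bit */
                hw_pgsz_bitmap &= low | (low - 1);
        }
        /* Keep only the largest remaining supported page size. */
        while (hw_pgsz_bitmap & (hw_pgsz_bitmap - 1))
                hw_pgsz_bitmap &= hw_pgsz_bitmap - 1;
        return hw_pgsz_bitmap;
}

int main(void)
{
        /* Two non-adjacent but 2MB-aligned physical blocks. */
        struct blk b[] = { { 0x200000, 0x200000 }, { 0x800000, 0x200000 } };
        /* HW supports 4K, 2M and 1G pages. */
        unsigned long bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

        /* Prints 0x200000: 2M pages fit, 1G pages do not. */
        printf("best page size: 0x%lx\n",
               best_pg_size(bitmap, 0x7f0000200000UL, b, 2));
        return 0;
}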
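A driver would then consume the aligned blocks through the new
iterators, roughly as below. The function and field names
(ib_umem_find_single_pg_size, ib_umem_start_phys_iter,
ib_umem_next_phys_iter, sg_phys_iter.phyaddr) come from this series,
but the exact signatures and the surrounding driver code are
assumptions:

#include <rdma/ib_umem.h>

/*
 * Hedged usage sketch: populate a hypothetical HW page list 'pbl' with
 * addresses aligned to the best page size the HW supports for this MR.
 */
static void fill_hw_page_list(struct ib_umem *umem, u64 *pbl,
                              unsigned long hw_pgsz_bitmap)
{
        struct sg_phys_iter sg_phys_iter;
        unsigned long pg_size;

        /* Best page size the HW supports for this MR's layout. */
        pg_size = ib_umem_find_single_pg_size(umem, hw_pgsz_bitmap);

        /* Walk the SGL in pg_size-aligned blocks and program the HW. */
        for (ib_umem_start_phys_iter(umem, &sg_phys_iter, pg_size);
             ib_umem_next_phys_iter(umem, &sg_phys_iter);)
                *pbl++ = sg_phys_iter.phyaddr;
}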
Please note that the mixed page portion of the algorithm and the
bnxt_re update in patch #5 have not been tested on hardware.

[1] https://patchwork.kernel.org/patch/10499753/

RFC-->v0:
---------
* Add to scatter table by iterating a limited sized page list.
* Updated driver call sites to use the for_each_sg_page iterator
  variant where applicable.
* Tweaked algorithm in ib_umem_find_single_pg_size and
  ib_umem_next_phys_iter to ignore alignment of the start of the first
  SGE and the end of the last SGE.
* Simplified the offset alignment checks in ib_umem_find_single_pg_size
  for the user-space virtual and physical buffer.
* Updated ib_umem_start_phys_iter to do some pre-computation for the
  non-mixed page support case.
* Updated the bnxt_re driver to use the new core APIs and removed its
  dependency on the hugetlb flag.
* Fixed a bug in the computation of sg_phys_iter->phyaddr in
  ib_umem_next_phys_iter.
* Dropped hugetlb flag usage from the RDMA subsystem.
* Rebased on top of for-next.

v0-->v1:
--------
* Removed the patches that update drivers to use the for_each_sg_page
  variant to iterate the SGEs. These were sent as a separate series
  using the for_each_sg_dma_page variant.
* Tweaked the ib_umem_add_sg_table API definition based on maintainer
  feedback.
* Cache the number of scatterlist entries in umem.
* Updated function headers for ib_umem_find_single_pg_size and
  ib_umem_next_phys_iter.
* Added a sanity check on supported_pgsz in
  ib_umem_find_single_pg_size.

Shiraz Saleem (6):
  RDMA/umem: Combine contiguous PAGE_SIZE regions in SGEs
  RDMA/umem: Add API to find best driver supported page size in an MR
  RDMA/umem: Add API to return aligned memory blocks from SGL
  RDMA/i40iw: Use umem API to retrieve aligned DMA address
  RDMA/bnxt_re: Use umem APIs to retrieve aligned DMA address
  RDMA/umem: Remove hugetlb flag

 drivers/infiniband/core/umem.c            | 281 +++++++++++++++++++++++++++---
 drivers/infiniband/core/umem_odp.c        |   3 -
 drivers/infiniband/hw/bnxt_re/ib_verbs.c  |  28 ++-
 drivers/infiniband/hw/i40iw/i40iw_user.h  |   5 +
 drivers/infiniband/hw/i40iw/i40iw_verbs.c |  49 +-----
 drivers/infiniband/hw/i40iw/i40iw_verbs.h |   3 +-
 include/rdma/ib_umem.h                    |  50 +++++-
 7 files changed, 319 insertions(+), 100 deletions(-)