From patchwork Thu Mar 28 16:49:44 2019
From: Shiraz Saleem
To: dledford@redhat.com, jgg@ziepe.ca
Cc: linux-rdma@vger.kernel.org, Shiraz Saleem, Potnuri Bharat Teja
Subject: [PATCH v1 rdma-next 2/5] RDMA/cxgb: Use correct sizing on buffers holding page DMA addresses
Date: Thu, 28 Mar 2019 11:49:44 -0500
Message-Id: <20190328164947.13232-3-shiraz.saleem@intel.com>
In-Reply-To: <20190328164947.13232-1-shiraz.saleem@intel.com>
References: <20190328164947.13232-1-shiraz.saleem@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The PBL array that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL. This is because, if umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem. Use ib_umem_num_pages() to size this array.
Cc: Potnuri Bharat Teja
Signed-off-by: Shiraz Saleem
---
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 2 +-
 drivers/infiniband/hw/cxgb4/mem.c           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 4accf7b..3400004 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -539,7 +539,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	shift = PAGE_SHIFT;
-	n = mhp->umem->nmap;
+	n = ib_umem_num_pages(mhp->umem);
 	err = iwch_alloc_pbl(mhp, n);
 	if (err)

diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index 5baa31a..5b8208d 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -542,7 +542,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	shift = PAGE_SHIFT;
-	n = mhp->umem->nmap;
+	n = ib_umem_num_pages(mhp->umem);
 	err = alloc_pbl(mhp, n);
 	if (err)
 		goto err_umem_release;
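
The sizing mismatch can be illustrated outside the kernel. The sketch below
is a minimal userspace C example, not kernel code: the fake_sg_entry struct,
the addresses, and the page counts are hypothetical, and only the arithmetic
mirrors the problem. When physically contiguous pages are coalesced into a
single DMA-mapped SG entry, the number of mapped entries (what umem->nmap
counts) is smaller than the number of PAGE_SIZE pages whose DMA addresses
must be written into the PBL (what ib_umem_num_pages() counts), so a PBL
sized by the former overflows when filled per page.

/*
 * Illustrative userspace sketch (hypothetical layout, not kernel code).
 * Two coalesced SG entries cover 12 pages in total; sizing the PBL by
 * the entry count (2) instead of the page count (12) under-allocates.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

struct fake_sg_entry {
	unsigned long dma_addr;	/* start of a DMA-contiguous run */
	unsigned long length;	/* run length in bytes, may span many pages */
};

int main(void)
{
	struct fake_sg_entry sgl[] = {
		{ 0x100000, 8 * PAGE_SIZE },
		{ 0x200000, 4 * PAGE_SIZE },
	};
	unsigned int nmap = sizeof(sgl) / sizeof(sgl[0]);	/* 2 entries */
	unsigned int npages = 0;

	/* One PBL slot is needed per page, not per SG entry. */
	for (unsigned int i = 0; i < nmap; i++)
		npages += sgl[i].length / PAGE_SIZE;

	printf("mapped SG entries = %u, pages to write into PBL = %u\n",
	       nmap, npages);	/* prints 2 vs. 12 */
	return 0;
}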