From patchwork Tue Mar 5 10:15:17 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13581934
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
	Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
	Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
	Damien Le Moal, Amir Goldstein, josef@toxicpanda.com,
	"Martin K. Petersen", daniel@iogearbox.net, Dan Williams,
	jack@suse.com, Zhu Yanjun
Subject: [RFC 07/16] RDMA/umem: Preallocate and cache IOVA for UMEM ODP
Date: Tue, 5 Mar 2024 12:15:17 +0200
Message-ID: <47cc27fbaf9f4bd19edbcaac380bdd9684c5d12f.1709631413.git.leon@kernel.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To:
References:

From: Leon Romanovsky

As a preparation for providing a two-step interface to map pages,
preallocate the IOVA when the UMEM is initialized.

Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/umem_odp.c | 16 +++++++++++++++-
 include/rdma/ib_umem_odp.h         |  1 +
 include/rdma/ib_verbs.h            | 18 ++++++++++++++++++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index e9fa22d31c23..f69d1233dc82 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -50,6 +50,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 				   const struct mmu_interval_notifier_ops *ops)
 {
+	struct ib_device *dev = umem_odp->umem.ibdev;
 	int ret;
 
 	umem_odp->umem.is_odp = 1;
@@ -87,15 +88,25 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 			goto out_pfn_list;
 		}
 
+		umem_odp->iova.dev = dev->dma_device;
+		umem_odp->iova.size = end - start;
+		umem_odp->iova.dir = DMA_BIDIRECTIONAL;
+		ret = ib_dma_alloc_iova(dev, &umem_odp->iova);
+		if (ret)
+			goto out_dma_list;
+
+
 		ret = mmu_interval_notifier_insert(&umem_odp->notifier,
 						   umem_odp->umem.owning_mm,
 						   start, end - start, ops);
 		if (ret)
-			goto out_dma_list;
+			goto out_free_iova;
 	}
 
 	return 0;
 
+out_free_iova:
+	ib_dma_free_iova(dev, &umem_odp->iova);
 out_dma_list:
 	kvfree(umem_odp->dma_list);
 out_pfn_list:
@@ -262,6 +273,8 @@ EXPORT_SYMBOL(ib_umem_odp_get);
 
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 {
+	struct ib_device *dev = umem_odp->umem.ibdev;
+
 	/*
 	 * Ensure that no more pages are mapped in the umem.
 	 *
@@ -274,6 +287,7 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 			       ib_umem_end(umem_odp));
 	mutex_unlock(&umem_odp->umem_mutex);
 	mmu_interval_notifier_remove(&umem_odp->notifier);
+	ib_dma_free_iova(dev, &umem_odp->iova);
 	kvfree(umem_odp->dma_list);
 	kvfree(umem_odp->pfn_list);
 }
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index 0844c1d05ac6..bb2d7f2a5b04 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -23,6 +23,7 @@ struct ib_umem_odp {
 	 * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT.
 	 */
 	dma_addr_t		*dma_list;
+	struct dma_iova_attrs iova;
 	/*
 	 * The umem_mutex protects the page_list and dma_list fields of an ODP
 	 * umem, allowing only a single thread to map/unmap pages. The mutex
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index b7b6b58dd348..e71fa19187cc 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4077,6 +4077,24 @@ static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 	return dma_mapping_error(dev->dma_device, dma_addr);
 }
 
+static inline int ib_dma_alloc_iova(struct ib_device *dev,
+				    struct dma_iova_attrs *iova)
+{
+	if (ib_uses_virt_dma(dev))
+		return 0;
+
+	return dma_alloc_iova(iova);
+}
+
+static inline void ib_dma_free_iova(struct ib_device *dev,
+				    struct dma_iova_attrs *iova)
+{
+	if (ib_uses_virt_dma(dev))
+		return;
+
+	dma_free_iova(iova);
+}
+
 /**
  * ib_dma_map_single - Map a kernel virtual address to DMA address
  * @dev: The device for which the dma_addr is to be created