From patchwork Thu Dec 5 13:21:14 2024
X-Patchwork-Submitter: Leon Romanovsky <leon@kernel.org>
X-Patchwork-Id: 13895261
From: Leon Romanovsky <leon@kernel.org>
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
 Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
 Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 Randy Dunlap
Subject: [PATCH v4 15/18] RDMA/umem: Separate implicit ODP initialization from explicit ODP
Date: Thu, 5 Dec 2024 15:21:14 +0200
Message-ID: <92c074fecf985073dea1ae90a228f6478975f41a.1733398913.git.leon@kernel.org>
From: Leon Romanovsky <leonro@nvidia.com>

Create separate functions for implicit ODP initialization, which differs
from explicit ODP initialization.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/umem_odp.c | 91 +++++++++++++++---------------
 1 file changed, 46 insertions(+), 45 deletions(-)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 30cd8f353476..51d518989914 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -48,41 +48,44 @@
 
 #include "uverbs.h"
 
-static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
-				   const struct mmu_interval_notifier_ops *ops)
+static void ib_init_umem_implicit_odp(struct ib_umem_odp *umem_odp)
+{
+	umem_odp->is_implicit_odp = 1;
+	umem_odp->umem.is_odp = 1;
+	mutex_init(&umem_odp->umem_mutex);
+}
+
+static int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
+			    const struct mmu_interval_notifier_ops *ops)
 {
 	struct ib_device *dev = umem_odp->umem.ibdev;
+	size_t page_size = 1UL << umem_odp->page_shift;
+	unsigned long start;
+	unsigned long end;
 	int ret;
 
 	umem_odp->umem.is_odp = 1;
 	mutex_init(&umem_odp->umem_mutex);
 
-	if (!umem_odp->is_implicit_odp) {
-		size_t page_size = 1UL << umem_odp->page_shift;
-		unsigned long start;
-		unsigned long end;
-
-		start = ALIGN_DOWN(umem_odp->umem.address, page_size);
-		if (check_add_overflow(umem_odp->umem.address,
-				       (unsigned long)umem_odp->umem.length,
-				       &end))
-			return -EOVERFLOW;
-		end = ALIGN(end, page_size);
-		if (unlikely(end < page_size))
-			return -EOVERFLOW;
-
-		ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map,
-					(end - start) >> PAGE_SHIFT,
-					1 << umem_odp->page_shift);
-		if (ret)
-			return ret;
-
-		ret = mmu_interval_notifier_insert(&umem_odp->notifier,
-						   umem_odp->umem.owning_mm,
-						   start, end - start, ops);
-		if (ret)
-			goto out_free_map;
-	}
+	start = ALIGN_DOWN(umem_odp->umem.address, page_size);
+	if (check_add_overflow(umem_odp->umem.address,
+			       (unsigned long)umem_odp->umem.length, &end))
+		return -EOVERFLOW;
+	end = ALIGN(end, page_size);
+	if (unlikely(end < page_size))
+		return -EOVERFLOW;
+
+	ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map,
+				(end - start) >> PAGE_SHIFT,
+				1 << umem_odp->page_shift);
+	if (ret)
+		return ret;
+
+	ret = mmu_interval_notifier_insert(&umem_odp->notifier,
+					   umem_odp->umem.owning_mm, start,
+					   end - start, ops);
+	if (ret)
+		goto out_free_map;
 
 	return 0;
 
@@ -106,7 +109,6 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device,
 {
 	struct ib_umem *umem;
 	struct ib_umem_odp *umem_odp;
-	int ret;
 
 	if (access & IB_ACCESS_HUGETLB)
 		return ERR_PTR(-EINVAL);
@@ -118,16 +120,10 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device,
 	umem->ibdev = device;
 	umem->writable = ib_access_writable(access);
 	umem->owning_mm = current->mm;
-	umem_odp->is_implicit_odp = 1;
 	umem_odp->page_shift = PAGE_SHIFT;
 
 	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
-	ret = ib_init_umem_odp(umem_odp, NULL);
-	if (ret) {
-		put_pid(umem_odp->tgid);
-		kfree(umem_odp);
-		return ERR_PTR(ret);
-	}
+	ib_init_umem_implicit_odp(umem_odp);
 	return umem_odp;
 }
 EXPORT_SYMBOL(ib_umem_odp_alloc_implicit);
@@ -248,7 +244,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_device *device,
 }
 EXPORT_SYMBOL(ib_umem_odp_get);
 
-void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
+static void ib_umem_odp_free(struct ib_umem_odp *umem_odp)
 {
 	struct ib_device *dev = umem_odp->umem.ibdev;
 
@@ -258,14 +254,19 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 	 * It is the driver's responsibility to ensure, before calling us,
 	 * that the hardware will not attempt to access the MR any more.
 	 */
-	if (!umem_odp->is_implicit_odp) {
-		mutex_lock(&umem_odp->umem_mutex);
-		ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp),
-					    ib_umem_end(umem_odp));
-		mutex_unlock(&umem_odp->umem_mutex);
-		mmu_interval_notifier_remove(&umem_odp->notifier);
-		hmm_dma_map_free(dev->dma_device, &umem_odp->map);
-	}
+	mutex_lock(&umem_odp->umem_mutex);
+	ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp),
+				    ib_umem_end(umem_odp));
+	mutex_unlock(&umem_odp->umem_mutex);
+	mmu_interval_notifier_remove(&umem_odp->notifier);
+	hmm_dma_map_free(dev->dma_device, &umem_odp->map);
+}
+
+void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
+{
+	if (!umem_odp->is_implicit_odp)
+		ib_umem_odp_free(umem_odp);
+
 	put_pid(umem_odp->tgid);
 	kfree(umem_odp);
 }