From patchwork Tue Dec 17 13:00:19 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911729
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
 Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
 Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
 kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v5 01/17] PCI/P2PDMA: Refactor the p2pdma mapping helpers
Date: Tue, 17 Dec 2024 15:00:19 +0200

From: Christoph Hellwig

The current scheme, with a single helper that both determines the P2P status
and maps a scatterlist segment, forces users to always go through the map_sg
helper to DMA map, which we are trying to get away from because scatterlist
mapping is very cache inefficient.

Refactor the code so that there is one helper that checks the P2P state for a
page (including the result that it is not a P2P page at all) to simplify the
callers, and a second one that performs the address translation for a
bus-mapped P2P transfer without depending on the scatterlist structure.
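
As an aside for readers of this series: below is a minimal sketch of the caller
pattern the two new helpers enable, modeled on the dma_direct_map_sg() hunk
further down in this patch. example_map_sgl() is hypothetical, and unwinding of
already-mapped entries on error is omitted for brevity.

#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_sgl(struct device *dev, struct scatterlist *sgl,
		int nents, enum dma_data_direction dir, unsigned long attrs)
{
	struct pci_p2pdma_map_state p2pdma_state = {};
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(sg))) {
		case PCI_P2PDMA_MAP_NONE:
		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
			/*
			 * Regular mapping with a CPU physical address; the
			 * in-tree callers use their internal page-mapping
			 * helpers here rather than dma_map_page_attrs().
			 */
			sg->dma_address = dma_map_page_attrs(dev, sg_page(sg),
					sg->offset, sg->length, dir, attrs);
			if (dma_mapping_error(dev, sg->dma_address))
				return -EIO;
			break;
		case PCI_P2PDMA_MAP_BUS_ADDR:
			/* Pure bus-address translation, no IOMMU involved. */
			sg->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state,
					sg_phys(sg));
			sg_dma_mark_bus_address(sg);
			break;
		default:
			return -EREMOTEIO;
		}
		sg_dma_len(sg) = sg->length;
	}
	return nents;
}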
Signed-off-by: Christoph Hellwig Reviewed-by: Logan Gunthorpe Acked-by: Bjorn Helgaas Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 47 +++++++++++++++++----------------- drivers/pci/p2pdma.c | 38 ++++----------------------- include/linux/dma-map-ops.h | 51 +++++++++++++++++++++++++++++-------- kernel/dma/direct.c | 43 +++++++++++++++---------------- 4 files changed, 91 insertions(+), 88 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 2a9fa0c8cc00..5746ffaf0061 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1382,7 +1382,6 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, struct scatterlist *s, *prev = NULL; int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs); struct pci_p2pdma_map_state p2pdma_state = {}; - enum pci_p2pdma_map_type map; dma_addr_t iova; size_t iova_len = 0; unsigned long mask = dma_get_seg_boundary(dev); @@ -1412,28 +1411,30 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, size_t s_length = s->length; size_t pad_len = (mask - iova_len + 1) & mask; - if (is_pci_p2pdma_page(sg_page(s))) { - map = pci_p2pdma_map_segment(&p2pdma_state, dev, s); - switch (map) { - case PCI_P2PDMA_MAP_BUS_ADDR: - /* - * iommu_map_sg() will skip this segment as - * it is marked as a bus address, - * __finalise_sg() will copy the dma address - * into the output segment. - */ - continue; - case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: - /* - * Mapping through host bridge should be - * mapped with regular IOVAs, thus we - * do nothing here and continue below. - */ - break; - default: - ret = -EREMOTEIO; - goto out_restore_sg; - } + switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(s))) { + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: + /* + * Mapping through host bridge should be mapped with + * regular IOVAs, thus we do nothing here and continue + * below. + */ + break; + case PCI_P2PDMA_MAP_NONE: + break; + case PCI_P2PDMA_MAP_BUS_ADDR: + /* + * iommu_map_sg() will skip this segment as it is marked + * as a bus address, __finalise_sg() will copy the dma + * address into the output segment. + */ + s->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state, + sg_phys(s)); + sg_dma_len(s) = sg->length; + sg_dma_mark_bus_address(s); + continue; + default: + ret = -EREMOTEIO; + goto out_restore_sg; } sg_dma_address(s) = s_iova_off; diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 7abd4f546d3c..82b6ed736f0f 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -995,40 +995,12 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap, return type; } -/** - * pci_p2pdma_map_segment - map an sg segment determining the mapping type - * @state: State structure that should be declared outside of the for_each_sg() - * loop and initialized to zero. - * @dev: DMA device that's doing the mapping operation - * @sg: scatterlist segment to map - * - * This is a helper to be used by non-IOMMU dma_map_sg() implementations where - * the sg segment is the same for the page_link and the dma_address. - * - * Attempt to map a single segment in an SGL with the PCI bus address. - * The segment must point to a PCI P2PDMA page and thus must be - * wrapped in a is_pci_p2pdma_page(sg_page(sg)) check. - * - * Returns the type of mapping used and maps the page if the type is - * PCI_P2PDMA_MAP_BUS_ADDR. 
- */ -enum pci_p2pdma_map_type -pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, - struct scatterlist *sg) +void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state, + struct device *dev, struct page *page) { - if (state->pgmap != sg_page(sg)->pgmap) { - state->pgmap = sg_page(sg)->pgmap; - state->map = pci_p2pdma_map_type(state->pgmap, dev); - state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset; - } - - if (state->map == PCI_P2PDMA_MAP_BUS_ADDR) { - sg->dma_address = sg_phys(sg) + state->bus_off; - sg_dma_len(sg) = sg->length; - sg_dma_mark_bus_address(sg); - } - - return state->map; + state->pgmap = page->pgmap; + state->map = pci_p2pdma_map_type(state->pgmap, dev); + state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset; } /** diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index e172522cd936..63dd480e209b 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -443,6 +443,11 @@ enum pci_p2pdma_map_type { */ PCI_P2PDMA_MAP_UNKNOWN = 0, + /* + * Not a PCI P2PDMA transfer. + */ + PCI_P2PDMA_MAP_NONE, + /* * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will * traverse the host bridge and the host bridge is not in the @@ -471,21 +476,47 @@ enum pci_p2pdma_map_type { struct pci_p2pdma_map_state { struct dev_pagemap *pgmap; - int map; + enum pci_p2pdma_map_type map; u64 bus_off; }; -#ifdef CONFIG_PCI_P2PDMA -enum pci_p2pdma_map_type -pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, - struct scatterlist *sg); -#else /* CONFIG_PCI_P2PDMA */ +/* helper for pci_p2pdma_state(), do not use directly */ +void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state, + struct device *dev, struct page *page); + +/** + * pci_p2pdma_state - check the P2P transfer state of a page + * @state: P2P state structure + * @dev: device to transfer to/from + * @page: page to map + * + * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the + * map type, and updates @state with all information needed for a P2P transfer. + */ static inline enum pci_p2pdma_map_type -pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, - struct scatterlist *sg) +pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev, + struct page *page) +{ + if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) { + if (state->pgmap != page->pgmap) + __pci_p2pdma_update_state(state, dev, page); + return state->map; + } + return PCI_P2PDMA_MAP_NONE; +} + +/** + * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer + * @state: P2P state structure + * @paddr: physical address to map + * + * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer. 
+ */ +static inline dma_addr_t +pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr) { - return PCI_P2PDMA_MAP_NOT_SUPPORTED; + WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR); + return paddr + state->bus_off; } -#endif /* CONFIG_PCI_P2PDMA */ #endif /* _LINUX_DMA_MAP_OPS_H */ diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 5b4e6d3bf7bc..e289ad27d1b5 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -462,34 +462,33 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir, unsigned long attrs) { struct pci_p2pdma_map_state p2pdma_state = {}; - enum pci_p2pdma_map_type map; struct scatterlist *sg; int i, ret; for_each_sg(sgl, sg, nents, i) { - if (is_pci_p2pdma_page(sg_page(sg))) { - map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg); - switch (map) { - case PCI_P2PDMA_MAP_BUS_ADDR: - continue; - case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: - /* - * Any P2P mapping that traverses the PCI - * host bridge must be mapped with CPU physical - * address and not PCI bus addresses. This is - * done with dma_direct_map_page() below. - */ - break; - default: - ret = -EREMOTEIO; + switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(sg))) { + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: + /* + * Any P2P mapping that traverses the PCI host bridge + * must be mapped with CPU physical address and not PCI + * bus addresses. + */ + break; + case PCI_P2PDMA_MAP_NONE: + sg->dma_address = dma_direct_map_page(dev, sg_page(sg), + sg->offset, sg->length, dir, attrs); + if (sg->dma_address == DMA_MAPPING_ERROR) { + ret = -EIO; goto out_unmap; } - } - - sg->dma_address = dma_direct_map_page(dev, sg_page(sg), - sg->offset, sg->length, dir, attrs); - if (sg->dma_address == DMA_MAPPING_ERROR) { - ret = -EIO; + break; + case PCI_P2PDMA_MAP_BUS_ADDR: + sg->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state, + sg_phys(sg)); + sg_dma_mark_bus_address(sg); + continue; + default: + ret = -EREMOTEIO; goto out_unmap; } sg_dma_len(sg) = sg->length; From patchwork Tue Dec 17 13:00:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911732 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27381E77184 for ; Tue, 17 Dec 2024 13:01:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A624D6B00C0; Tue, 17 Dec 2024 08:01:18 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A129D6B00BE; Tue, 17 Dec 2024 08:01:18 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8B2456B00C0; Tue, 17 Dec 2024 08:01:18 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 6BEEB6B00BD for ; Tue, 17 Dec 2024 08:01:18 -0500 (EST) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id E64AEC0719 for ; Tue, 17 Dec 2024 13:01:17 +0000 (UTC) X-FDA: 82904461206.11.6DE9C9B Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf14.hostedemail.com (Postfix) with ESMTP id 73DC6100030 for ; Tue, 17 Dec 2024 13:00:40 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass 
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
 Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
 Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
 kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v5 02/17] dma-mapping: move the PCI P2PDMA mapping helpers to
 pci-p2pdma.h
Date: Tue, 17 Dec 2024 15:00:20 +0200
Message-ID: <15e9becd1a061b538b44cbe02a47beeed0f53771.1734436840.git.leon@kernel.org>
From: Christoph Hellwig

To support the upcoming non-scatterlist mapping helpers, we need to go back to
having them called outside of the DMA API. Thus move them out of
dma-map-ops.h, which is only for DMA API implementations, and into
pci-p2pdma.h, which is for driver use.

Note that the core helper is still not exported, as the mapping is expected to
be done only by very high-level subsystem code, at least for now.

Signed-off-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 1 + include/linux/dma-map-ops.h | 85 ------------------------------------- include/linux/pci-p2pdma.h | 84 ++++++++++++++++++++++++++++++++++++ kernel/dma/direct.c | 1 + 4 files changed, 86 insertions(+), 85 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 5746ffaf0061..853247c42f7d 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include #include diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 63dd480e209b..f48e5fb88bd5 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -434,89 +434,4 @@ static inline void debug_dma_dump_mappings(struct device *dev) #endif /* CONFIG_DMA_API_DEBUG */ extern const struct dma_map_ops dma_dummy_ops; - -enum pci_p2pdma_map_type { - /* - * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping - * type hasn't been calculated yet. Functions that return this enum - * never return this value. - */ - PCI_P2PDMA_MAP_UNKNOWN = 0, - - /* - * Not a PCI P2PDMA transfer. - */ - PCI_P2PDMA_MAP_NONE, - - /* - * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will - * traverse the host bridge and the host bridge is not in the - * allowlist. DMA Mapping routines should return an error when - * this is returned.
- */ - PCI_P2PDMA_MAP_NOT_SUPPORTED, - - /* - * PCI_P2PDMA_BUS_ADDR: Indicates that two devices can talk to - * each other directly through a PCI switch and the transaction will - * not traverse the host bridge. Such a mapping should program - * the DMA engine with PCI bus addresses. - */ - PCI_P2PDMA_MAP_BUS_ADDR, - - /* - * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk - * to each other, but the transaction traverses a host bridge on the - * allowlist. In this case, a normal mapping either with CPU physical - * addresses (in the case of dma-direct) or IOVA addresses (in the - * case of IOMMUs) should be used to program the DMA engine. - */ - PCI_P2PDMA_MAP_THRU_HOST_BRIDGE, -}; - -struct pci_p2pdma_map_state { - struct dev_pagemap *pgmap; - enum pci_p2pdma_map_type map; - u64 bus_off; -}; - -/* helper for pci_p2pdma_state(), do not use directly */ -void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state, - struct device *dev, struct page *page); - -/** - * pci_p2pdma_state - check the P2P transfer state of a page - * @state: P2P state structure - * @dev: device to transfer to/from - * @page: page to map - * - * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the - * map type, and updates @state with all information needed for a P2P transfer. - */ -static inline enum pci_p2pdma_map_type -pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev, - struct page *page) -{ - if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) { - if (state->pgmap != page->pgmap) - __pci_p2pdma_update_state(state, dev, page); - return state->map; - } - return PCI_P2PDMA_MAP_NONE; -} - -/** - * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer - * @state: P2P state structure - * @paddr: physical address to map - * - * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer. - */ -static inline dma_addr_t -pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr) -{ - WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR); - return paddr + state->bus_off; -} - #endif /* _LINUX_DMA_MAP_OPS_H */ diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h index 2c07aa6b7665..e839f52b512b 100644 --- a/include/linux/pci-p2pdma.h +++ b/include/linux/pci-p2pdma.h @@ -104,4 +104,88 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client) return pci_p2pmem_find_many(&client, 1); } +enum pci_p2pdma_map_type { + /* + * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping + * type hasn't been calculated yet. Functions that return this enum + * never return this value. + */ + PCI_P2PDMA_MAP_UNKNOWN = 0, + + /* + * Not a PCI P2PDMA transfer. + */ + PCI_P2PDMA_MAP_NONE, + + /* + * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will + * traverse the host bridge and the host bridge is not in the + * allowlist. DMA Mapping routines should return an error when + * this is returned. + */ + PCI_P2PDMA_MAP_NOT_SUPPORTED, + + /* + * PCI_P2PDMA_BUS_ADDR: Indicates that two devices can talk to + * each other directly through a PCI switch and the transaction will + * not traverse the host bridge. Such a mapping should program + * the DMA engine with PCI bus addresses. + */ + PCI_P2PDMA_MAP_BUS_ADDR, + + /* + * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk + * to each other, but the transaction traverses a host bridge on the + * allowlist. 
In this case, a normal mapping either with CPU physical + * addresses (in the case of dma-direct) or IOVA addresses (in the + * case of IOMMUs) should be used to program the DMA engine. + */ + PCI_P2PDMA_MAP_THRU_HOST_BRIDGE, +}; + +struct pci_p2pdma_map_state { + struct dev_pagemap *pgmap; + enum pci_p2pdma_map_type map; + u64 bus_off; +}; + +/* helper for pci_p2pdma_state(), do not use directly */ +void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state, + struct device *dev, struct page *page); + +/** + * pci_p2pdma_state - check the P2P transfer state of a page + * @state: P2P state structure + * @dev: device to transfer to/from + * @page: page to map + * + * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the + * map type, and updates @state with all information needed for a P2P transfer. + */ +static inline enum pci_p2pdma_map_type +pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev, + struct page *page) +{ + if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) { + if (state->pgmap != page->pgmap) + __pci_p2pdma_update_state(state, dev, page); + return state->map; + } + return PCI_P2PDMA_MAP_NONE; +} + +/** + * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer + * @state: P2P state structure + * @paddr: physical address to map + * + * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer. + */ +static inline dma_addr_t +pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr) +{ + WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR); + return paddr + state->bus_off; +} + #endif /* _LINUX_PCI_P2P_H */ diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index e289ad27d1b5..c9b3893257d4 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "direct.h" /* From patchwork Tue Dec 17 13:00:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83A29E77187 for ; Tue, 17 Dec 2024 13:01:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CD4146B00B8; Tue, 17 Dec 2024 08:01:14 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C85666B00BA; Tue, 17 Dec 2024 08:01:14 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AD5A66B00BD; Tue, 17 Dec 2024 08:01:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 84AE66B00B8 for ; Tue, 17 Dec 2024 08:01:14 -0500 (EST) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 3DFC4434D5 for ; Tue, 17 Dec 2024 13:01:14 +0000 (UTC) X-FDA: 82904460198.03.3237AB1 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf28.hostedemail.com (Postfix) with ESMTP id 2AE24C0022 for ; Tue, 17 Dec 2024 13:00:35 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=s0fi647N; spf=pass (imf28.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) 
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
 Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
 Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 Randy Dunlap
Subject: [PATCH v5 03/17] iommu: generalize the batched sync after map
 interface
Date: Tue, 17 Dec 2024 15:00:21 +0200
From: Christoph Hellwig

For the upcoming IOVA-based DMA API we want to use this interface to batch the
IOTLB sync after mapping multiple entries from dma-iommu without having a
scatterlist. For that, move more sanity checks from the callers into
__iommu_map and make that function available outside of iommu.c as
iommu_map_nosync. Add a wrapper for the map sync as iommu_sync_map so that
callers don't need to poke into the methods directly.
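
A minimal sketch of the batched usage this enables, assuming a caller that
already tracks its own physically discontiguous chunks; example_map_batch() and
its parameters are hypothetical, the real user is the dma-iommu code later in
this series. iommu_map() itself becomes exactly this nosync-then-sync pair, as
the diff below shows.

#include <linux/iommu.h>

static int example_map_batch(struct iommu_domain *domain, unsigned long iova,
		const phys_addr_t *chunks, size_t nr_chunks, size_t chunk_size,
		int prot)
{
	size_t mapped = 0;
	int ret;

	/* Map every chunk without issuing a per-chunk IOTLB sync. */
	while (mapped < nr_chunks * chunk_size) {
		ret = iommu_map_nosync(domain, iova + mapped,
				chunks[mapped / chunk_size], chunk_size,
				prot, GFP_KERNEL);
		if (ret)
			goto err_unmap;
		mapped += chunk_size;
	}

	/* One batched sync for the whole range. */
	ret = iommu_sync_map(domain, iova, mapped);
	if (ret)
		goto err_unmap;
	return 0;

err_unmap:
	if (mapped)
		iommu_unmap(domain, iova, mapped);
	return ret;
}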
Signed-off-by: Christoph Hellwig Acked-by: Will Deacon Signed-off-by: Leon Romanovsky --- drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------ include/linux/iommu.h | 4 +++ 2 files changed, 33 insertions(+), 36 deletions(-) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 9bc0c74cca3c..ec75d14497bf 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2412,8 +2412,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova, return pgsize; } -static int __iommu_map(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot, gfp_t gfp) { const struct iommu_domain_ops *ops = domain->ops; unsigned long orig_iova = iova; @@ -2422,12 +2422,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova, phys_addr_t orig_paddr = paddr; int ret = 0; + might_sleep_if(gfpflags_allow_blocking(gfp)); + if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING))) return -EINVAL; if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL)) return -ENODEV; + /* Discourage passing strange GFP flags */ + if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 | + __GFP_HIGHMEM))) + return -EINVAL; + /* find out the minimum page size supported */ min_pagesz = 1 << __ffs(domain->pgsize_bitmap); @@ -2475,31 +2482,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova, return ret; } -int iommu_map(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size) { const struct iommu_domain_ops *ops = domain->ops; - int ret; - - might_sleep_if(gfpflags_allow_blocking(gfp)); - /* Discourage passing strange GFP flags */ - if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 | - __GFP_HIGHMEM))) - return -EINVAL; + if (!ops->iotlb_sync_map) + return 0; + return ops->iotlb_sync_map(domain, iova, size); +} - ret = __iommu_map(domain, iova, paddr, size, prot, gfp); - if (ret == 0 && ops->iotlb_sync_map) { - ret = ops->iotlb_sync_map(domain, iova, size); - if (ret) - goto out_err; - } +int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +{ + int ret; - return ret; + ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp); + if (ret) + return ret; -out_err: - /* undo mappings already done */ - iommu_unmap(domain, iova, size); + ret = iommu_sync_map(domain, iova, size); + if (ret) + iommu_unmap(domain, iova, size); return ret; } @@ -2599,26 +2602,17 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova, struct scatterlist *sg, unsigned int nents, int prot, gfp_t gfp) { - const struct iommu_domain_ops *ops = domain->ops; size_t len = 0, mapped = 0; phys_addr_t start; unsigned int i = 0; int ret; - might_sleep_if(gfpflags_allow_blocking(gfp)); - - /* Discourage passing strange GFP flags */ - if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 | - __GFP_HIGHMEM))) - return -EINVAL; - while (i <= nents) { phys_addr_t s_phys = sg_phys(sg); if (len && s_phys != start + len) { - ret = __iommu_map(domain, iova + mapped, start, + ret = iommu_map_nosync(domain, iova + mapped, start, len, prot, gfp); - if (ret) goto out_err; @@ -2641,11 +2635,10 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova, sg = sg_next(sg); } - if 
(ops->iotlb_sync_map) { - ret = ops->iotlb_sync_map(domain, iova, mapped); - if (ret) - goto out_err; - } + ret = iommu_sync_map(domain, iova, mapped); + if (ret) + goto out_err; + return mapped; out_err: diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 318d27841130..de77012f76d5 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -862,6 +862,10 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev); extern struct iommu_domain *iommu_get_dma_domain(struct device *dev); extern int iommu_map(struct iommu_domain *domain, unsigned long iova, phys_addr_t paddr, size_t size, int prot, gfp_t gfp); +int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot, gfp_t gfp); +int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, + size_t size); extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size); extern size_t iommu_unmap_fast(struct iommu_domain *domain,
From patchwork Tue Dec 17 13:00:22 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911738
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
 Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
 Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 Randy Dunlap, Jason Gunthorpe
Subject: [PATCH v5 04/17] iommu: add kernel-doc for iommu_unmap and
 iommu_unmap_fast
Date: Tue, 17 Dec 2024 15:00:22 +0200
Message-ID: <0ae577f8b99f7e03c679729434c87ea7daf78955.1734436840.git.leon@kernel.org>
From: Leon Romanovsky

Add a kernel-doc section for iommu_unmap and iommu_unmap_fast to document the
existing limitation of the underlying functions, which can't split individual
ranges.

Suggested-by: Jason Gunthorpe
Acked-by: Will Deacon
Reviewed-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/iommu.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index ec75d14497bf..c86a57abe292 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2590,6 +2590,25 @@ size_t iommu_unmap(struct iommu_domain *domain, } EXPORT_SYMBOL_GPL(iommu_unmap); +/** + * iommu_unmap_fast() - Remove mappings from a range of IOVA without IOTLB sync + * @domain: Domain to manipulate + * @iova: IO virtual address to start + * @size: Length of the range starting from @iova + * @iotlb_gather: range information for a pending IOTLB flush + * + * iommu_unmap_fast() will remove a translation created by iommu_map(). + * It can't subdivide a mapping created by iommu_map(), so it should be + * called with IOVA ranges that match what was passed to iommu_map(). The + * range can aggregate contiguous iommu_map() calls so long as no individual + * range is split. + * + * Basically iommu_unmap_fast() is the same as iommu_unmap() but for callers + * which manage the IOTLB flushing externally to perform a batched sync. + * + * Returns: Number of bytes of IOVA unmapped. iova + res will be the point + * unmapping stopped.
+ */ size_t iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova, size_t size, struct iommu_iotlb_gather *iotlb_gather)
From patchwork Tue Dec 17 13:00:23 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911733
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
 Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
 Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 Randy Dunlap
Subject: [PATCH v5 05/17] dma-mapping: Provide an interface to allow allocate
 IOVA
Date: Tue, 17 Dec 2024 15:00:23 +0200

From: Leon Romanovsky

The existing .map_page() callback provides both the allocation of an IOVA and
the linking of DMA pages. That combination works great for most of the
callers, who use it in control paths, but it is less effective in fast paths
where there may be multiple calls to map_page(). These advanced callers
already manage their data in some sort of database and can perform IOVA
allocation in advance, leaving the range linkage operation in the fast path.

Provide an interface to allocate/deallocate an IOVA; the next patch will
link/unlink DMA ranges to that specific IOVA.
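
Roughly, the intended split looks like the sketch below; example_setup(),
example_teardown() and the fallback policy are hypothetical, and attaching
ranges to the allocated IOVA is done with dma_iova_link(), which only arrives
in a later patch of this series.

#include <linux/dma-mapping.h>

static int example_setup(struct device *dev, struct dma_iova_state *state,
		phys_addr_t phys, size_t size)
{
	if (!dma_iova_try_alloc(dev, state, phys, size)) {
		/*
		 * No IOMMU-backed IOVA for this device: the caller would
		 * fall back to the dma_map_page()/dma_map_sg() path here.
		 */
		return -EOPNOTSUPP;
	}

	/*
	 * state->addr now holds the base IOVA of the transaction; individual
	 * ranges get attached to it in the fast path via dma_iova_link().
	 */
	return 0;
}

static void example_teardown(struct device *dev, struct dma_iova_state *state)
{
	/* Only valid once all linked ranges have been undone. */
	if (dma_use_iova(state))
		dma_iova_free(dev, state);
}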
In the new API a DMA mapping transaction is identified by a struct dma_iova_state, which holds some recomputed information for the transaction which does not change for each page being mapped, so add a check if IOVA can be used for the specific transaction. The API is exported from dma-iommu as it is the only implementation supported, the namespace is clearly different from iommu_* functions which are not allowed to be used. This code layout allows us to save function call per API call used in datapath as well as a lot of boilerplate code. Reviewed-by: Christoph Hellwig Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 74 +++++++++++++++++++++++++++++++++++++ include/linux/dma-mapping.h | 49 ++++++++++++++++++++++++ 2 files changed, 123 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 853247c42f7d..5906b47a300c 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1746,6 +1746,80 @@ size_t iommu_dma_max_mapping_size(struct device *dev) return SIZE_MAX; } +/** + * dma_iova_try_alloc - Try to allocate an IOVA space + * @dev: Device to allocate the IOVA space for + * @state: IOVA state + * @phys: physical address + * @size: IOVA size + * + * Check if @dev supports the IOVA-based DMA API, and if yes allocate IOVA space + * for the given base address and size. + * + * Note: @phys is only used to calculate the IOVA alignment. Callers that always + * do PAGE_SIZE aligned transfers can safely pass 0 here. + * + * Returns %true if the IOVA-based DMA API can be used and IOVA space has been + * allocated, or %false if the regular DMA API should be used. + */ +bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_off = iova_offset(iovad, phys); + dma_addr_t addr; + + memset(state, 0, sizeof(*state)); + if (!use_dma_iommu(dev)) + return false; + if (static_branch_unlikely(&iommu_deferred_attach_enabled) && + iommu_deferred_attach(dev, iommu_get_domain_for_dev(dev))) + return false; + + if (WARN_ON_ONCE(!size)) + return false; + if (WARN_ON_ONCE(size & DMA_IOVA_USE_SWIOTLB)) + return false; + + addr = iommu_dma_alloc_iova(domain, + iova_align(iovad, size + iova_off), + dma_get_mask(dev), dev); + if (!addr) + return false; + + state->addr = addr + iova_off; + state->__size = size; + return true; +} +EXPORT_SYMBOL_GPL(dma_iova_try_alloc); + +/** + * dma_iova_free - Free an IOVA space + * @dev: Device to free the IOVA space for + * @state: IOVA state + * + * Undoes a successful dma_try_iova_alloc(). + * + * Note that all dma_iova_link() calls need to be undone first. For callers + * that never call dma_iova_unlink(), dma_iova_destroy() can be used instead + * which unlinks all ranges and frees the IOVA space in a single efficient + * operation. 
+ */ +void dma_iova_free(struct device *dev, struct dma_iova_state *state) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, state->addr); + size_t size = dma_iova_size(state); + + iommu_dma_free_iova(cookie, state->addr - iova_start_pad, + iova_align(iovad, size + iova_start_pad), NULL); +} +EXPORT_SYMBOL_GPL(dma_iova_free); + void iommu_setup_dma_ops(struct device *dev) { struct iommu_domain *domain = iommu_get_domain_for_dev(dev); diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index b79925b1c433..55899d65668b 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -7,6 +7,8 @@ #include #include #include +#include +#include /** * List of possible attributes associated with a DMA mapping. The semantics @@ -72,6 +74,21 @@ #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) +struct dma_iova_state { + dma_addr_t addr; + size_t __size; +}; + +/* + * Use the high bit to mark if we used swiotlb for one or more ranges. + */ +#define DMA_IOVA_USE_SWIOTLB (1ULL << 63) + +static inline size_t dma_iova_size(struct dma_iova_state *state) +{ + return state->__size & ~DMA_IOVA_USE_SWIOTLB; +} + #ifdef CONFIG_DMA_API_DEBUG void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); void debug_dma_map_single(struct device *dev, const void *addr, @@ -277,6 +294,38 @@ static inline int dma_mmap_noncontiguous(struct device *dev, } #endif /* CONFIG_HAS_DMA */ +#ifdef CONFIG_IOMMU_DMA +/** + * dma_use_iova - check if the IOVA API is used for this state + * @state: IOVA state + * + * Return %true if the DMA transfers uses the dma_iova_*() calls or %false if + * they can't be used. 
+ */
+static inline bool dma_use_iova(struct dma_iova_state *state)
+{
+	return state->__size != 0;
+}
+
+bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+		phys_addr_t phys, size_t size);
+void dma_iova_free(struct device *dev, struct dma_iova_state *state);
+#else /* CONFIG_IOMMU_DMA */
+static inline bool dma_use_iova(struct dma_iova_state *state)
+{
+	return false;
+}
+static inline bool dma_iova_try_alloc(struct device *dev,
+		struct dma_iova_state *state, phys_addr_t phys, size_t size)
+{
+	return false;
+}
+static inline void dma_iova_free(struct device *dev,
+		struct dma_iova_state *state)
+{
+}
+#endif /* CONFIG_IOMMU_DMA */
+
 #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
 void __dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir);
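To make the intended calling convention concrete, here is a rough driver-side
sketch (illustrative only, not part of the patch) of the allocation half added
above: reserve the IOVA space once per transaction, fall back to the regular
streaming DMA API when dma_iova_try_alloc() returns false, and free the space
at teardown. The struct my_transfer and its fields are hypothetical; the
per-page linking step only arrives later in the series.

/*
 * Illustrative sketch only: a hypothetical driver-side transaction setup
 * built on the interface added in this patch.
 */
#include <linux/dma-mapping.h>

struct my_transfer {			/* hypothetical driver structure */
	struct dma_iova_state state;
	phys_addr_t phys;
	size_t len;
};

static int my_transfer_setup(struct device *dev, struct my_transfer *xfer)
{
	if (dma_iova_try_alloc(dev, &xfer->state, xfer->phys, xfer->len)) {
		/*
		 * IOVA space is reserved; the pages themselves are linked
		 * later (see the link/unlink patches in this series).
		 */
		return 0;
	}

	/* Caller falls back to the regular streaming DMA API. */
	return -EOPNOTSUPP;
}

static void my_transfer_teardown(struct device *dev, struct my_transfer *xfer)
{
	if (dma_use_iova(&xfer->state))
		dma_iova_free(dev, &xfer->state);
}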
From patchwork Tue Dec 17 13:00:24 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911734
From: Leon Romanovsky
Subject: [PATCH v5 06/17] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Date: Tue, 17 Dec 2024 15:00:24 +0200

From: Christoph Hellwig

Split the iommu logic from
iommu_dma_map_page into a separate helper. This not only keeps the code neatly separated, but will also allow for reuse in another caller. Signed-off-by: Christoph Hellwig Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++----------------- 1 file changed, 41 insertions(+), 32 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 5906b47a300c..d473ea4329ab 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1161,6 +1161,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, arch_sync_dma_for_device(sg_phys(sg), sg->length, dir); } +static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iova_domain *iovad = &domain->iova_cookie->iovad; + + if (!is_swiotlb_active(dev)) { + dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n"); + return DMA_MAPPING_ERROR; + } + + trace_swiotlb_bounced(dev, phys, size); + + phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir, + attrs); + + /* + * Untrusted devices should not see padding areas with random leftover + * kernel data, so zero the pre- and post-padding. + * swiotlb_tbl_map_single() has initialized the bounce buffer proper to + * the contents of the original memory buffer. + */ + if (phys != DMA_MAPPING_ERROR && dev_is_untrusted(dev)) { + size_t start, virt = (size_t)phys_to_virt(phys); + + /* Pre-padding */ + start = iova_align_down(iovad, virt); + memset((void *)start, 0, virt - start); + + /* Post-padding */ + start = virt + size; + memset((void *)start, 0, iova_align(iovad, start) - start); + } + + return phys; +} + dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, unsigned long offset, size_t size, enum dma_data_direction dir, unsigned long attrs) @@ -1174,42 +1211,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, dma_addr_t iova, dma_mask = dma_get_mask(dev); /* - * If both the physical buffer start address and size are - * page aligned, we don't need to use a bounce page. + * If both the physical buffer start address and size are page aligned, + * we don't need to use a bounce page. */ if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size)) { - if (!is_swiotlb_active(dev)) { - dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n"); - return DMA_MAPPING_ERROR; - } - - trace_swiotlb_bounced(dev, phys, size); - - phys = swiotlb_tbl_map_single(dev, phys, size, - iova_mask(iovad), dir, attrs); - + phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs); if (phys == DMA_MAPPING_ERROR) - return DMA_MAPPING_ERROR; - - /* - * Untrusted devices should not see padding areas with random - * leftover kernel data, so zero the pre- and post-padding. - * swiotlb_tbl_map_single() has initialized the bounce buffer - * proper to the contents of the original memory buffer. 
-		 */
-		if (dev_is_untrusted(dev)) {
-			size_t start, virt = (size_t)phys_to_virt(phys);
-
-			/* Pre-padding */
-			start = iova_align_down(iovad, virt);
-			memset((void *)start, 0, virt - start);
-
-			/* Post-padding */
-			start = virt + size;
-			memset((void *)start, 0,
-			       iova_align(iovad, start) - start);
-		}
+			return phys;
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
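To illustrate the pre-/post-padding arithmetic that iommu_dma_map_swiotlb()
performs for untrusted devices, here is a small standalone sketch in plain C,
assuming a 4 KiB IOVA granule; iova_align()/iova_align_down() are modelled
with ordinary mask arithmetic and the example addresses are made up.

/*
 * Standalone illustration of the bounce-buffer padding arithmetic used by
 * iommu_dma_map_swiotlb() above.
 */
#include <stdio.h>

#define GRANULE 4096UL

static unsigned long align_down(unsigned long x) { return x & ~(GRANULE - 1); }
static unsigned long align_up(unsigned long x)   { return (x + GRANULE - 1) & ~(GRANULE - 1); }

int main(void)
{
	unsigned long virt = 0x1000A00;	/* bounce buffer address (example) */
	unsigned long size = 0x300;	/* transfer length (example) */

	unsigned long pre_start  = align_down(virt);
	unsigned long pre_len    = virt - pre_start;
	unsigned long post_start = virt + size;
	unsigned long post_len   = align_up(post_start) - post_start;

	/* These are the two ranges the helper zeroes with memset(). */
	printf("pre-padding:  [%#lx, %#lx) len %#lx\n", pre_start, virt, pre_len);
	printf("post-padding: [%#lx, %#lx) len %#lx\n",
	       post_start, post_start + post_len, post_len);
	return 0;
}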
From patchwork Tue Dec 17 13:00:25 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911735
From: Leon Romanovsky
Subject: [PATCH v5 07/17] dma-mapping: Implement link/unlink ranges API
Date: Tue, 17 Dec 2024 15:00:25 +0200

From: Leon Romanovsky

Introduce new DMA APIs to perform DMA linkage of buffers in layers higher
than DMA. In the proposed API, the callers will perform the following
steps:

In map path:
	if (dma_can_use_iova(...))
	    dma_iova_alloc()
	    for (page in range)
	       dma_iova_link_next(...)
	    dma_iova_sync(...)
	else
	     /* Fallback to legacy map pages */
	     for (all pages)
	       dma_map_page(...)

In unmap path:
	if (dma_can_use_iova(...))
	    dma_iova_destroy()
	else
	    for (all pages)
	       dma_unmap_page(...)
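For reference, a sketch of the same flow using the function names this series
actually adds (dma_iova_try_alloc(), dma_iova_link(), dma_iova_sync(),
dma_iova_destroy()); the pseudocode above keeps earlier working names. The
surrounding driver state (pages[], nr_pages) is assumed for illustration.

/*
 * Illustrative sketch of the map/unmap flow with the names added by this
 * series; the driver-side structures here are hypothetical.
 */
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int sketch_map(struct device *dev, struct dma_iova_state *state,
		      struct page **pages, unsigned int nr_pages,
		      enum dma_data_direction dir)
{
	size_t size = (size_t)nr_pages << PAGE_SHIFT;
	unsigned int i;
	int ret;

	if (!dma_iova_try_alloc(dev, state, 0, size))
		return -EOPNOTSUPP;	/* caller falls back to dma_map_page() */

	for (i = 0; i < nr_pages; i++) {
		ret = dma_iova_link(dev, state, page_to_phys(pages[i]),
				    (size_t)i << PAGE_SHIFT, PAGE_SIZE, dir, 0);
		if (ret)
			goto err_destroy;
	}

	ret = dma_iova_sync(dev, state, 0, size);
	if (ret)
		goto err_destroy;
	return 0;

err_destroy:
	/* Unlinks whatever was linked so far and frees the IOVA space. */
	dma_iova_destroy(dev, state, (size_t)i << PAGE_SHIFT, dir, 0);
	return ret;
}

static void sketch_unmap(struct device *dev, struct dma_iova_state *state,
			 unsigned int nr_pages, enum dma_data_direction dir)
{
	dma_iova_destroy(dev, state, (size_t)nr_pages << PAGE_SHIFT, dir, 0);
}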
Reviewed-by: Christoph Hellwig Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 259 ++++++++++++++++++++++++++++++++++++ include/linux/dma-mapping.h | 32 +++++ 2 files changed, 291 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index d473ea4329ab..7972270e82b4 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1829,6 +1829,265 @@ void dma_iova_free(struct device *dev, struct dma_iova_state *state) } EXPORT_SYMBOL_GPL(dma_iova_free); +static int __dma_iova_link(struct device *dev, dma_addr_t addr, + phys_addr_t phys, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + bool coherent = dev_is_dma_coherent(dev); + + if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(phys, size, dir); + + return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size, + dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC); +} + +static int iommu_dma_iova_bounce_and_link(struct device *dev, dma_addr_t addr, + phys_addr_t phys, size_t bounce_len, + enum dma_data_direction dir, unsigned long attrs, + size_t iova_start_pad) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iova_domain *iovad = &domain->iova_cookie->iovad; + phys_addr_t bounce_phys; + int error; + + bounce_phys = iommu_dma_map_swiotlb(dev, phys, bounce_len, dir, attrs); + if (bounce_phys == DMA_MAPPING_ERROR) + return -ENOMEM; + + error = __dma_iova_link(dev, addr - iova_start_pad, + bounce_phys - iova_start_pad, + iova_align(iovad, bounce_len), dir, attrs); + if (error) + swiotlb_tbl_unmap_single(dev, bounce_phys, bounce_len, dir, + attrs); + return error; +} + +static int iommu_dma_iova_link_swiotlb(struct device *dev, + struct dma_iova_state *state, phys_addr_t phys, size_t offset, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, phys); + size_t iova_end_pad = iova_offset(iovad, phys + size); + dma_addr_t addr = state->addr + offset; + size_t mapped = 0; + int error; + + if (iova_start_pad) { + size_t bounce_len = min(size, iovad->granule - iova_start_pad); + + error = iommu_dma_iova_bounce_and_link(dev, addr, phys, + bounce_len, dir, attrs, iova_start_pad); + if (error) + return error; + state->__size |= DMA_IOVA_USE_SWIOTLB; + + mapped += bounce_len; + size -= bounce_len; + if (!size) + return 0; + } + + size -= iova_end_pad; + error = __dma_iova_link(dev, addr + mapped, phys + mapped, size, dir, + attrs); + if (error) + goto out_unmap; + mapped += size; + + if (iova_end_pad) { + error = iommu_dma_iova_bounce_and_link(dev, addr + mapped, + phys + mapped, iova_end_pad, dir, attrs, 0); + if (error) + goto out_unmap; + state->__size |= DMA_IOVA_USE_SWIOTLB; + } + + return 0; + +out_unmap: + dma_iova_unlink(dev, state, 0, mapped, dir, attrs); + return error; +} + +/** + * dma_iova_link - Link a range of IOVA space + * @dev: DMA device + * @state: IOVA state + * @phys: physical address to link + * @offset: offset into the IOVA state to map into + * @size: size of the buffer + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Link a range of IOVA space for the given IOVA state without IOTLB sync. + * This function is used to link multiple physical addresses in contigueous + * IOVA space without performing costly IOTLB sync. 
+ * + * The caller is responsible to call to dma_iova_sync() to sync IOTLB at + * the end of linkage. + */ +int dma_iova_link(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, phys); + + if (WARN_ON_ONCE(iova_start_pad && offset > 0)) + return -EIO; + + if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size)) + return iommu_dma_iova_link_swiotlb(dev, state, phys, offset, + size, dir, attrs); + + return __dma_iova_link(dev, state->addr + offset - iova_start_pad, + phys - iova_start_pad, + iova_align(iovad, size + iova_start_pad), dir, attrs); +} +EXPORT_SYMBOL_GPL(dma_iova_link); + +/** + * dma_iova_sync - Sync IOTLB + * @dev: DMA device + * @state: IOVA state + * @offset: offset into the IOVA state to sync + * @size: size of the buffer + * + * Sync IOTLB for the given IOVA state. This function should be called on + * the IOVA-contigous range created by one ore more dma_iova_link() calls + * to sync the IOTLB. + */ +int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t addr = state->addr + offset; + size_t iova_start_pad = iova_offset(iovad, addr); + + return iommu_sync_map(domain, addr - iova_start_pad, + iova_align(iovad, size + iova_start_pad)); +} +EXPORT_SYMBOL_GPL(dma_iova_sync); + +static void iommu_dma_iova_unlink_range_slow(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, addr); + dma_addr_t end = addr + size; + + do { + phys_addr_t phys; + size_t len; + + phys = iommu_iova_to_phys(domain, addr); + if (WARN_ON(!phys)) + continue; + len = min_t(size_t, + end - addr, iovad->granule - iova_start_pad); + + if (!dev_is_dma_coherent(dev) && + !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_cpu(phys, len, dir); + + swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs); + + addr += len; + iova_start_pad = 0; + } while (addr < end); +} + +static void __iommu_dma_iova_unlink(struct device *dev, + struct dma_iova_state *state, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs, + bool free_iova) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t addr = state->addr + offset; + size_t iova_start_pad = iova_offset(iovad, addr); + struct iommu_iotlb_gather iotlb_gather; + size_t unmapped; + + if ((state->__size & DMA_IOVA_USE_SWIOTLB) || + (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))) + iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs); + + iommu_iotlb_gather_init(&iotlb_gather); + iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain); + + size = iova_align(iovad, size + iova_start_pad); + addr -= iova_start_pad; + unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather); + WARN_ON(unmapped != 
size); + + if (!iotlb_gather.queued) + iommu_iotlb_sync(domain, &iotlb_gather); + if (free_iova) + iommu_dma_free_iova(cookie, addr, size, &iotlb_gather); +} + +/** + * dma_iova_unlink - Unlink a range of IOVA space + * @dev: DMA device + * @state: IOVA state + * @offset: offset into the IOVA state to unlink + * @size: size of the buffer + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Unlink a range of IOVA space for the given IOVA state. + */ +void dma_iova_unlink(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + __iommu_dma_iova_unlink(dev, state, offset, size, dir, attrs, false); +} +EXPORT_SYMBOL_GPL(dma_iova_unlink); + +/** + * dma_iova_destroy - Finish a DMA mapping transaction + * @dev: DMA device + * @state: IOVA state + * @mapped_len: number of bytes to unmap + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Unlink the IOVA range up to @mapped_len and free the entire IOVA space. The + * range of IOVA from dma_addr to @mapped_len must all be linked, and be the + * only linked IOVA in state. + */ +void dma_iova_destroy(struct device *dev, struct dma_iova_state *state, + size_t mapped_len, enum dma_data_direction dir, + unsigned long attrs) +{ + if (mapped_len) + __iommu_dma_iova_unlink(dev, state, 0, mapped_len, dir, attrs, + true); + else + /* + * We can be here if first call to dma_iova_link() failed and + * there is nothing to unlink, so let's be more clear. + */ + dma_iova_free(dev, state); +} +EXPORT_SYMBOL_GPL(dma_iova_destroy); + void iommu_setup_dma_ops(struct device *dev) { struct iommu_domain *domain = iommu_get_domain_for_dev(dev); diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 55899d65668b..f4d717e17bde 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -310,6 +310,17 @@ static inline bool dma_use_iova(struct dma_iova_state *state) bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state, phys_addr_t phys, size_t size); void dma_iova_free(struct device *dev, struct dma_iova_state *state); +void dma_iova_destroy(struct device *dev, struct dma_iova_state *state, + size_t mapped_len, enum dma_data_direction dir, + unsigned long attrs); +int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size); +int dma_iova_link(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs); +void dma_iova_unlink(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs); #else /* CONFIG_IOMMU_DMA */ static inline bool dma_use_iova(struct dma_iova_state *state) { @@ -324,6 +335,27 @@ static inline void dma_iova_free(struct device *dev, struct dma_iova_state *state) { } +static inline void dma_iova_destroy(struct device *dev, + struct dma_iova_state *state, size_t mapped_len, + enum dma_data_direction dir, unsigned long attrs) +{ +} +static inline int dma_iova_sync(struct device *dev, + struct dma_iova_state *state, size_t offset, size_t size) +{ + return -EOPNOTSUPP; +} +static inline int dma_iova_link(struct device *dev, + struct dma_iova_state *state, phys_addr_t phys, size_t offset, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + return -EOPNOTSUPP; +} +static inline void dma_iova_unlink(struct device *dev, + struct dma_iova_state *state, size_t offset, 
size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+}
 #endif /* CONFIG_IOMMU_DMA */
 
 #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)

From patchwork Tue Dec 17 13:00:26 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911736
From: Leon Romanovsky
Subject: [PATCH v5 08/17] dma-mapping: add a dma_need_unmap helper
Date: Tue, 17 Dec 2024 15:00:26 +0200

From: Christoph Hellwig

Add helper that allows a driver to skip calling dma_unmap_* if the DMA
layer can guarantee that they are no-ops.

Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-mapping.h |  5 +++++
 kernel/dma/mapping.c        | 18 ++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f4d717e17bde..bda090beb9b1 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -407,6 +407,7 @@ static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return dma_dev_need_sync(dev) ? __dma_need_sync(dev, dma_addr) : false;
 }
+bool dma_need_unmap(struct device *dev);
 #else /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */
 static inline bool dma_dev_need_sync(const struct device *dev)
 {
@@ -432,6 +433,10 @@ static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return false;
 }
+static inline bool dma_need_unmap(struct device *dev)
+{
+	return false;
+}
 #endif /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */
 
 struct page *dma_alloc_pages(struct device *dev, size_t size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index cda127027e48..3c3204ad2839 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -443,6 +443,24 @@ bool __dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL_GPL(__dma_need_sync);
 
+/**
+ * dma_need_unmap - does this device need dma_unmap_* operations
+ * @dev: device to check
+ *
+ * If this function returns %false, drivers can skip calling dma_unmap_* after
+ * finishing an I/O. This function must be called after all mappings that might
+ * need to be unmapped have been performed.
+ */
+bool dma_need_unmap(struct device *dev)
+{
+	if (!dma_map_direct(dev, get_dma_ops(dev)))
+		return true;
+	if (!dev->dma_skip_sync)
+		return true;
+	return IS_ENABLED(CONFIG_DMA_API_DEBUG);
+}
+EXPORT_SYMBOL_GPL(dma_need_unmap);
+
 static void dma_setup_need_sync(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
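As a usage illustration (a sketch, not taken from this series), a driver can
consult dma_need_unmap() once after all mappings for a request were created,
and keep the DMA addresses around only when an unmap will actually be needed;
struct my_request and its helpers are hypothetical.

/*
 * Illustrative sketch: skip storing/replaying DMA addresses when
 * dma_need_unmap() reports that unmap is a no-op.
 */
#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct my_request {			/* hypothetical driver structure */
	dma_addr_t *dma_addrs;		/* only kept when unmap is needed */
	unsigned int nr_mapped;
	bool need_unmap;
};

static void my_request_finish_mapping(struct device *dev, struct my_request *req)
{
	/* Must be called after all mappings for this request were created. */
	req->need_unmap = dma_need_unmap(dev);
	if (!req->need_unmap) {
		/* Addresses were already handed to the hardware; drop them. */
		kfree(req->dma_addrs);
		req->dma_addrs = NULL;
	}
}

static void my_request_complete(struct device *dev, struct my_request *req,
				size_t len, enum dma_data_direction dir)
{
	unsigned int i;

	if (!req->need_unmap)
		return;
	for (i = 0; i < req->nr_mapped; i++)
		dma_unmap_page(dev, req->dma_addrs[i], len, dir);
}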
From patchwork Tue Dec 17 13:00:27 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911737
From: Leon Romanovsky
Subject: [PATCH v5 09/17] docs: core-api: document the IOVA-based API
Date: Tue, 17 Dec 2024 15:00:27 +0200
Message-ID: <8f1fc0969bf7d7db11bd27d4c6b7510ef8c3ae5e.1734436840.git.leon@kernel.org>
From: Christoph Hellwig

Add an explanation of the newly added IOVA-based mapping API.

Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-api.rst | 70 ++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 8e3cce3d0a23..61d6f4fe3d88 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -530,6 +530,76 @@ routines, e.g.:::
 	....
 }
 
+Part Ie - IOVA-based DMA mappings
+---------------------------------
+
+These APIs allow a very efficient mapping when using an IOMMU. They are an
+optional path that requires extra code and are only recommended for drivers
+where DMA mapping performance, or the space usage for storing the DMA
+addresses, matters. All the considerations from the previous section apply
+here as well.
+
+::
+
+	bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+			phys_addr_t phys, size_t size);
+
+Is used to try to allocate IOVA space for a mapping operation. If it returns
+false this API can't be used for the given device and the normal streaming
+DMA mapping API should be used. The ``struct dma_iova_state`` is allocated
+by the driver and must be kept around until unmap time.
+
+::
+
+	static inline bool dma_use_iova(struct dma_iova_state *state)
+
+Can be used by the driver to check if the IOVA-based API is used after a
+call to dma_iova_try_alloc. This can be useful in the unmap path.
+
+::
+
+	int dma_iova_link(struct device *dev, struct dma_iova_state *state,
+			phys_addr_t phys, size_t offset, size_t size,
+			enum dma_data_direction dir, unsigned long attrs);
+
+Is used to link ranges to the IOVA previously allocated. The start of all
+but the first call to dma_iova_link for a given state must be aligned
+to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
+the size of all but the last range must be aligned to the DMA merge boundary
+as well.
+ +:: + + int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size); + +Must be called to sync the IOMMU page tables for IOVA-range mapped by one or +more calls to ``dma_iova_link()``. + +For drivers that use a one-shot mapping, all ranges can be unmapped and the +IOVA freed by calling: + +:: + + void dma_iova_destroy(struct device *dev, struct dma_iova_state *state, + enum dma_data_direction dir, unsigned long attrs); + +Alternatively drivers can dynamically manage the IOVA space by unmapping +and mapping individual regions. In that case + +:: + + void dma_iova_unlink(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs); + +is used to unmap a range previously mapped, and + +:: + + void dma_iova_free(struct device *dev, struct dma_iova_state *state); + +is used to free the IOVA space. All regions must have been unmapped using +``dma_iova_unlink()`` before calling ``dma_iova_free()``. Part II - Non-coherent DMA allocations -------------------------------------- From patchwork Tue Dec 17 13:00:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911743 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA646E7717F for ; Tue, 17 Dec 2024 13:02:02 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 50C6C6B00C2; Tue, 17 Dec 2024 08:02:02 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 495466B00D6; Tue, 17 Dec 2024 08:02:02 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 30E3A6B00D8; Tue, 17 Dec 2024 08:02:02 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 114F76B00D6 for ; Tue, 17 Dec 2024 08:02:02 -0500 (EST) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id C46C8A0721 for ; Tue, 17 Dec 2024 13:02:01 +0000 (UTC) X-FDA: 82904462760.26.1775E1D Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf12.hostedemail.com (Postfix) with ESMTP id BF63D40026 for ; Tue, 17 Dec 2024 13:01:46 +0000 (UTC) Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=IQLaGQxS; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf12.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1734440505; a=rsa-sha256; cv=none; b=Qa4ml5QKVShux6dJ4JQIXY83h7E+cL317+I6QzeXDKE8b5a5+uDIIPR3vq8MefztVH38gz rB9jZjPwVjkVmborUpdwAd/iSH0VUPesRTbrt+OGEhoC2ci2HGwaZXwnZyxnMJDqRvSWOz koaS0JTB6PsQjXZmog++HwekTawfQGI= ARC-Authentication-Results: i=1; imf12.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=IQLaGQxS; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf12.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1734440505; 
h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=fgW5+La5vrTTNC4OeDl2gDw5x5OA/70T1ZCsufGicbE=; b=ESgRjrEm5qH6KzfufaqNAs2WjlLSRgVxwIH14p438pgyZ9A/XlFPCUBuS87PDVPzF6z4du ZXOjgHTubQdkqfemX3xva5QrEICSCxlPwpdTzIjJoRET2gaPTWzKilS8iYMC4t9yV/4PAi O57spQcZ1LxdCpYb9vcAhqv8JQxfbys= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 9EDDFA40FE2; Tue, 17 Dec 2024 13:00:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3E6E5C4CED3; Tue, 17 Dec 2024 13:01:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734440519; bh=4U8OAdjLoa135LlF1quSDl0hGi9037MUkClj3CmMSko=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IQLaGQxSJJQ2WghREDHbtFQtI9QIm41ygcqLm2OKUJcDxB/Jc4SYcLlNryVx5HVhR kMXPLcj0G9iQfpFdRmj/36BU/xsh/BqDAkL9VNHDx09KOM2SDO2kkG3VDxWwiy4BUm OFO285LFsxdQHR/MML3tvO5QB3+vig34drtSzzDCfRCYLyNJsewQVfRNPtUBj9UT46 JslUNMuz/jabpCwftz6ja6emA6pjhKHAjsVWiPvOkKZXgU5FEm5hcOlr5e0ihWtivO KPbHri1v/etWBw/iIb/IHrzW71V8yreEm01Qo/eGvpQh5iNdVFIvIQh8by4NQxqpbY l1mOohs29P5wg== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap Subject: [PATCH v5 10/17] mm/hmm: let users to tag specific PFN with DMA mapped bit Date: Tue, 17 Dec 2024 15:00:28 +0200 Message-ID: <536d27ff1bbf2bd53f3340909ccae109ded7af83.1734436840.git.leon@kernel.org> X-Mailer: git-send-email 2.47.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: BF63D40026 X-Stat-Signature: zf8fd6yzuab5dbbpppf349ob19qeguag X-HE-Tag: 1734440506-33654 X-HE-Meta: U2FsdGVkX18YByVLbhqjYG7/LMb3S+FqNHHKaHKs6gOZoeDmlcDtEGJs0pa0zY+8rNGW/9b75J6oeUHC43yMXvKDTFN/HMAsZdlipWBMShQdFY9smdgT5S7wRBUkFjtBwEtiK7eBTmyhh4Z82ItG6E82GbhMvwQ7Ut4nWu0Q8/HOSZDuSn7RfvKCHeLcPfGe2q29IQ1az9YQ7szQm5/zjt7Kf/7ce+eJezpVBWz5FkfeZz6rLMW4EgFtk5CARLm3Qy42XF5dVZcqXBRL9r2LGuysgd3Ki+1wQejmNderLIxsD9u/33PSFHRHUTzX45y+LKbO/z4WO2iRR7ztp3QjuGdnwpshJcJFmsTWoka4SUjeLjZNmXeiGYXSgP5zjz3DnLY+FlV9fnRZYm142pRCN70msvk7m1kLuMxEJ+bYw73v+pdRIQx2CWStVySHl9aME+yyajfmOBJaP59y/Ubi6VgQigyPlEF8AHR0QImBpR2O42FB/ke+XmEsBd9w7mj5LkgodqaooQKWm+92yfn9RI9pWx14zs9EyMNcvj/ZmBFTtVqkXGzNauVQRwygBF9IxUt1FTQb5n2F4n3uUipp5gq96pymkg/GjsVauEdbj7O9gDNDlKwIYc6UvtIafvY5Zg1vqESEQbhTWiBG3FYRjQJfwTuuuIeDAVnAAHywmTqGjPRg6bKR3cdg2o2K5HtBRjvKzPQ4aP86HFtNcBE6WC7U9R1HrAnduUylEeNoW7vzMVWhaFk6SpXwSh3bisTMaEWS91+6M5dLSGPoOJJd0QBZHt9iDB3X6meuiYWESMap6pV7f1+xqvwlxTzqgPe0zG8F2nOEDTT8gGd814s6aaJlikuumfbjYRU4Bo/RUCNN49KnAjqrs04lb15sBN7Y/e6ldvCEZsewVg3KmDhuEGHkUCsAoj20R6YUZSL7QXrfqbIkeeOnRteLSzOHA3shwN+HZYS9vOqX/oNdgQa rLzv9PTG 
37RjFoge0NTk0+YkwjrMz5P8sIZBENIAb/buLDJdTIqT4LJaUWQheMjsuUrIiiHXqnTbtf2ULFpJoYI41jcjREOrVx13DTCKzxIqlrn9rIqggk/trUJEBxyjDRpmV5dxBFaru5whAaW5ZhwMtdCdAZehxKvpGavhIuZLw3q+6n7144uJJrHUU1HU1Qu0J4ltW0tgEhQIMenR+38xG1lZEnDGv4HqyG0xt3Ak9HoClXvly3Ls= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Introduce new sticky flag (HMM_PFN_DMA_MAPPED), which isn't overwritten by HMM range fault. Such flag allows users to tag specific PFNs with information if this specific PFN was already DMA mapped. Signed-off-by: Leon Romanovsky --- include/linux/hmm.h | 17 +++++++++++++++ mm/hmm.c | 51 ++++++++++++++++++++++++++++----------------- 2 files changed, 49 insertions(+), 19 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 126a36571667..a1ddbedc19c0 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,8 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation + * to mark that page is already DMA mapped * * On input: * 0 - Return the current state of the page, do not fault it. @@ -36,6 +38,13 @@ enum hmm_pfn_flags { HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1), HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2), HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), + + /* + * Sticky flags, carried from input to output, + * don't forget to update HMM_PFN_INOUT_FLAGS + */ + HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7), + HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), /* Input flags */ @@ -57,6 +66,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn) return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS); } +/* + * hmm_pfn_to_phys() - return physical address pointed to by a device entry + */ +static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn) +{ + return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS); +} + /* * hmm_pfn_to_map_order() - return the CPU mapping size order * diff --git a/mm/hmm.c b/mm/hmm.c index 7e0229ae4a5a..da5743f6d854 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -39,13 +39,20 @@ enum { HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT, }; +enum { + /* These flags are carried from input-to-output */ + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED, +}; + static int hmm_pfns_fill(unsigned long addr, unsigned long end, struct hmm_range *range, unsigned long cpu_flags) { unsigned long i = (addr - range->start) >> PAGE_SHIFT; - for (; addr < end; addr += PAGE_SIZE, i++) - range->hmm_pfns[i] = cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++) { + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + range->hmm_pfns[i] |= cpu_flags; + } return 0; } @@ -202,8 +209,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, return hmm_vma_fault(addr, end, required_fault, walk); pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + hmm_pfns[i] |= pfn | cpu_flags; + } return 0; } #else /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -230,14 +239,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, unsigned long cpu_flags; pte_t pte = 
ptep_get(ptep); uint64_t pfn_req_flags = *hmm_pfn; + uint64_t new_pfn_flags = 0; if (pte_none_mostly(pte)) { required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (required_fault) goto fault; - *hmm_pfn = 0; - return 0; + goto out; } if (!pte_present(pte)) { @@ -253,16 +262,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, cpu_flags = HMM_PFN_VALID; if (is_writable_device_private_entry(entry)) cpu_flags |= HMM_PFN_WRITE; - *hmm_pfn = swp_offset_pfn(entry) | cpu_flags; - return 0; + new_pfn_flags = swp_offset_pfn(entry) | cpu_flags; + goto out; } required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); - if (!required_fault) { - *hmm_pfn = 0; - return 0; - } + if (!required_fault) + goto out; if (!non_swap_entry(entry)) goto fault; @@ -304,11 +311,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, pte_unmap(ptep); return -EFAULT; } - *hmm_pfn = HMM_PFN_ERROR; - return 0; + new_pfn_flags = HMM_PFN_ERROR; + goto out; } - *hmm_pfn = pte_pfn(pte) | cpu_flags; + new_pfn_flags = pte_pfn(pte) | cpu_flags; +out: + *hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | new_pfn_flags; return 0; fault: @@ -448,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, } pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - for (i = 0; i < npages; ++i, ++pfn) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; i < npages; ++i, ++pfn) { + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + hmm_pfns[i] |= pfn | cpu_flags; + } goto out_unlock; } @@ -507,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask, } pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT); - for (; addr < end; addr += PAGE_SIZE, i++, pfn++) - range->hmm_pfns[i] = pfn | cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++, pfn++) { + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + range->hmm_pfns[i] |= pfn | cpu_flags; + } spin_unlock(ptl); return 0; From patchwork Tue Dec 17 13:00:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911739 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F25CBE7717F for ; Tue, 17 Dec 2024 13:01:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7F3DB6B00B9; Tue, 17 Dec 2024 08:01:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7A0D66B00CC; Tue, 17 Dec 2024 08:01:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5CD406B00CD; Tue, 17 Dec 2024 08:01:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 359396B00B9 for ; Tue, 17 Dec 2024 08:01:45 -0500 (EST) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id D83DA80724 for ; Tue, 17 Dec 2024 13:01:44 +0000 (UTC) X-FDA: 82904461542.08.C797582 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf09.hostedemail.com (Postfix) with ESMTP id 3297C140008 for ; Tue, 17 Dec 2024 13:01:22 +0000 (UTC) Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Rat7pFf+; spf=pass 
From: Leon Romanovsky Subject: [PATCH v5 11/17] mm/hmm: provide generic DMA managing logic Date: Tue, 17 Dec 2024 15:00:29 +0200 Message-ID: <697b8ef0f4e201c70de5a5e04de2a847705bbdfe.1734436840.git.leon@kernel.org>
From: Leon Romanovsky HMM callers use a PFN list to populate the range when calling hmm_range_fault(); the conversion from PFN to DMA address is then done by the callers with the help of a separate DMA list. However, this is wasteful on any modern platform, and with the right logic that DMA list can be avoided. Provide generic logic to manage these lists and give an interface to map/unmap PFNs to DMA addresses, without requiring the callers to be experts in the DMA core API.
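For orientation, a minimal caller sketch of the new interface follows. This is illustrative only and not part of the patch: the function name demo_map_range is made up, the pci-p2pdma.h include is an assumption, and mmu notifier retry handling, driver locking and most error paths are omitted for brevity; only the hmm_dma_* helpers, struct hmm_dma_map and hmm_range_fault() come from this series.

#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/mmu_notifier.h>
#include <linux/hmm.h>
#include <linux/hmm-dma.h>
#include <linux/pci-p2pdma.h>	/* assumed home of struct pci_p2pdma_map_state */

/* Fault 'npages' pages starting at 'start' and DMA map them; 'map' is owned by the caller. */
static int demo_map_range(struct device *dev, struct mm_struct *mm,
			  struct mmu_interval_notifier *notifier,
			  struct hmm_dma_map *map,
			  unsigned long start, unsigned long npages)
{
	struct pci_p2pdma_map_state p2pdma_state = {};
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = start + npages * PAGE_SIZE,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	size_t i;
	int ret;

	/* One PFN slot per page, one PAGE_SIZE-sized DMA entry per page. */
	ret = hmm_dma_map_alloc(dev, map, npages, PAGE_SIZE);
	if (ret)
		return ret;
	range.hmm_pfns = map->pfn_list;

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);	/* fills map->pfn_list */
	mmap_read_unlock(mm);
	if (ret)
		goto err_free;
	/* A real driver must recheck mmu_interval_read_retry() under its own lock here. */

	for (i = 0; i < npages; i++) {
		dma_addr_t dma = hmm_dma_map_pfn(dev, map, i, &p2pdma_state);

		if (dma_mapping_error(dev, dma)) {
			ret = -EFAULT;
			goto err_unmap;
		}
		/* program 'dma' into the device page table */
	}
	return 0;

err_unmap:
	while (i--)
		hmm_dma_unmap_pfn(dev, map, i);
err_free:
	hmm_dma_map_free(dev, map);
	return ret;
}

On teardown the caller would invoke hmm_dma_unmap_pfn() for each mapped index and then hmm_dma_map_free(); the RDMA ODP conversion later in this series follows the same pattern.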
Signed-off-by: Leon Romanovsky --- include/linux/hmm-dma.h | 33 ++++++ include/linux/hmm.h | 4 + mm/hmm.c | 215 +++++++++++++++++++++++++++++++++++++++- 3 files changed, 251 insertions(+), 1 deletion(-) create mode 100644 include/linux/hmm-dma.h diff --git a/include/linux/hmm-dma.h b/include/linux/hmm-dma.h new file mode 100644 index 000000000000..f58b9fc71999 --- /dev/null +++ b/include/linux/hmm-dma.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */ +#ifndef LINUX_HMM_DMA_H +#define LINUX_HMM_DMA_H + +#include + +struct dma_iova_state; +struct pci_p2pdma_map_state; + +/* + * struct hmm_dma_map - array of PFNs and DMA addresses + * + * @state: DMA IOVA state + * @pfns: array of PFNs + * @dma_list: array of DMA addresses + * @dma_entry_size: size of each DMA entry in the array + */ +struct hmm_dma_map { + struct dma_iova_state state; + unsigned long *pfn_list; + dma_addr_t *dma_list; + size_t dma_entry_size; +}; + +int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map, + size_t nr_entries, size_t dma_entry_size); +void hmm_dma_map_free(struct device *dev, struct hmm_dma_map *map); +dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map, + size_t idx, + struct pci_p2pdma_map_state *p2pdma_state); +bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx); +#endif /* LINUX_HMM_DMA_H */ diff --git a/include/linux/hmm.h b/include/linux/hmm.h index a1ddbedc19c0..1bc33e4c20ea 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,8 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_P2PDMA - P2P page + * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation * to mark that page is already DMA mapped * @@ -43,6 +45,8 @@ enum hmm_pfn_flags { * Sticky flags, carried from input to output, * don't forget to update HMM_PFN_INOUT_FLAGS */ + HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5), + HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6), HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7), HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), diff --git a/mm/hmm.c b/mm/hmm.c index da5743f6d854..e7dfb9f6cd9b 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -10,6 +10,7 @@ */ #include #include +#include #include #include #include @@ -23,6 +24,7 @@ #include #include #include +#include #include #include @@ -41,7 +43,8 @@ enum { enum { /* These flags are carried from input-to-output */ - HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED, + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | + HMM_PFN_P2PDMA_BUS, }; static int hmm_pfns_fill(unsigned long addr, unsigned long end, @@ -620,3 +623,213 @@ int hmm_range_fault(struct hmm_range *range) return ret; } EXPORT_SYMBOL(hmm_range_fault); + +/** + * hmm_dma_map_alloc - Allocate HMM map structure + * @dev: device to allocate structure for + * @map: HMM map to allocate + * @nr_entries: number of entries in the map + * @dma_entry_size: size of the DMA entry in the map + * + * Allocate the HMM map structure and all the lists it contains. + * Return 0 on success, -ENOMEM on failure. 
+ */ +int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map, + size_t nr_entries, size_t dma_entry_size) +{ + bool dma_need_sync = false; + bool use_iova; + + if (!(nr_entries * PAGE_SIZE / dma_entry_size)) + return -EINVAL; + + /* + * The HMM API violates our normal DMA buffer ownership rules and can't + * transfer buffer ownership. The dma_addressing_limited() check is a + * best approximation to ensure no swiotlb buffering happens. + */ +#ifdef CONFIG_DMA_NEED_SYNC + dma_need_sync = !dev->dma_skip_sync; +#endif /* CONFIG_DMA_NEED_SYNC */ + if (dma_need_sync || dma_addressing_limited(dev)) + return -EOPNOTSUPP; + + map->dma_entry_size = dma_entry_size; + map->pfn_list = + kvcalloc(nr_entries, sizeof(*map->pfn_list), GFP_KERNEL); + if (!map->pfn_list) + return -ENOMEM; + + use_iova = dma_iova_try_alloc(dev, &map->state, 0, + nr_entries * PAGE_SIZE); + if (!use_iova && dma_need_unmap(dev)) { + map->dma_list = kvcalloc(nr_entries, sizeof(*map->dma_list), + GFP_KERNEL); + if (!map->dma_list) + goto err_dma; + } + return 0; + +err_dma: + kvfree(map->pfn_list); + return -ENOMEM; +} +EXPORT_SYMBOL_GPL(hmm_dma_map_alloc); + +/** + * hmm_dma_map_free - iFree HMM map structure + * @dev: device to free structure from + * @map: HMM map containing the various lists and state + * + * Free the HMM map structure and all the lists it contains. + */ +void hmm_dma_map_free(struct device *dev, struct hmm_dma_map *map) +{ + if (dma_use_iova(&map->state)) + dma_iova_free(dev, &map->state); + kvfree(map->pfn_list); + kvfree(map->dma_list); +} +EXPORT_SYMBOL_GPL(hmm_dma_map_free); + +/** + * hmm_dma_map_pfn - Map a physical HMM page to DMA address + * @dev: Device to map the page for + * @map: HMM map + * @idx: Index into the PFN and dma address arrays + * @pci_p2pdma_map_state: PCI P2P state. + * + * dma_alloc_iova() allocates IOVA based on the size specified by their use in + * iova->size. Call this function after IOVA allocation to link whole @page + * to get the DMA address. Note that very first call to this function + * will have @offset set to 0 in the IOVA space allocated from + * dma_alloc_iova(). For subsequent calls to this function on same @iova, + * @offset needs to be advanced by the caller with the size of previous + * page that was linked + DMA address returned for the previous page that was + * linked by this function. + */ +dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map, + size_t idx, + struct pci_p2pdma_map_state *p2pdma_state) +{ + struct dma_iova_state *state = &map->state; + dma_addr_t *dma_addrs = map->dma_list; + unsigned long *pfns = map->pfn_list; + struct page *page = hmm_pfn_to_page(pfns[idx]); + phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]); + size_t offset = idx * map->dma_entry_size; + unsigned long attrs = 0; + dma_addr_t dma_addr; + int ret; + + if ((pfns[idx] & HMM_PFN_DMA_MAPPED) && + !(pfns[idx] & HMM_PFN_P2PDMA_BUS)) { + /* + * We are in this flow when there is a need to resync flags, + * for example when page was already linked in prefetch call + * with READ flag and now we need to add WRITE flag + * + * This page was already programmed to HW and we don't want/need + * to unlink and link it again just to resync flags. + */ + if (dma_use_iova(state)) + return state->addr + offset; + + /* + * Without dma_need_unmap, the dma_addrs array is NULL, thus we + * need to regenerate the address below even if there already + * was a mapping. But !dma_need_unmap implies that the + * mapping stateless, so this is fine. 
+ */ + if (dma_need_unmap(dev)) + return dma_addrs[idx]; + + /* Continue to remapping */ + } + + switch (pci_p2pdma_state(p2pdma_state, dev, page)) { + case PCI_P2PDMA_MAP_NONE: + break; + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: + attrs |= DMA_ATTR_SKIP_CPU_SYNC; + pfns[idx] |= HMM_PFN_P2PDMA; + break; + case PCI_P2PDMA_MAP_BUS_ADDR: + pfns[idx] |= HMM_PFN_P2PDMA_BUS | HMM_PFN_DMA_MAPPED; + return pci_p2pdma_bus_addr_map(p2pdma_state, paddr); + default: + return DMA_MAPPING_ERROR; + } + + if (dma_use_iova(state)) { + ret = dma_iova_link(dev, state, paddr, offset, + map->dma_entry_size, DMA_BIDIRECTIONAL, + attrs); + if (ret) + goto error; + + ret = dma_iova_sync(dev, state, offset, map->dma_entry_size); + if (ret) { + dma_iova_unlink(dev, state, offset, map->dma_entry_size, + DMA_BIDIRECTIONAL, attrs); + goto error; + } + + dma_addr = state->addr + offset; + } else { + if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs)) + goto error; + + dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, dma_addr)) + goto error; + + if (dma_need_unmap(dev)) + dma_addrs[idx] = dma_addr; + } + pfns[idx] |= HMM_PFN_DMA_MAPPED; + return dma_addr; +error: + pfns[idx] &= ~HMM_PFN_P2PDMA; + return DMA_MAPPING_ERROR; + +} +EXPORT_SYMBOL_GPL(hmm_dma_map_pfn); + +/** + * hmm_dma_unmap_pfn - Unmap a physical HMM page from DMA address + * @dev: Device to unmap the page from + * @map: HMM map + * @idx: Index of the PFN to unmap + * + * Returns true if the PFN was mapped and has been unmapped, false otherwise. + */ +bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx) +{ + struct dma_iova_state *state = &map->state; + dma_addr_t *dma_addrs = map->dma_list; + unsigned long *pfns = map->pfn_list; + unsigned long attrs = 0; + +#define HMM_PFN_VALID_DMA (HMM_PFN_VALID | HMM_PFN_DMA_MAPPED) + if ((pfns[idx] & HMM_PFN_VALID_DMA) != HMM_PFN_VALID_DMA) + return false; +#undef HMM_PFN_VALID_DMA + + if (pfns[idx] & HMM_PFN_P2PDMA_BUS) + ; /* no need to unmap bus address P2P mappings */ + else if (dma_use_iova(state)) { + if (pfns[idx] & HMM_PFN_P2PDMA) + attrs |= DMA_ATTR_SKIP_CPU_SYNC; + dma_iova_unlink(dev, state, idx * map->dma_entry_size, + map->dma_entry_size, DMA_BIDIRECTIONAL, attrs); + } else if (dma_need_unmap(dev)) + dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size, + DMA_BIDIRECTIONAL); + + pfns[idx] &= + ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS); + return true; +} +EXPORT_SYMBOL_GPL(hmm_dma_unmap_pfn); From patchwork Tue Dec 17 13:00:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911740 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0602AE7717F for ; Tue, 17 Dec 2024 13:01:50 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 928BA6B00CE; Tue, 17 Dec 2024 08:01:49 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 8D9786B00CF; Tue, 17 Dec 2024 08:01:49 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 779BD6B00D0; Tue, 17 Dec 2024 08:01:49 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 54A086B00CE for ; Tue, 17 
From: Leon Romanovsky Subject: [PATCH v5 12/17] RDMA/umem: Store ODP access mask information in PFN Date: Tue, 17 Dec 2024 15:00:30 +0200 Message-ID: <8bbfffeda71e8dfa7897670681c172efaed8f8c7.1734436840.git.leon@kernel.org> X-Mailer:
git-send-email 2.47.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: DFF4914000C X-Stat-Signature: wcogy8koo5dnkiogbkexpkfyhoyd356c X-HE-Tag: 1734440483-134674 X-HE-Meta: U2FsdGVkX1/u9HWg3h82eceHFVWlbDJe7N4XIE7imePfCbkAubHVbCoki/vBfenN7r1elDAOeYcwoki+I4nIuz7zDddQ9LGnWhE4dWjjRRvzigOv/FzEEu7xNrQkfb6OBDxmI/ClOSQFFJo2Lr/q5SuCXYRbzaOP2jIxO2G6VaWiAs0ij8GrjlWuqmgPGP9KzVzXD6rhjSB5vpAxm+7RJ+ZCdB1PC8CJYFKi/i7xVI0TBFVIvIGzd314mKV5Yske1n4SnrUJEBj/H0u9yfRUUeU0+kG1XxY+az4F87mNqPEidZdisUb3DgMu2RbZ51ypipSHxsfBYhJ78aaXdmhhtvZ+Eoz76wRavXtB3a1MsgZDg4rPnC7fBZjWMCYGEjkEdHz1Cdwg6xk0K1t62Uks9YCfpNHkKo5OHpcoVEeiiRafyZ90n5eNOnMXZeZWj6TNaydtwvqXdSPgfm8Po37gXHnBJ5kwNgcWZOa4nWd6JbBnzhUI4EwEm4jmk9kjBtYTPXZP2C30RJpD/yTuNxYtBqfR46KjNdEvVxVfOm9ufWvk6h8fO6ZEt75L+OnkjjPce7EN+Uxx5OWszGmEpjz/ZpvJP0lVrIE3Ber2lV3YvcOZLPsam/THO/ZlI4/GoLywtTPLUvVumVDQPBX/QuxTWmd9HH2y5CTRfQFY6DPXvAfTmFp6N8UgrM9M3w8LYGlg+CWnahsMM3zKc7ctlwehAF9n1C5WiSweKyC4Ztt2jFA8JlcdBrR6ANpVXz2J9eYm6Wnom/rUp9qa0dQNqoErh69xGHdDbtlpORHltQ4tkdvKcvnGHxZ+HAtdpN+5JS7yUEkxkkSmcvaOWw6tGHqqoWD6FKb5ag5O2HpeUtG7Eq4j6Z7ape8pW4M3YT8xVMAM9K2LAOardxs+a7KUax03XmGEBpsgjaQatvxyyWkitZxN/E0V+GmNMjZ7vPjrPbXAlXdk/GdbM93nJhL44eO 8/Ju3DWn PEvAELo4hCA5xe4/Xh7g1MbLc7XwlmbJ6enZyn+2kIyHFN8W+hKR6rNsXfsmKtEpsFsYb/y9I+QqJloSF6GEMGDjDYNx14BvTSjEs9RHpnfKa8xbyoWPphHFzGKR2HN9Uyy3xVPWsGAPJjkUJ2DB9i+iHz2uL5Ru4EE7C1Trm4D7bL3PI55IZ0ePhf60LHZthWFdNT5AsbIu5N1EXjiaDzvgPdJYP3R3X3PRAz3Bll80IoRU= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to remove dma_list, store access mask in PFN pointer and not in dma_addr_t. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 103 +++++++++++---------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/odp.c | 37 +++++----- include/rdma/ib_umem_odp.h | 14 +--- 4 files changed, 64 insertions(+), 91 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e9fa22d31c23..e1a5a567efb3 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -296,22 +296,11 @@ EXPORT_SYMBOL(ib_umem_odp_release); static int ib_umem_odp_map_dma_single_page( struct ib_umem_odp *umem_odp, unsigned int dma_index, - struct page *page, - u64 access_mask) + struct page *page) { struct ib_device *dev = umem_odp->umem.ibdev; dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - if (*dma_addr) { - /* - * If the page is already dma mapped it means it went through - * a non-invalidating trasition, like read-only to writable. - * Resync the flags. 
- */ - *dma_addr = (*dma_addr & ODP_DMA_ADDR_MASK) | access_mask; - return 0; - } - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, DMA_BIDIRECTIONAL); if (ib_dma_mapping_error(dev, *dma_addr)) { @@ -319,7 +308,6 @@ static int ib_umem_odp_map_dma_single_page( return -EFAULT; } umem_odp->npages++; - *dma_addr |= access_mask; return 0; } @@ -355,9 +343,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, struct hmm_range range = {}; unsigned long timeout; - if (access_mask == 0) - return -EINVAL; - if (user_virt < ib_umem_start(umem_odp) || user_virt + bcnt > ib_umem_end(umem_odp)) return -EFAULT; @@ -383,7 +368,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, if (fault) { range.default_flags = HMM_PFN_REQ_FAULT; - if (access_mask & ODP_WRITE_ALLOWED_BIT) + if (access_mask & HMM_PFN_WRITE) range.default_flags |= HMM_PFN_REQ_WRITE; } @@ -415,22 +400,17 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, for (pfn_index = 0; pfn_index < num_pfns; pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) { - if (fault) { - /* - * Since we asked for hmm_range_fault() to populate - * pages it shouldn't return an error entry on success. - */ - WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); - WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); - } else { - if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) { - WARN_ON(umem_odp->dma_list[dma_index]); - continue; - } - access_mask = ODP_READ_ALLOWED_BIT; - if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) - access_mask |= ODP_WRITE_ALLOWED_BIT; - } + /* + * Since we asked for hmm_range_fault() to populate + * pages it shouldn't return an error entry on success. + */ + WARN_ON(fault && range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); + WARN_ON(fault && !(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); + if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) + continue; + + if (range.hmm_pfns[pfn_index] & HMM_PFN_DMA_MAPPED) + continue; hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]); /* If a hugepage was detected and ODP wasn't set for, the umem @@ -445,13 +425,14 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, } ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]), - access_mask); + umem_odp, dma_index, + hmm_pfn_to_page(range.hmm_pfns[pfn_index])); if (ret < 0) { ibdev_dbg(umem_odp->umem.ibdev, "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); break; } + range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -471,7 +452,6 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma_addr; dma_addr_t dma; int idx; u64 addr; @@ -482,34 +462,37 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { + unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> + PAGE_SHIFT; + struct page *page = + hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); + idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; dma = umem_odp->dma_list[idx]; - /* The access flags guaranteed a valid DMA address in case was NULL */ - if (dma) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - 
struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - dma_addr = dma & ODP_DMA_ADDR_MASK; - ib_dma_unmap_page(dev, dma_addr, - BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (dma & ODP_WRITE_ALLOWED_BIT) { - struct page *head_page = compound_head(page); - /* - * set_page_dirty prefers being called with - * the page lock. However, MMU notifiers are - * called sometimes with and sometimes without - * the lock. We rely on the umem_mutex instead - * to prevent other mmu notifiers from - * continuing and allowing the page mapping to - * be removed. - */ - set_page_dirty(head_page); - } - umem_odp->dma_list[idx] = 0; - umem_odp->npages--; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) + goto clear; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) + goto clear; + + ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), + DMA_BIDIRECTIONAL); + if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + struct page *head_page = compound_head(page); + /* + * set_page_dirty prefers being called with + * the page lock. However, MMU notifiers are + * called sometimes with and sometimes without + * the lock. We rely on the umem_mutex instead + * to prevent other mmu notifiers from + * continuing and allowing the page mapping to + * be removed. + */ + set_page_dirty(head_page); } + umem_odp->npages--; +clear: + umem_odp->pfn_list[pfn_idx] &= ~HMM_PFN_FLAGS; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index a01b592aa716..c4946d4f0ad7 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -336,6 +336,7 @@ struct mlx5_ib_flow_db { #define MLX5_IB_UPD_XLT_PD BIT(4) #define MLX5_IB_UPD_XLT_ACCESS BIT(5) #define MLX5_IB_UPD_XLT_INDIRECT BIT(6) +#define MLX5_IB_UPD_XLT_DOWNGRADE BIT(7) /* Private QP creation flags to be passed in ib_qp_init_attr.create_flags. * diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 4b37446758fd..78887500ce15 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -158,22 +159,12 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static u64 umem_dma_to_mtt(dma_addr_t umem_dma) -{ - u64 mtt_entry = umem_dma & ODP_DMA_ADDR_MASK; - - if (umem_dma & ODP_READ_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_READ; - if (umem_dma & ODP_WRITE_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_WRITE; - - return mtt_entry; -} - static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); + bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + unsigned long pfn; dma_addr_t pa; size_t i; @@ -181,8 +172,17 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, return; for (i = 0; i < nentries; i++) { + pfn = odp->pfn_list[idx + i]; + if (!(pfn & HMM_PFN_VALID)) + /* ODP initialization */ + continue; + pa = odp->dma_list[idx + i]; - pas[i] = cpu_to_be64(umem_dma_to_mtt(pa)); + pa |= MLX5_IB_MTT_READ; + if ((pfn & HMM_PFN_WRITE) && !downgrade) + pa |= MLX5_IB_MTT_WRITE; + + pas[i] = cpu_to_be64(pa); } } @@ -286,8 +286,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. 
*/ - if (umem_odp->dma_list[idx] & - (ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) { + if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; @@ -668,7 +667,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, { int page_shift, ret, np; bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE; - u64 access_mask; + u64 access_mask = 0; u64 start_idx; bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT); u32 xlt_flags = MLX5_IB_UPD_XLT_ATOMIC; @@ -676,12 +675,14 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, if (flags & MLX5_PF_FLAGS_ENABLE) xlt_flags |= MLX5_IB_UPD_XLT_ENABLE; + if (flags & MLX5_PF_FLAGS_DOWNGRADE) + xlt_flags |= MLX5_IB_UPD_XLT_DOWNGRADE; + page_shift = odp->page_shift; start_idx = (user_va - ib_umem_start(odp)) >> page_shift; - access_mask = ODP_READ_ALLOWED_BIT; if (odp->umem.writable && !downgrade) - access_mask |= ODP_WRITE_ALLOWED_BIT; + access_mask |= HMM_PFN_WRITE; np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault); if (np < 0) diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 0844c1d05ac6..a345c26a745d 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -8,6 +8,7 @@ #include #include +#include struct ib_umem_odp { struct ib_umem umem; @@ -67,19 +68,6 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) umem_odp->page_shift; } -/* - * The lower 2 bits of the DMA address signal the R/W permissions for - * the entry. To upgrade the permissions, provide the appropriate - * bitmask to the map_dma_pages function. - * - * Be aware that upgrading a mapped address might result in change of - * the DMA address for the page. - */ -#define ODP_READ_ALLOWED_BIT (1<<0ULL) -#define ODP_WRITE_ALLOWED_BIT (1<<1ULL) - -#define ODP_DMA_ADDR_MASK (~(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) - #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_umem_odp * From patchwork Tue Dec 17 13:00:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911741 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 72E2EE77184 for ; Tue, 17 Dec 2024 13:01:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0CB696B00BA; Tue, 17 Dec 2024 08:01:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 028FC6B00D2; Tue, 17 Dec 2024 08:01:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DBBEB6B00D3; Tue, 17 Dec 2024 08:01:53 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id B539E6B00D1 for ; Tue, 17 Dec 2024 08:01:53 -0500 (EST) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 64524C0708 for ; Tue, 17 Dec 2024 13:01:53 +0000 (UTC) X-FDA: 82904462088.25.E706AA2 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf18.hostedemail.com (Postfix) with ESMTP id AA9561C000C for ; Tue, 17 Dec 2024 13:01:37 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 
From: Leon Romanovsky Subject: [PATCH v5 13/17] RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage Date: Tue, 17 Dec 2024 15:00:31 +0200 Message-ID: <18c07d3de97814dae5b2dadf18c3a678655c70c8.1734436840.git.leon@kernel.org> X-HE-Meta:
U2FsdGVkX18FNWpkVAtIbnCHDksQ4ONsI0FxupBh0V2+NOw+0zSN2q3UbsaHuIshlaO2AHaF9jv5pP8Nl1qC/QYj7s5i4nduBFdUK0u4ectoBy4mMPRVq2n0+40cH91643vvDrHhBCApRqMv1P855AJR+RnCXBlvO9ahScnalAkmpLNjoxl4teoyTVmsVg8hUsFSUo9Kqd7cFU/Y+0DdLN7JsR86hK2aETKWamiPENAmLTIrn9Xa65I4ZCeVx+0s9swcubtipdAHN6fRbOcj2XVQEN8Zfa7Sj04i9dII+vYKFJ5VUZhKpCJ0xK8N+0vQwtIqp8hOn1OXHT7uH2A6xbwX7R+bsmmTW+2I2K9NLDMmeScejDZynJieErorNpinzLv2xv5edYijVlmUicAHe1s77C31eMskom0o4QDxY81oS7ATB16lJKaL0qLXetOJtGrmhUMSNgJWAKXMwVOXKZO2iGzNbLg6DsplpU1yVr11SNICpNWSihgl4dAAC5Mixak92ivXygMD3jlI7lk7pb/ERbCmNpUgqw8Fq6EGyIrj3jHRi7PQCbGWaCqCisc+JiNtrUekc0FE4VFZMwklwedMFE2pogVK+oVhOIhSwEJVkjsDMOiWLxFV4rP77TSZLiacVvrk7lmTX1Qplwm5o+CdQ1D2i06MTeBezeOnU4YPj+iSZ9ukY1BtbpnyHS5N3v5/us+YJQylxQ+7xwlKCpQT4cXXqFuhM8pGidho7Z1M9dS/x4QlnZRtnb6VFLODBn3qlSxsKVXHj2C1u5ojf5DdRVa2WKQo5xuos9bVcRwRQXxFg7vcEHtXTo/LzfoHguX/kAZiAs5MeKOgAnEFdWTzLa3mmEUnorY6wz2pWDxaIpFUC/57MNwcP/XhGlud57TtgrtXCSJuDQoZhcz7nxmNqbSdhrvARiXLIjFTE7RmSzvEf+soKwssk6nGyoDEZAf/H8qfKb7QhbdiRb8 wwLGXid5 pq/q1h5l1ZnEyMFeI1a+gC2pWNtpoCWcoYC2ti25BlHrjkL2FgjDqGHxliRAW06T7bHrin3g+y/NH+jt9jqnYZN9xO2rxiYrMbl1x7tCJ2YEjGyKxWsefi92P1pT3keiDDLYyMdHy1i0r5Hf7L0/Ro2/g3nBHFYVj3INSk4hbu0E6yx7GlKavGTVjBacbCzENN87e3aZZL9rtOPHniT6sCr+U8vSI6OiEJGM7fU/T5V1l3LM= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Reuse newly added DMA API to cache IOVA and only link/unlink pages in fast path for UMEM ODP flow. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 104 ++++++--------------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 11 +-- drivers/infiniband/hw/mlx5/odp.c | 40 +++++++---- drivers/infiniband/hw/mlx5/umr.c | 12 +++- include/rdma/ib_umem_odp.h | 13 +--- 5 files changed, 69 insertions(+), 111 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e1a5a567efb3..30cd8f353476 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include @@ -50,6 +51,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { + struct ib_device *dev = umem_odp->umem.ibdev; int ret; umem_odp->umem.is_odp = 1; @@ -59,7 +61,6 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, size_t page_size = 1UL << umem_odp->page_shift; unsigned long start; unsigned long end; - size_t ndmas, npfns; start = ALIGN_DOWN(umem_odp->umem.address, page_size); if (check_add_overflow(umem_odp->umem.address, @@ -70,36 +71,23 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, if (unlikely(end < page_size)) return -EOVERFLOW; - ndmas = (end - start) >> umem_odp->page_shift; - if (!ndmas) - return -EINVAL; - - npfns = (end - start) >> PAGE_SHIFT; - umem_odp->pfn_list = kvcalloc( - npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); - if (!umem_odp->pfn_list) - return -ENOMEM; - - umem_odp->dma_list = kvcalloc( - ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL); - if (!umem_odp->dma_list) { - ret = -ENOMEM; - goto out_pfn_list; - } + ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, + (end - start) >> PAGE_SHIFT, + 1 << umem_odp->page_shift); + if (ret) + return ret; ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, start, end - start, ops); if (ret) - goto out_dma_list; + goto out_free_map; } return 0; -out_dma_list: - 
kvfree(umem_odp->dma_list); -out_pfn_list: - kvfree(umem_odp->pfn_list); +out_free_map: + hmm_dma_map_free(dev->dma_device, &umem_odp->map); return ret; } @@ -262,6 +250,8 @@ EXPORT_SYMBOL(ib_umem_odp_get); void ib_umem_odp_release(struct ib_umem_odp *umem_odp) { + struct ib_device *dev = umem_odp->umem.ibdev; + /* * Ensure that no more pages are mapped in the umem. * @@ -274,48 +264,17 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) ib_umem_end(umem_odp)); mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); - kvfree(umem_odp->dma_list); - kvfree(umem_odp->pfn_list); + hmm_dma_map_free(dev->dma_device, &umem_odp->map); } put_pid(umem_odp->tgid); kfree(umem_odp); } EXPORT_SYMBOL(ib_umem_odp_release); -/* - * Map for DMA and insert a single page into the on-demand paging page tables. - * - * @umem: the umem to insert the page to. - * @dma_index: index in the umem to add the dma to. - * @page: the page struct to map and add. - * @access_mask: access permissions needed for this page. - * - * The function returns -EFAULT if the DMA mapping operation fails. - * - */ -static int ib_umem_odp_map_dma_single_page( - struct ib_umem_odp *umem_odp, - unsigned int dma_index, - struct page *page) -{ - struct ib_device *dev = umem_odp->umem.ibdev; - dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, - DMA_BIDIRECTIONAL); - if (ib_dma_mapping_error(dev, *dma_addr)) { - *dma_addr = 0; - return -EFAULT; - } - umem_odp->npages++; - return 0; -} - /** * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it. * * Maps the range passed in the argument to DMA addresses. - * The DMA addresses of the mapped pages is updated in umem_odp->dma_list. * Upon success the ODP MR will be locked to let caller complete its device * page table update. 
* @@ -372,7 +331,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, range.default_flags |= HMM_PFN_REQ_WRITE; } - range.hmm_pfns = &(umem_odp->pfn_list[pfn_start_idx]); + range.hmm_pfns = &(umem_odp->map.pfn_list[pfn_start_idx]); timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); retry: @@ -423,16 +382,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, __func__, hmm_order, page_shift); break; } - - ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, - hmm_pfn_to_page(range.hmm_pfns[pfn_index])); - if (ret < 0) { - ibdev_dbg(umem_odp->umem.ibdev, - "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); - break; - } - range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -452,32 +401,23 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma; - int idx; - u64 addr; struct ib_device *dev = umem_odp->umem.ibdev; + u64 addr; lockdep_assert_held(&umem_odp->umem_mutex); virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> - PAGE_SHIFT; - struct page *page = - hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; - dma = umem_odp->dma_list[idx]; + u64 offset = addr - ib_umem_start(umem_odp); + size_t idx = offset >> umem_odp->page_shift; + unsigned long pfn = umem_odp->map.pfn_list[idx]; - if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) - goto clear; - if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) + if (!hmm_dma_unmap_pfn(dev->dma_device, &umem_odp->map, idx)) goto clear; - ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + if (pfn & HMM_PFN_WRITE) { + struct page *page = hmm_pfn_to_page(pfn); struct page *head_page = compound_head(page); /* * set_page_dirty prefers being called with @@ -492,7 +432,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, } umem_odp->npages--; clear: - umem_odp->pfn_list[pfn_idx] &= ~HMM_PFN_FLAGS; + umem_odp->map.pfn_list[idx] &= ~HMM_PFN_FLAGS; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index c4946d4f0ad7..6fa171e74754 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1445,8 +1445,8 @@ void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev); int __init mlx5_ib_odp_init(void); void mlx5_ib_odp_cleanup(void); int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev); -void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags); +int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags); int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, @@ -1467,8 +1467,11 @@ static inline int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev) { return 0; } -static inline void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) {} +static inline int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags) +{ + return -EOPNOTSUPP; +} 
static inline int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 78887500ce15..fbb2a5670c32 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -35,6 +35,8 @@ #include #include #include +#include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -159,40 +161,50 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) +static int populate_mtt(__be64 *pas, size_t start, size_t nentries, + struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; - unsigned long pfn; - dma_addr_t pa; + struct pci_p2pdma_map_state p2pdma_state = {}; + struct ib_device *dev = odp->umem.ibdev; size_t i; if (flags & MLX5_IB_UPD_XLT_ZAP) - return; + return 0; for (i = 0; i < nentries; i++) { - pfn = odp->pfn_list[idx + i]; + unsigned long pfn = odp->map.pfn_list[start + i]; + dma_addr_t dma_addr; + + pfn = odp->map.pfn_list[start + i]; if (!(pfn & HMM_PFN_VALID)) /* ODP initialization */ continue; - pa = odp->dma_list[idx + i]; - pa |= MLX5_IB_MTT_READ; + dma_addr = hmm_dma_map_pfn(dev->dma_device, &odp->map, + start + i, &p2pdma_state); + if (ib_dma_mapping_error(dev, dma_addr)) + return -EFAULT; + + dma_addr |= MLX5_IB_MTT_READ; if ((pfn & HMM_PFN_WRITE) && !downgrade) - pa |= MLX5_IB_MTT_WRITE; + dma_addr |= MLX5_IB_MTT_WRITE; - pas[i] = cpu_to_be64(pa); + pas[i] = cpu_to_be64(dma_addr); + odp->npages++; } + return 0; } -void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) +int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags) { if (flags & MLX5_IB_UPD_XLT_INDIRECT) { populate_klm(xlt, idx, nentries, mr, flags); + return 0; } else { - populate_mtt(xlt, idx, nentries, mr, flags); + return populate_mtt(xlt, idx, nentries, mr, flags); } } @@ -286,7 +298,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. */ - if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { + if (umem_odp->map.pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c index 887fd6fa3ba9..d7fa94ab23cf 100644 --- a/drivers/infiniband/hw/mlx5/umr.c +++ b/drivers/infiniband/hw/mlx5/umr.c @@ -811,7 +811,17 @@ int mlx5r_umr_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages, size_to_map = npages * desc_size; dma_sync_single_for_cpu(ddev, sg.addr, sg.length, DMA_TO_DEVICE); - mlx5_odp_populate_xlt(xlt, idx, npages, mr, flags); + /* + * npages is the maximum number of pages to map, but we + * can't guarantee that all pages are actually mapped. + * + * For example, if page is p2p of type which is not supported + * for mapping, the number of pages mapped will be less than + * requested. 
+ */ + err = mlx5_odp_populate_xlt(xlt, idx, npages, mr, flags); + if (err) + return err; dma_sync_single_for_device(ddev, sg.addr, sg.length, DMA_TO_DEVICE); sg.length = ALIGN(size_to_map, MLX5_UMR_FLEX_ALIGNMENT); diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index a345c26a745d..2a24bf791c10 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -8,24 +8,17 @@ #include #include -#include +#include struct ib_umem_odp { struct ib_umem umem; struct mmu_interval_notifier notifier; struct pid *tgid; - /* An array of the pfns included in the on-demand paging umem. */ - unsigned long *pfn_list; + struct hmm_dma_map map; /* - * An array with DMA addresses mapped for pfns in pfn_list. - * The lower two bits designate access permissions. - * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. - */ - dma_addr_t *dma_list; - /* - * The umem_mutex protects the page_list and dma_list fields of an ODP + * The umem_mutex protects the page_list field of an ODP * umem, allowing only a single thread to map/unmap pages. The mutex * also protects access to the mmu notifier counters. */ From patchwork Tue Dec 17 13:00:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911742 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3339BE77184 for ; Tue, 17 Dec 2024 13:01:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A65DF6B00D2; Tue, 17 Dec 2024 08:01:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A132A6B00D3; Tue, 17 Dec 2024 08:01:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7C8D76B00D4; Tue, 17 Dec 2024 08:01:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 57E446B00D2 for ; Tue, 17 Dec 2024 08:01:57 -0500 (EST) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 15CE18071D for ; Tue, 17 Dec 2024 13:01:57 +0000 (UTC) X-FDA: 82904462508.23.B2F6FBA Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf24.hostedemail.com (Postfix) with ESMTP id 6BF0718002C for ; Tue, 17 Dec 2024 13:01:51 +0000 (UTC) Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=nwtijajh; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf24.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1734440501; a=rsa-sha256; cv=none; b=3gZq3c5Lv0W1X4Mp+40/5wy7rJVMyNMuJtQ5oW6caZ7ehWMyYpzNm7y6hqudk4d1AVeIfn Mi3/hg7AKCnm97Ojhgd1dOGgN15Fzsa8oABJn0IbzdW8PLlHPtuPS5I6RoqiP1Jy8fKFWM Av0xupOWfnm4YJ5sHFx4WB01l4DVvdc= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=nwtijajh; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf24.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; 
From: Leon Romanovsky Subject: [PATCH v5 14/17] RDMA/umem: Separate implicit ODP initialization from explicit ODP Date: Tue, 17 Dec 2024 15:00:32 +0200 Message-ID: <7e1538594c048f3d3b41e53dc3875f399909dee8.1734436840.git.leon@kernel.org> X-HE-Meta:
hK4o/PtvogPiXP9jidkUdqe8CJuc0nubYrF6rvlFAJrtKN/RAQwDTtyftQaMHituERs4fPW5ImB/IG9p0lHajUrqwuF0ppGgMnnj1R9JCG11FvvOAH3iKHLUVZy+dGp4zmhuuipgY64qu/ZB+jeWviUAZpNasORM50rXLJ8DBwVUwJCnX97abVLyb5kdPihVk4JXFeioI+bVIlbw309C8kcIVLLbam9OoVU46w+pUE/vQWR4= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Create separate functions for the implicit ODP initialization which is different from the explicit ODP initialization. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 91 +++++++++++++++--------------- 1 file changed, 46 insertions(+), 45 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 30cd8f353476..51d518989914 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -48,41 +48,44 @@ #include "uverbs.h" -static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, - const struct mmu_interval_notifier_ops *ops) +static void ib_init_umem_implicit_odp(struct ib_umem_odp *umem_odp) +{ + umem_odp->is_implicit_odp = 1; + umem_odp->umem.is_odp = 1; + mutex_init(&umem_odp->umem_mutex); +} + +static int ib_init_umem_odp(struct ib_umem_odp *umem_odp, + const struct mmu_interval_notifier_ops *ops) { struct ib_device *dev = umem_odp->umem.ibdev; + size_t page_size = 1UL << umem_odp->page_shift; + unsigned long start; + unsigned long end; int ret; umem_odp->umem.is_odp = 1; mutex_init(&umem_odp->umem_mutex); - if (!umem_odp->is_implicit_odp) { - size_t page_size = 1UL << umem_odp->page_shift; - unsigned long start; - unsigned long end; - - start = ALIGN_DOWN(umem_odp->umem.address, page_size); - if (check_add_overflow(umem_odp->umem.address, - (unsigned long)umem_odp->umem.length, - &end)) - return -EOVERFLOW; - end = ALIGN(end, page_size); - if (unlikely(end < page_size)) - return -EOVERFLOW; - - ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, - (end - start) >> PAGE_SHIFT, - 1 << umem_odp->page_shift); - if (ret) - return ret; - - ret = mmu_interval_notifier_insert(&umem_odp->notifier, - umem_odp->umem.owning_mm, - start, end - start, ops); - if (ret) - goto out_free_map; - } + start = ALIGN_DOWN(umem_odp->umem.address, page_size); + if (check_add_overflow(umem_odp->umem.address, + (unsigned long)umem_odp->umem.length, &end)) + return -EOVERFLOW; + end = ALIGN(end, page_size); + if (unlikely(end < page_size)) + return -EOVERFLOW; + + ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, + (end - start) >> PAGE_SHIFT, + 1 << umem_odp->page_shift); + if (ret) + return ret; + + ret = mmu_interval_notifier_insert(&umem_odp->notifier, + umem_odp->umem.owning_mm, start, + end - start, ops); + if (ret) + goto out_free_map; return 0; @@ -106,7 +109,6 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, { struct ib_umem *umem; struct ib_umem_odp *umem_odp; - int ret; if (access & IB_ACCESS_HUGETLB) return ERR_PTR(-EINVAL); @@ -118,16 +120,10 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, umem->ibdev = device; umem->writable = ib_access_writable(access); umem->owning_mm = current->mm; - umem_odp->is_implicit_odp = 1; umem_odp->page_shift = PAGE_SHIFT; umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID); - ret = ib_init_umem_odp(umem_odp, NULL); - if (ret) { - put_pid(umem_odp->tgid); - kfree(umem_odp); - return ERR_PTR(ret); - } + 
ib_init_umem_implicit_odp(umem_odp); return umem_odp; } EXPORT_SYMBOL(ib_umem_odp_alloc_implicit); @@ -248,7 +244,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_device *device, } EXPORT_SYMBOL(ib_umem_odp_get); -void ib_umem_odp_release(struct ib_umem_odp *umem_odp) +static void ib_umem_odp_free(struct ib_umem_odp *umem_odp) { struct ib_device *dev = umem_odp->umem.ibdev; @@ -258,14 +254,19 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) * It is the driver's responsibility to ensure, before calling us, * that the hardware will not attempt to access the MR any more. */ - if (!umem_odp->is_implicit_odp) { - mutex_lock(&umem_odp->umem_mutex); - ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp), - ib_umem_end(umem_odp)); - mutex_unlock(&umem_odp->umem_mutex); - mmu_interval_notifier_remove(&umem_odp->notifier); - hmm_dma_map_free(dev->dma_device, &umem_odp->map); - } + mutex_lock(&umem_odp->umem_mutex); + ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp), + ib_umem_end(umem_odp)); + mutex_unlock(&umem_odp->umem_mutex); + mmu_interval_notifier_remove(&umem_odp->notifier); + hmm_dma_map_free(dev->dma_device, &umem_odp->map); +} + +void ib_umem_odp_release(struct ib_umem_odp *umem_odp) +{ + if (!umem_odp->is_implicit_odp) + ib_umem_odp_free(umem_odp); + put_pid(umem_odp->tgid); kfree(umem_odp); } From patchwork Tue Dec 17 13:00:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13911746 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32936E7717F for ; Tue, 17 Dec 2024 13:02:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B4E486B00DB; Tue, 17 Dec 2024 08:02:14 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AFCBA6B00DC; Tue, 17 Dec 2024 08:02:14 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9757F6B00DD; Tue, 17 Dec 2024 08:02:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 77E9F6B00DB for ; Tue, 17 Dec 2024 08:02:14 -0500 (EST) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 24DF1A06B2 for ; Tue, 17 Dec 2024 13:02:14 +0000 (UTC) X-FDA: 82904462004.20.5683FA7 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf13.hostedemail.com (Postfix) with ESMTP id B56D32001C for ; Tue, 17 Dec 2024 13:01:42 +0000 (UTC) Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Uxp1t4nf; spf=pass (imf13.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=quarantine) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1734440503; a=rsa-sha256; cv=none; b=EdVGezWbGkGJkSGZ1QJEHuTpndv+BNKFfkK97VIRukLcNWvdDTWgNnIpIapuHvWmIybKRW iyHZUnUGWLXzosNSkjYPRmWjEZ5FI1kkFrBQBhQT+WEPB3ZSZjJzOaU0q2tNsPuWz0U5y0 Wd39O6IiFn5lURwj3LmyaqPLaTg3w0o= ARC-Authentication-Results: i=1; imf13.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Uxp1t4nf; spf=pass (imf13.hostedemail.com: domain of 
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon, Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v5 15/17] vfio/mlx5: Explicitly use number of pages instead of allocated length
Date: Tue, 17 Dec 2024 15:00:33 +0200
Message-ID: <5a5a71c1db2fb0bf300463b77e8de30de30ec0df.1734436840.git.leon@kernel.org>
MIME-Version: 1.0
From: Leon Romanovsky

allocated_length is always a multiple of the page size (page size times the number of pages), so change these functions to take the number of pages directly. This opens the way to combining the receive and send paths later and improves code readability.

Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/mlx5/cmd.c | 32 ++++++++++-----------
 drivers/vfio/pci/mlx5/cmd.h | 10 +++----
 drivers/vfio/pci/mlx5/main.c | 56 +++++++++++++++++++++++-------------
 3 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 7527e277c898..88e76afba606 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -318,8 +318,7 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, struct mlx5_vhca_recv_buf *recv_buf, u32 *mkey) { - size_t npages = buf ? DIV_ROUND_UP(buf->allocated_length, PAGE_SIZE) : - recv_buf->npages; + size_t npages = buf ?
buf->npages : recv_buf->npages; int err = 0, inlen; __be64 *mtt; void *mkc; @@ -375,7 +374,7 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->allocated_length) + if (buf->dmaed || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); @@ -445,7 +444,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, if (ret) goto err_append; - buf->allocated_length += filled * PAGE_SIZE; + buf->npages += filled; /* clean input for another bulk allocation */ memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, @@ -464,8 +463,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, } struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf; @@ -477,9 +475,8 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, buf->dma_dir = dma_dir; buf->migf = migf; - if (length) { - ret = mlx5vf_add_migration_pages(buf, - DIV_ROUND_UP_ULL(length, PAGE_SIZE)); + if (npages) { + ret = mlx5vf_add_migration_pages(buf, npages); if (ret) goto end; @@ -505,8 +502,8 @@ void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf) } struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir) +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf, *temp_buf; struct list_head free_list; @@ -521,7 +518,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { if (buf->dma_dir == dma_dir) { list_del_init(&buf->buf_elm); - if (buf->allocated_length >= length) { + if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); goto found; } @@ -535,7 +532,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, } } spin_unlock_irq(&migf->list_lock); - buf = mlx5vf_alloc_data_buffer(migf, length, dma_dir); + buf = mlx5vf_alloc_data_buffer(migf, npages, dma_dir); found: while ((temp_buf = list_first_entry_or_null(&free_list, @@ -716,7 +713,7 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, MLX5_SET(save_vhca_state_in, in, op_mod, 0); MLX5_SET(save_vhca_state_in, in, vhca_id, mvdev->vhca_id); MLX5_SET(save_vhca_state_in, in, mkey, buf->mkey); - MLX5_SET(save_vhca_state_in, in, size, buf->allocated_length); + MLX5_SET(save_vhca_state_in, in, size, buf->npages * PAGE_SIZE); MLX5_SET(save_vhca_state_in, in, incremental, inc); MLX5_SET(save_vhca_state_in, in, set_track, track); @@ -738,8 +735,11 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, } if (!header_buf) { - header_buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + header_buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) { err = PTR_ERR(header_buf); goto err_free; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index df421dc6de04..7d4a833b6900 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -56,7 +56,7 @@ struct mlx5_vhca_data_buffer { struct sg_append_table table; loff_t 
start_pos; u64 length; - u64 allocated_length; + u32 npages; u32 mkey; enum dma_data_direction dma_dir; u8 dmaed:1; @@ -217,12 +217,12 @@ int mlx5vf_cmd_alloc_pd(struct mlx5_vf_migration_file *migf); void mlx5vf_cmd_dealloc_pd(struct mlx5_vf_migration_file *migf); void mlx5fv_cmd_clean_migf_resources(struct mlx5_vf_migration_file *migf); struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf); struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf); struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, unsigned long offset); diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 8833e60d42f5..83247f016441 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -308,6 +308,7 @@ static struct mlx5_vhca_data_buffer * mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, u8 index, size_t required_length) { + u32 npages = DIV_ROUND_UP(required_length, PAGE_SIZE); struct mlx5_vhca_data_buffer *buf = migf->buf[index]; u8 chunk_num; @@ -315,12 +316,11 @@ mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, chunk_num = buf->stop_copy_chunk_num; buf->migf->buf[index] = NULL; /* Checking whether the pre-allocated buffer can fit */ - if (buf->allocated_length >= required_length) + if (buf->npages >= npages) return buf; mlx5vf_put_data_buffer(buf); - buf = mlx5vf_get_data_buffer(buf->migf, required_length, - DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(buf->migf, npages, DMA_FROM_DEVICE); if (IS_ERR(buf)) return buf; @@ -373,7 +373,8 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, u8 *to_buff; int ret; - header_buf = mlx5vf_get_data_buffer(migf, size, DMA_NONE); + header_buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(size, PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) return PTR_ERR(header_buf); @@ -388,7 +389,7 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, to_buff = kmap_local_page(page); memcpy(to_buff, &header, sizeof(header)); header_buf->length = sizeof(header); - data.stop_copy_size = cpu_to_le64(migf->buf[0]->allocated_length); + data.stop_copy_size = cpu_to_le64(migf->buf[0]->npages * PAGE_SIZE); memcpy(to_buff + sizeof(header), &data, sizeof(data)); header_buf->length += sizeof(data); kunmap_local(to_buff); @@ -437,15 +438,20 @@ static int mlx5vf_prep_stop_copy(struct mlx5vf_pci_core_device *mvdev, num_chunks = mvdev->chunk_mode ? 
MAX_NUM_CHUNKS : 1; for (i = 0; i < num_chunks; i++) { - buf = mlx5vf_get_data_buffer(migf, inc_state_size, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer( + migf, DIV_ROUND_UP(inc_state_size, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; } migf->buf[i] = buf; - buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; @@ -553,7 +559,8 @@ static long mlx5vf_precopy_ioctl(struct file *filp, unsigned int cmd, * We finished transferring the current state and the device has a * dirty state, save a new state to be ready for. */ - buf = mlx5vf_get_data_buffer(migf, inc_length, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(inc_length, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); mlx5vf_mark_err(migf); @@ -675,8 +682,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) if (track) { /* leave the allocated buffer ready for the stop-copy phase */ - buf = mlx5vf_alloc_data_buffer(migf, - migf->buf[0]->allocated_length, DMA_FROM_DEVICE); + buf = mlx5vf_alloc_data_buffer(migf, migf->buf[0]->npages, + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_pd; @@ -917,11 +924,14 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, goto out_unlock; break; case MLX5_VF_LOAD_STATE_PREP_HEADER_DATA: - if (vhca_buf_header->allocated_length < migf->record_size) { + { + u32 npages = DIV_ROUND_UP(migf->record_size, PAGE_SIZE); + + if (vhca_buf_header->npages < npages) { mlx5vf_free_data_buffer(vhca_buf_header); - migf->buf_header[0] = mlx5vf_alloc_data_buffer(migf, - migf->record_size, DMA_NONE); + migf->buf_header[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_NONE); if (IS_ERR(migf->buf_header[0])) { ret = PTR_ERR(migf->buf_header[0]); migf->buf_header[0] = NULL; @@ -934,6 +944,7 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, vhca_buf_header->start_pos = migf->max_pos; migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER_DATA; break; + } case MLX5_VF_LOAD_STATE_READ_HEADER_DATA: ret = mlx5vf_resume_read_header_data(migf, vhca_buf_header, &buf, &len, pos, &done); @@ -944,12 +955,13 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, { u64 size = max(migf->record_size, migf->stop_copy_prep_size); + u32 npages = DIV_ROUND_UP(size, PAGE_SIZE); - if (vhca_buf->allocated_length < size) { + if (vhca_buf->npages < npages) { mlx5vf_free_data_buffer(vhca_buf); - migf->buf[0] = mlx5vf_alloc_data_buffer(migf, - size, DMA_TO_DEVICE); + migf->buf[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_TO_DEVICE); if (IS_ERR(migf->buf[0])) { ret = PTR_ERR(migf->buf[0]); migf->buf[0] = NULL; @@ -1037,8 +1049,11 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) } migf->buf[0] = buf; - buf = mlx5vf_alloc_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_alloc_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_buf; @@ -1148,7 +1163,8 @@ mlx5vf_pci_step_device_state_locked(struct mlx5vf_pci_core_device *mvdev, MLX5VF_QUERY_INC | MLX5VF_QUERY_CLEANUP); if (ret) return ERR_PTR(ret); - buf = mlx5vf_get_data_buffer(migf, size, DMA_FROM_DEVICE); + buf = 
mlx5vf_get_data_buffer(migf, + DIV_ROUND_UP(size, PAGE_SIZE), DMA_FROM_DEVICE); if (IS_ERR(buf)) return ERR_CAST(buf); /* pre_copy cleanup */

From patchwork Tue Dec 17 13:00:34 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911744
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon, Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v5 16/17] vfio/mlx5: Rewrite create mkey flow to allow better code reuse
Date: Tue, 17 Dec 2024 15:00:34 +0200
MIME-Version: 1.0

From: Leon Romanovsky

Change mkey creation to be performed in multiple steps: data allocation, DMA setup, and the actual call to hardware that creates the mkey. In the new flow, the whole input to the MKEY command is kept, which removes the need to maintain a separate array of DMA-address pointers for the receive list and, in future patches, for the send list too. Besides reducing memory use and avoiding unnecessary data movement when building the MKEY input, this prepares the code for future reuse.
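To make the split concrete, below is a minimal editor's sketch (not part of the patch) of how the receive-buffer path composes the three helpers introduced here. The helper names and signatures follow this patch; the surrounding variables (mdev, pdn, npages, page_list, mkey) come from the caller, and the error unwinding is reduced to a comment:

	u32 *mkey_in;
	int err;

	/* Step 1: allocate and prefill the CREATE_MKEY command input for npages */
	mkey_in = alloc_mkey_in(npages, pdn);
	if (!mkey_in)
		return -ENOMEM;

	/* Step 2: DMA-map the pages and write their addresses straight into
	 * the MTT area of the saved command input
	 */
	err = register_dma_pages(mdev, npages, page_list, mkey_in);

	/* Step 3: issue the command that actually creates the mkey */
	if (!err)
		err = create_mkey(mdev, npages, NULL, mkey_in, &mkey);

	/* On failure, teardown mirrors the steps in reverse:
	 * unregister_dma_pages() followed by kvfree(mkey_in).
	 */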
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 157 ++++++++++++++++++++---------------- drivers/vfio/pci/mlx5/cmd.h | 4 +- 2 files changed, 91 insertions(+), 70 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 88e76afba606..48c272ecb04f 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -313,39 +313,21 @@ static int mlx5vf_cmd_get_vhca_id(struct mlx5_core_dev *mdev, u16 function_id, return ret; } -static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, - struct mlx5_vhca_data_buffer *buf, - struct mlx5_vhca_recv_buf *recv_buf, - u32 *mkey) +static u32 *alloc_mkey_in(u32 npages, u32 pdn) { - size_t npages = buf ? buf->npages : recv_buf->npages; - int err = 0, inlen; - __be64 *mtt; + int inlen; void *mkc; u32 *in; inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(*mtt) * round_up(npages, 2); + sizeof(__be64) * round_up(npages, 2); - in = kvzalloc(inlen, GFP_KERNEL); + in = kvzalloc(inlen, GFP_KERNEL_ACCOUNT); if (!in) - return -ENOMEM; + return NULL; MLX5_SET(create_mkey_in, in, translations_octword_actual_size, DIV_ROUND_UP(npages, 2)); - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); - - if (buf) { - struct sg_dma_page_iter dma_iter; - - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); - } else { - int i; - - for (i = 0; i < npages; i++) - *mtt++ = cpu_to_be64(recv_buf->dma_addrs[i]); - } mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT); @@ -359,9 +341,30 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2)); MLX5_SET64(mkc, mkc, len, npages * PAGE_SIZE); - err = mlx5_core_create_mkey(mdev, mkey, in, inlen); - kvfree(in); - return err; + + return in; +} + +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, + struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, + u32 *mkey) +{ + __be64 *mtt; + int inlen; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + if (buf) { + struct sg_dma_page_iter dma_iter; + + for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) + *mtt++ = cpu_to_be64( + sg_page_iter_dma_address(&dma_iter)); + } + + inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -374,20 +377,28 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->npages) + if (buf->mkey_in || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); if (ret) return ret; - ret = _create_mkey(mdev, buf->migf->pdn, buf, NULL, &buf->mkey); - if (ret) + buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); + if (!buf->mkey_in) { + ret = -ENOMEM; goto err; + } - buf->dmaed = true; + ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + if (ret) + goto err_create_mkey; return 0; + +err_create_mkey: + kvfree(buf->mkey_in); + buf->mkey_in = NULL; err: dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; @@ -401,8 +412,9 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) lockdep_assert_held(&migf->mvdev->state_mutex); WARN_ON(migf->mvdev->mdev_detach); - 
if (buf->dmaed) { + if (buf->mkey_in) { mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + kvfree(buf->mkey_in); dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, buf->dma_dir, 0); } @@ -783,7 +795,7 @@ int mlx5vf_cmd_load_vhca_state(struct mlx5vf_pci_core_device *mvdev, if (mvdev->mdev_detach) return -ENOTCONN; - if (!buf->dmaed) { + if (!buf->mkey_in) { err = mlx5vf_dma_data_buffer(buf); if (err) return err; @@ -1384,56 +1396,54 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, kvfree(recv_buf->page_list); return -ENOMEM; } +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in) +{ + dma_addr_t addr; + __be64 *mtt; + int i; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_single(mdev->device, addr, PAGE_SIZE, + DMA_FROM_DEVICE); + } +} -static int register_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in) { - int i, j; + dma_addr_t addr; + __be64 *mtt; + int i; - recv_buf->dma_addrs = kvcalloc(recv_buf->npages, - sizeof(*recv_buf->dma_addrs), - GFP_KERNEL_ACCOUNT); - if (!recv_buf->dma_addrs) - return -ENOMEM; + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = 0; i < recv_buf->npages; i++) { - recv_buf->dma_addrs[i] = dma_map_page(mdev->device, - recv_buf->page_list[i], - 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, recv_buf->dma_addrs[i])) + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, + DMA_FROM_DEVICE); + if (dma_mapping_error(mdev->device, addr)) goto error; + + *mtt++ = cpu_to_be64(addr); } + return 0; error: - for (j = 0; j < i; j++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[j], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); + unregister_dma_pages(mdev, i, mkey_in); return -ENOMEM; } -static void unregister_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - for (i = 0; i < recv_buf->npages; i++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[i], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); -} - static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) { struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + kvfree(recv_buf->mkey_in); free_recv_pages(&qp->recv_buf); } @@ -1449,18 +1459,29 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, if (err < 0) return err; - err = register_dma_recv_pages(mdev, recv_buf); - if (err) + recv_buf->mkey_in = alloc_mkey_in(npages, pdn); + if (!recv_buf->mkey_in) { + err = -ENOMEM; goto end; + } + + err = register_dma_pages(mdev, npages, recv_buf->page_list, + recv_buf->mkey_in); + if (err) + goto err_register_dma; - err = _create_mkey(mdev, pdn, NULL, recv_buf, &recv_buf->mkey); + err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, + &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in); +err_register_dma: + kvfree(recv_buf->mkey_in); + recv_buf->mkey_in = NULL; end: free_recv_pages(recv_buf); 
return err; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 7d4a833b6900..25dd6ff54591 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -58,8 +58,8 @@ struct mlx5_vhca_data_buffer { u64 length; u32 npages; u32 mkey; + u32 *mkey_in; enum dma_data_direction dma_dir; - u8 dmaed:1; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; @@ -133,8 +133,8 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; - dma_addr_t *dma_addrs; u32 next_rq_offset; + u32 *mkey_in; u32 mkey; };

From patchwork Tue Dec 17 13:00:35 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13911745
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon, Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v5 17/17] vfio/mlx5: Enable the DMA link API
Date: Tue, 17 Dec 2024 15:00:35 +0200
MIME-Version: 1.0

From: Leon Romanovsky

Remove the intermediate scatter-gather table completely and enable the new DMA link API.
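The heart of the change is register_dma_pages(): instead of building a scatterlist and calling dma_map_sgtable(), the driver now tries to allocate one contiguous IOVA range and link every page into it, falling back to per-page dma_map_page() when the IOVA path is unavailable. A condensed editor's sketch of that loop (simplified from the cmd.c hunk below, error unwinding omitted):

	int i, err = 0;

	if (dma_iova_try_alloc(mdev->device, state, 0, npages * PAGE_SIZE)) {
		/* IOVA path: the device sees one flat range starting at state->addr */
		dma_addr_t addr = state->addr;

		for (i = 0; i < npages; i++) {
			err = dma_iova_link(mdev->device, state,
					    page_to_phys(page_list[i]),
					    i * PAGE_SIZE, PAGE_SIZE, dir, 0);
			if (err)
				break;
			*mtt++ = cpu_to_be64(addr);
			addr += PAGE_SIZE;
		}
		if (!err)
			err = dma_iova_sync(mdev->device, state, 0,
					    npages * PAGE_SIZE);
	} else {
		/* Fallback: classic per-page mapping, one DMA address per MTT entry */
		for (i = 0; i < npages; i++) {
			dma_addr_t addr = dma_map_page(mdev->device, page_list[i],
						       0, PAGE_SIZE, dir);

			err = dma_mapping_error(mdev->device, addr);
			if (err)
				break;
			*mtt++ = cpu_to_be64(addr);
		}
	}

On teardown the same two cases are distinguished with dma_use_iova(): dma_iova_destroy() for the linked range, or per-page unmapping for the fallback, mirroring the patch's unregister_dma_pages().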
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 299 ++++++++++++++++------------------- drivers/vfio/pci/mlx5/cmd.h | 21 ++- drivers/vfio/pci/mlx5/main.c | 31 ---- 3 files changed, 148 insertions(+), 203 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 48c272ecb04f..fba20abf240a 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -345,26 +345,82 @@ static u32 *alloc_mkey_in(u32 npages, u32 pdn) return in; } -static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, - struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, u32 *mkey_in, u32 *mkey) { + int inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +} + +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in, struct dma_iova_state *state, + enum dma_data_direction dir) +{ + dma_addr_t addr; __be64 *mtt; - int inlen; + int i; - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - if (buf) { - struct sg_dma_page_iter dma_iter; + WARN_ON_ONCE(dir == DMA_NONE); - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64( - sg_page_iter_dma_address(&dma_iter)); + if (dma_use_iova(state)) { + dma_iova_destroy(mdev->device, state, npages * PAGE_SIZE, dir, + 0); + } else { + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, + klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_page(mdev->device, addr, PAGE_SIZE, dir); + } } +} - inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(__be64) * round_up(npages, 2); +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in, + struct dma_iova_state *state, + enum dma_data_direction dir) +{ + dma_addr_t addr; + size_t mapped = 0; + __be64 *mtt; + int i, err; - return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); + WARN_ON_ONCE(dir == DMA_NONE); + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + + if (dma_iova_try_alloc(mdev->device, state, 0, npages * PAGE_SIZE)) { + addr = state->addr; + for (i = 0; i < npages; i++) { + err = dma_iova_link(mdev->device, state, + page_to_phys(page_list[i]), mapped, + PAGE_SIZE, dir, 0); + if (err) + goto error; + *mtt++ = cpu_to_be64(addr); + addr += PAGE_SIZE; + mapped += PAGE_SIZE; + } + err = dma_iova_sync(mdev->device, state, 0, mapped); + if (err) + goto error; + } else { + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, + PAGE_SIZE, dir); + err = dma_mapping_error(mdev->device, addr); + if (err) + goto error; + *mtt++ = cpu_to_be64(addr); + } + } + return 0; + +error: + unregister_dma_pages(mdev, i, mkey_in, state, dir); + return err; } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -380,98 +436,91 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (buf->mkey_in || !buf->npages) return -EINVAL; - ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); - if (ret) - return ret; - buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); - if (!buf->mkey_in) { - ret = -ENOMEM; - goto err; - } + if (!buf->mkey_in) + return -ENOMEM; - ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + ret = register_dma_pages(mdev, buf->npages, buf->page_list, + buf->mkey_in, &buf->state, buf->dma_dir); + if (ret) + goto 
err_register_dma; + + ret = create_mkey(mdev, buf->npages, buf->mkey_in, &buf->mkey); if (ret) goto err_create_mkey; return 0; err_create_mkey: + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, &buf->state, + buf->dma_dir); +err_register_dma: kvfree(buf->mkey_in); buf->mkey_in = NULL; -err: - dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; } +static void free_page_list(u32 npages, struct page **page_list) +{ + int i; + + /* Undo alloc_pages_bulk_array() */ + for (i = npages - 1; i >= 0; i--) + __free_page(page_list[i]); + + kvfree(page_list); +} + void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) { - struct mlx5_vf_migration_file *migf = buf->migf; - struct sg_page_iter sg_iter; + struct mlx5vf_pci_core_device *mvdev = buf->migf->mvdev; + struct mlx5_core_dev *mdev = mvdev->mdev; - lockdep_assert_held(&migf->mvdev->state_mutex); - WARN_ON(migf->mvdev->mdev_detach); + lockdep_assert_held(&mvdev->state_mutex); + WARN_ON(mvdev->mdev_detach); if (buf->mkey_in) { - mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + mlx5_core_destroy_mkey(mdev, buf->mkey); + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, + &buf->state, buf->dma_dir); kvfree(buf->mkey_in); - dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, - buf->dma_dir, 0); } - /* Undo alloc_pages_bulk_array() */ - for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) - __free_page(sg_page_iter_page(&sg_iter)); - sg_free_append_table(&buf->table); + free_page_list(buf->npages, buf->page_list); kfree(buf); } -static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, - unsigned int npages) +static int mlx5vf_add_pages(struct page ***page_list, unsigned int npages) { - unsigned int to_alloc = npages; - struct page **page_list; - unsigned long filled; - unsigned int to_fill; - int ret; + unsigned int filled, done = 0; int i; - to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); - page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT); - if (!page_list) + *page_list = + kvcalloc(npages, sizeof(struct page *), GFP_KERNEL_ACCOUNT); + if (!*page_list) return -ENOMEM; - do { - filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - page_list); - if (!filled) { - ret = -ENOMEM; + for (;;) { + filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, + npages - done, + *page_list + done); + if (!filled) goto err; - } - to_alloc -= filled; - ret = sg_alloc_append_table_from_pages( - &buf->table, page_list, filled, 0, - filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, - GFP_KERNEL_ACCOUNT); - if (ret) - goto err_append; - buf->npages += filled; - /* clean input for another bulk allocation */ - memset(page_list, 0, filled * sizeof(*page_list)); - to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*page_list)); - } while (to_alloc > 0); + done += filled; + if (done == npages) + break; + } - kvfree(page_list); return 0; -err_append: - for (i = filled - 1; i >= 0; i--) - __free_page(page_list[i]); err: - kvfree(page_list); - return ret; + for (i = 0; i < done; i++) + __free_page(*page_list[i]); + + kvfree(*page_list); + *page_list = NULL; + return -ENOMEM; } struct mlx5_vhca_data_buffer * @@ -488,10 +537,12 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, buf->dma_dir = dma_dir; buf->migf = migf; if (npages) { - ret = mlx5vf_add_migration_pages(buf, npages); + ret = mlx5vf_add_pages(&buf->page_list, npages); if (ret) goto end; + buf->npages = npages; + if (dma_dir != DMA_NONE) { ret = 
mlx5vf_dma_data_buffer(buf); if (ret) @@ -1350,101 +1401,16 @@ static void mlx5vf_destroy_qp(struct mlx5_core_dev *mdev, kfree(qp); } -static void free_recv_pages(struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - /* Undo alloc_pages_bulk_array() */ - for (i = 0; i < recv_buf->npages; i++) - __free_page(recv_buf->page_list[i]); - - kvfree(recv_buf->page_list); -} - -static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, - unsigned int npages) -{ - unsigned int filled = 0, done = 0; - int i; - - recv_buf->page_list = kvcalloc(npages, sizeof(*recv_buf->page_list), - GFP_KERNEL_ACCOUNT); - if (!recv_buf->page_list) - return -ENOMEM; - - for (;;) { - filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, - npages - done, - recv_buf->page_list + done); - if (!filled) - goto err; - - done += filled; - if (done == npages) - break; - } - - recv_buf->npages = npages; - return 0; - -err: - for (i = 0; i < npages; i++) { - if (recv_buf->page_list[i]) - __free_page(recv_buf->page_list[i]); - } - - kvfree(recv_buf->page_list); - return -ENOMEM; -} -static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = npages - 1; i >= 0; i--) { - addr = be64_to_cpu(mtt[i]); - dma_unmap_single(mdev->device, addr, PAGE_SIZE, - DMA_FROM_DEVICE); - } -} - -static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - struct page **page_list, u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - - for (i = 0; i < npages; i++) { - addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, addr)) - goto error; - - *mtt++ = cpu_to_be64(addr); - } - - return 0; - -error: - unregister_dma_pages(mdev, i, mkey_in); - return -ENOMEM; -} - static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) { struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in, + &recv_buf->state, DMA_FROM_DEVICE); kvfree(recv_buf->mkey_in); - free_recv_pages(&qp->recv_buf); + free_page_list(recv_buf->npages, recv_buf->page_list); } static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, @@ -1455,10 +1421,12 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; int err; - err = alloc_recv_pages(recv_buf, npages); - if (err < 0) + err = mlx5vf_add_pages(&recv_buf->page_list, npages); + if (err) return err; + recv_buf->npages = npages; + recv_buf->mkey_in = alloc_mkey_in(npages, pdn); if (!recv_buf->mkey_in) { err = -ENOMEM; @@ -1466,24 +1434,25 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, } err = register_dma_pages(mdev, npages, recv_buf->page_list, - recv_buf->mkey_in); + recv_buf->mkey_in, &recv_buf->state, + DMA_FROM_DEVICE); if (err) goto err_register_dma; - err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, - &recv_buf->mkey); + err = create_mkey(mdev, npages, recv_buf->mkey_in, &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_pages(mdev, npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in, &recv_buf->state, + DMA_FROM_DEVICE); 
err_register_dma: kvfree(recv_buf->mkey_in); recv_buf->mkey_in = NULL; end: - free_recv_pages(recv_buf); + free_page_list(npages, recv_buf->page_list); return err; } diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 25dd6ff54591..d7821b5ca772 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -53,7 +53,8 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { - struct sg_append_table table; + struct page **page_list; + struct dma_iova_state state; loff_t start_pos; u64 length; u32 npages; @@ -63,10 +64,6 @@ struct mlx5_vhca_data_buffer { u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; - /* Optimize mlx5vf_get_migration_page() for sequential access */ - struct scatterlist *last_offset_sg; - unsigned int sg_last_entry; - unsigned long last_offset; }; struct mlx5vf_async_data { @@ -133,6 +130,7 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; + struct dma_iova_state state; u32 next_rq_offset; u32 *mkey_in; u32 mkey; @@ -224,8 +222,17 @@ struct mlx5_vhca_data_buffer * mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, enum dma_data_direction dma_dir); void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf); -struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, - unsigned long offset); +static inline struct page * +mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, + unsigned long offset) +{ + int page_entry = offset / PAGE_SIZE; + + if (page_entry >= buf->npages) + return NULL; + + return buf->page_list[page_entry]; +} void mlx5vf_state_mutex_unlock(struct mlx5vf_pci_core_device *mvdev); void mlx5vf_disable_fds(struct mlx5vf_pci_core_device *mvdev, enum mlx5_vf_migf_state *last_save_state); diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 83247f016441..c528932e5739 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -34,37 +34,6 @@ static struct mlx5vf_pci_core_device *mlx5vf_drvdata(struct pci_dev *pdev) core_device); } -struct page * -mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, - unsigned long offset) -{ - unsigned long cur_offset = 0; - struct scatterlist *sg; - unsigned int i; - - /* All accesses are sequential */ - if (offset < buf->last_offset || !buf->last_offset_sg) { - buf->last_offset = 0; - buf->last_offset_sg = buf->table.sgt.sgl; - buf->sg_last_entry = 0; - } - - cur_offset = buf->last_offset; - - for_each_sg(buf->last_offset_sg, sg, - buf->table.sgt.orig_nents - buf->sg_last_entry, i) { - if (offset < sg->length + cur_offset) { - buf->last_offset_sg = sg; - buf->sg_last_entry += i; - buf->last_offset = cur_offset; - return nth_page(sg_page(sg), - (offset - cur_offset) / PAGE_SIZE); - } - cur_offset += sg->length; - } - return NULL; -} - static void mlx5vf_disable_fd(struct mlx5_vf_migration_file *migf) { mutex_lock(&migf->lock);