From patchwork Tue Mar 5 11:18:34 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13582203
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
	Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
	Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
	Damien Le Moal, Amir Goldstein, josef@toxicpanda.com,
	"Martin K. Petersen", daniel@iogearbox.net, Dan Williams,
	jack@suse.com, Zhu Yanjun
Subject: [RFC RESEND 03/16] dma-mapping: provide callbacks to link/unlink pages to specific IOVA
Date: Tue, 5 Mar 2024 13:18:34 +0200
X-Mailer: git-send-email 2.44.0
X-Mailing-List: linux-rdma@vger.kernel.org

From: Leon Romanovsky

Introduce a new DMA link/unlink API that gives advanced users a way to
map/unmap pages directly, without the need to allocate an IOVA on every
map call.
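For illustration, a minimal caller sketch of the intended flow follows. It is
hypothetical and not part of this patch: it assumes the struct dma_iova_attrs
fields introduced earlier in this series (dev, size, dir, addr), PAGE_SIZE-sized
entries in an illustrative pages[] array, and that dma_offset is a running byte
offset into the allocated IOVA range:

	/* Hypothetical caller sketch, not part of this patch. */
	struct dma_iova_attrs iova = {
		.dev = dev,
		.size = nr_pages * PAGE_SIZE,
		.dir = DMA_BIDIRECTIONAL,
	};
	dma_addr_t dma_offset = 0;
	dma_addr_t addr;
	int i, ret;

	/* Allocate the whole IOVA range once up front... */
	ret = dma_alloc_iova(&iova);
	if (ret)
		return ret;

	/*
	 * ...then link each page at its offset within the range, with no
	 * per-page IOVA allocation.
	 */
	for (i = 0; i < nr_pages; i++) {
		addr = dma_link_range(pages[i], 0, &iova, dma_offset);
		if (dma_mapping_error(iova.dev, addr))
			goto err_unlink; /* unwind already-linked pages */
		dma_offset += PAGE_SIZE;
	}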
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-map-ops.h | 10 +++++++
 include/linux/dma-mapping.h | 13 +++++++++
 kernel/dma/debug.h          |  2 ++
 kernel/dma/direct.h         |  3 ++
 kernel/dma/mapping.c        | 57 +++++++++++++++++++++++++++++++++++++
 5 files changed, 85 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index bd605b44bb57..fd03a080df1e 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -86,6 +86,13 @@ struct dma_map_ops {
 	dma_addr_t (*alloc_iova)(struct device *dev, size_t size);
 	void (*free_iova)(struct device *dev, dma_addr_t dma_addr,
 			  size_t size);
+	dma_addr_t (*link_range)(struct device *dev, struct page *page,
+				 unsigned long offset, dma_addr_t addr,
+				 size_t size, enum dma_data_direction dir,
+				 unsigned long attrs);
+	void (*unlink_range)(struct device *dev, dma_addr_t dma_handle,
+			     size_t size, enum dma_data_direction dir,
+			     unsigned long attrs);
 };

 #ifdef CONFIG_DMA_OPS
@@ -428,6 +435,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif

+#define arch_dma_link_range_direct arch_dma_map_page_direct
+#define arch_dma_unlink_range_direct arch_dma_unmap_page_direct
+
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			bool coherent);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 176fb8a86d63..91cc084adb53 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -113,6 +113,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

 int dma_alloc_iova(struct dma_iova_attrs *iova);
 void dma_free_iova(struct dma_iova_attrs *iova);
+dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+			  struct dma_iova_attrs *iova, dma_addr_t dma_offset);
+void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset);
 dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		size_t offset, size_t size, enum dma_data_direction dir,
@@ -179,6 +182,16 @@ static inline int dma_alloc_iova(struct dma_iova_attrs *iova)
 static inline void dma_free_iova(struct dma_iova_attrs *iova)
 {
 }
+static inline dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+					struct dma_iova_attrs *iova,
+					dma_addr_t dma_offset)
+{
+	return DMA_MAPPING_ERROR;
+}
+static inline void dma_unlink_range(struct dma_iova_attrs *iova,
+				    dma_addr_t dma_offset)
+{
+}
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae..3d529f355c6d 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -127,4 +127,6 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev,
 {
 }
 #endif /* CONFIG_DMA_API_DEBUG */
+#define debug_dma_link_range debug_dma_map_page
+#define debug_dma_unlink_range debug_dma_unmap_page
 #endif /* _KERNEL_DMA_DEBUG_H */
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 18d346118fe8..1c30e1cd607a 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -125,4 +125,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 		swiotlb_tbl_unmap_single(dev, phys, size, dir,
 					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
 }
+
+#define dma_direct_link_range dma_direct_map_page
+#define dma_direct_unlink_range dma_direct_unmap_page
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index b6b27bab90f3..f989c64622c2 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -213,6 +213,63 @@ void dma_free_iova(struct dma_iova_attrs *iova)
 }
 EXPORT_SYMBOL(dma_free_iova);

+/**
+ * dma_link_range - Link a physical page to DMA address
+ * @page: The page to be mapped
+ * @offset: The offset within the page
+ * @iova: Preallocated IOVA attributes
+ * @dma_offset: DMA offset from which this page needs to be linked
+ *
+ * dma_alloc_iova() allocates IOVA based on the size specified by the user in
+ * iova->size. Call this function after IOVA allocation to link @page from
+ * @offset to get the DMA address. Note that the very first call to this
+ * function will have @dma_offset set to 0 in the IOVA space allocated from
+ * dma_alloc_iova(). For subsequent calls to this function on the same @iova,
+ * @dma_offset needs to be advanced by the caller by the size of the previous
+ * page that was linked + the DMA address returned for that previously linked
+ * page.
+ */
+dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+			  struct dma_iova_attrs *iova, dma_addr_t dma_offset)
+{
+	struct device *dev = iova->dev;
+	size_t size = iova->size;
+	enum dma_data_direction dir = iova->dir;
+	unsigned long attrs = iova->attrs;
+	dma_addr_t addr = iova->addr + dma_offset;
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (dma_map_direct(dev, ops) ||
+	    arch_dma_link_range_direct(dev, page_to_phys(page) + offset + size))
+		addr = dma_direct_link_range(dev, page, offset, size, dir, attrs);
+	else if (ops->link_range)
+		addr = ops->link_range(dev, page, offset, addr, size, dir, attrs);
+
+	kmsan_handle_dma(page, offset, size, dir);
+	debug_dma_link_range(dev, page, offset, size, dir, addr, attrs);
+	return addr;
+}
+EXPORT_SYMBOL(dma_link_range);
+
+void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset)
+{
+	struct device *dev = iova->dev;
+	size_t size = iova->size;
+	enum dma_data_direction dir = iova->dir;
+	unsigned long attrs = iova->attrs;
+	dma_addr_t addr = iova->addr + dma_offset;
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (dma_map_direct(dev, ops) ||
+	    arch_dma_unlink_range_direct(dev, addr + size))
+		dma_direct_unlink_range(dev, addr, size, dir, attrs);
+	else if (ops->unlink_range)
+		ops->unlink_range(dev, addr, size, dir, attrs);
+
+	debug_dma_unlink_range(dev, addr, size, dir);
+}
+EXPORT_SYMBOL(dma_unlink_range);
+
 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
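As a companion to the caller sketch in the commit message, a hypothetical
teardown sequence could look like the following. It is not part of this patch
and assumes the same running byte offset (dma_offset) that was used at link
time:

	/* Hypothetical teardown sketch, not part of this patch. */
	dma_addr_t dma_offset = 0;
	int i;

	/* Unlink each page at the offset it was linked at... */
	for (i = 0; i < nr_pages; i++) {
		dma_unlink_range(&iova, dma_offset);
		dma_offset += PAGE_SIZE;
	}
	/* ...and release the IOVA range itself last. */
	dma_free_iova(&iova);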