From patchwork Tue Mar 5 10:15:13 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13581930
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
 Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
 Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
 Alex Williamson, Jérôme Glisse, Andrew Morton,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
 Damien Le Moal, Amir Goldstein, "josef@toxicpanda.com",
 "Martin K. Petersen", "daniel@iogearbox.net", Dan Williams,
 "jack@suse.com", Zhu Yanjun
Subject: [RFC 03/16] dma-mapping: provide callbacks to link/unlink pages to
 specific IOVA
Date: Tue, 5 Mar 2024 12:15:13 +0200
Message-ID: 
In-Reply-To: 
References: 
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0

From: Leon Romanovsky

Introduce a new DMA link/unlink API to provide a way for advanced users
to directly map/unmap pages without the need to allocate an IOVA on
every map call.
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-map-ops.h | 10 +++++++
 include/linux/dma-mapping.h | 13 +++++++++
 kernel/dma/debug.h          |  2 ++
 kernel/dma/direct.h         |  3 ++
 kernel/dma/mapping.c        | 57 +++++++++++++++++++++++++++++++++++++
 5 files changed, 85 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index bd605b44bb57..fd03a080df1e 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -86,6 +86,13 @@ struct dma_map_ops {
 
 	dma_addr_t (*alloc_iova)(struct device *dev, size_t size);
 	void (*free_iova)(struct device *dev, dma_addr_t dma_addr, size_t size);
+	dma_addr_t (*link_range)(struct device *dev, struct page *page,
+				 unsigned long offset, dma_addr_t addr,
+				 size_t size, enum dma_data_direction dir,
+				 unsigned long attrs);
+	void (*unlink_range)(struct device *dev, dma_addr_t dma_handle,
+			     size_t size, enum dma_data_direction dir,
+			     unsigned long attrs);
 };
 
 #ifdef CONFIG_DMA_OPS
@@ -428,6 +435,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif
 
+#define arch_dma_link_range_direct arch_dma_map_page_direct
+#define arch_dma_unlink_range_direct arch_dma_unmap_page_direct
+
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		bool coherent);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 176fb8a86d63..91cc084adb53 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -113,6 +113,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 
 int dma_alloc_iova(struct dma_iova_attrs *iova);
 void dma_free_iova(struct dma_iova_attrs *iova);
+dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+			  struct dma_iova_attrs *iova, dma_addr_t dma_offset);
+void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset);
 
 dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		size_t offset, size_t size, enum dma_data_direction dir,
@@ -179,6 +182,16 @@ static inline int dma_alloc_iova(struct dma_iova_attrs *iova)
 static inline void dma_free_iova(struct dma_iova_attrs *iova)
 {
 }
+static inline dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+					struct dma_iova_attrs *iova,
+					dma_addr_t dma_offset)
+{
+	return DMA_MAPPING_ERROR;
+}
+static inline void dma_unlink_range(struct dma_iova_attrs *iova,
+				    dma_addr_t dma_offset)
+{
+}
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae..3d529f355c6d 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -127,4 +127,6 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev,
 {
 }
 #endif /* CONFIG_DMA_API_DEBUG */
+#define debug_dma_link_range debug_dma_map_page
+#define debug_dma_unlink_range debug_dma_unmap_page
 #endif /* _KERNEL_DMA_DEBUG_H */
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 18d346118fe8..1c30e1cd607a 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -125,4 +125,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 		swiotlb_tbl_unmap_single(dev, phys, size, dir,
 					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
 }
+
+#define dma_direct_link_range dma_direct_map_page
+#define dma_direct_unlink_range dma_direct_unmap_page
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index b6b27bab90f3..f989c64622c2 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -213,6 +213,63 @@ void dma_free_iova(struct dma_iova_attrs *iova)
 }
 EXPORT_SYMBOL(dma_free_iova);
 
+/**
+ * dma_link_range - Link a physical page to a DMA address
+ * @page: The page to be mapped
+ * @offset: The offset within the page
+ * @iova: Preallocated IOVA attributes
+ * @dma_offset: DMA offset from which this page needs to be linked
+ *
+ * dma_alloc_iova() allocates IOVA based on the size specified by the user in
+ * iova->size. Call this function after IOVA allocation to link @page from
+ * @offset to get the DMA address. Note that the very first call to this
+ * function will have @dma_offset set to 0 in the IOVA space allocated from
+ * dma_alloc_iova(). For subsequent calls to this function on the same @iova,
+ * @dma_offset needs to be advanced by the caller with the size of the
+ * previous page that was linked + the DMA address returned for the previous
+ * page that was linked by this function.
+ */
+dma_addr_t dma_link_range(struct page *page, unsigned long offset,
+			  struct dma_iova_attrs *iova, dma_addr_t dma_offset)
+{
+	struct device *dev = iova->dev;
+	size_t size = iova->size;
+	enum dma_data_direction dir = iova->dir;
+	unsigned long attrs = iova->attrs;
+	dma_addr_t addr = iova->addr + dma_offset;
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (dma_map_direct(dev, ops) ||
+	    arch_dma_link_range_direct(dev, page_to_phys(page) + offset + size))
+		addr = dma_direct_link_range(dev, page, offset, size, dir, attrs);
+	else if (ops->link_range)
+		addr = ops->link_range(dev, page, offset, addr, size, dir, attrs);
+
+	kmsan_handle_dma(page, offset, size, dir);
+	debug_dma_link_range(dev, page, offset, size, dir, addr, attrs);
+	return addr;
+}
+EXPORT_SYMBOL(dma_link_range);
+
+void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset)
+{
+	struct device *dev = iova->dev;
+	size_t size = iova->size;
+	enum dma_data_direction dir = iova->dir;
+	unsigned long attrs = iova->attrs;
+	dma_addr_t addr = iova->addr + dma_offset;
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (dma_map_direct(dev, ops) ||
+	    arch_dma_unlink_range_direct(dev, addr + size))
+		dma_direct_unlink_range(dev, addr, size, dir, attrs);
+	else if (ops->unlink_range)
+		ops->unlink_range(dev, addr, size, dir, attrs);
+
+	debug_dma_unlink_range(dev, addr, size, dir);
+}
+EXPORT_SYMBOL(dma_unlink_range);
+
 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
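
For reviewers, a minimal usage sketch (not part of the patch) of how a caller
might drive the new link/unlink flow. It relies only on the declarations added
above (dma_alloc_iova(), dma_link_range(), dma_unlink_range(), dma_free_iova()
and the dma_iova_attrs fields they dereference) plus the usual
<linux/dma-mapping.h> include; the caller-provided page array, the
DMA_BIDIRECTIONAL direction and advancing @dma_offset by PAGE_SIZE per linked
page are illustrative assumptions, not something this patch mandates.

/* Hypothetical caller, not part of this patch: link npages caller-owned
 * pages into one preallocated IOVA range.
 */
static int example_link_pages(struct device *dev, struct page **pages,
			      unsigned int npages, struct dma_iova_attrs *iova)
{
	dma_addr_t dma_offset = 0;
	unsigned int i;
	int ret;

	iova->dev = dev;
	iova->size = (size_t)npages << PAGE_SHIFT;	/* whole range at once */
	iova->dir = DMA_BIDIRECTIONAL;			/* assumed direction */
	iova->attrs = 0;

	/* Single IOVA allocation for the whole range ... */
	ret = dma_alloc_iova(iova);
	if (ret)
		return ret;

	/* ... then link each page at its offset inside that range. */
	for (i = 0; i < npages; i++) {
		dma_addr_t addr = dma_link_range(pages[i], 0, iova, dma_offset);

		ret = dma_mapping_error(dev, addr);
		if (ret)
			goto err_unlink;
		/* Assumption: advance by the size of the page just linked. */
		dma_offset += PAGE_SIZE;
	}
	return 0;

err_unlink:
	while (dma_offset) {
		dma_offset -= PAGE_SIZE;
		dma_unlink_range(iova, dma_offset);
	}
	dma_free_iova(iova);
	return ret;
}

Teardown would mirror this: dma_unlink_range() once per linked page, followed
by a single dma_free_iova() for the range.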