From patchwork Sun Nov 10 13:46:56 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13869869
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
	Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
	Jérôme Glisse, Andrew Morton, Jonathan Corbet,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	Randy Dunlap
Subject: [PATCH v3 09/17] docs: core-api: document the IOVA-based API
Date: Sun, 10 Nov 2024 15:46:56 +0200
X-Mailer: git-send-email 2.47.0
X-Mailing-List: linux-pci@vger.kernel.org

From: Christoph Hellwig

Add an explanation of the newly added IOVA-based mapping API.
Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-api.rst | 70 ++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 8e3cce3d0a23..61d6f4fe3d88 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -530,6 +530,76 @@ routines, e.g.:::
 		....
 	}
 
+Part Ie - IOVA-based DMA mappings
+---------------------------------
+
+These APIs allow a very efficient mapping when using an IOMMU.  They are an
+optional path that requires extra code and are only recommended for drivers
+where DMA mapping performance, or the space usage for storing the DMA
+addresses, matters.  All the considerations from the previous section apply
+here as well.
+
+::
+
+	bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+			phys_addr_t phys, size_t size);
+
+Is used to try to allocate IOVA space for a mapping operation.  If it returns
+false this API can't be used for the given device and the normal streaming
+DMA mapping API should be used.  The ``struct dma_iova_state`` is allocated
+by the driver and must be kept around until unmap time.
+
+::
+
+	static inline bool dma_use_iova(struct dma_iova_state *state)
+
+Can be used by the driver to check if the IOVA-based API is used after a
+call to ``dma_iova_try_alloc()``.  This can be useful in the unmap path.
+
+::
+
+	int dma_iova_link(struct device *dev, struct dma_iova_state *state,
+			phys_addr_t phys, size_t offset, size_t size,
+			enum dma_data_direction dir, unsigned long attrs);
+
+Is used to link ranges to the IOVA previously allocated.  The start of every
+range passed to all but the first call to ``dma_iova_link()`` for a given
+state must be aligned to the DMA merge boundary returned by
+``dma_get_merge_boundary()``, and the size of every range but the last must
+be aligned to the DMA merge boundary as well.
+
+::
+
+	int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
+			size_t offset, size_t size);
+
+Must be called to sync the IOMMU page tables for the IOVA range mapped by one
+or more calls to ``dma_iova_link()``.
+
+For drivers that use a one-shot mapping, all ranges can be unmapped and the
+IOVA freed by calling:
+
+::
+
+	void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
+			enum dma_data_direction dir, unsigned long attrs);
+
+Alternatively, drivers can dynamically manage the IOVA space by unmapping
+and mapping individual regions.  In that case
+
+::
+
+	void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
+			size_t offset, size_t size, enum dma_data_direction dir,
+			unsigned long attrs);
+
+is used to unmap a range previously mapped, and
+
+::
+
+	void dma_iova_free(struct device *dev, struct dma_iova_state *state);
+
+is used to free the IOVA space.  All regions must have been unmapped using
+``dma_iova_unlink()`` before calling ``dma_iova_free()``.
+
 Part II - Non-coherent DMA allocations
 --------------------------------------
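
Not part of the patch itself, but to make the one-shot flow above concrete,
here is a minimal sketch of how a driver might use it for a single physically
contiguous buffer.  The function names, the DMA_TO_DEVICE direction and the
error codes are hypothetical; only the dma_iova_* calls follow the signatures
documented above:

	#include <linux/dma-mapping.h>

	/*
	 * Illustrative sketch only: map one physically contiguous buffer in a
	 * single shot.  "dev", "phys" and "size" are assumed to be provided by
	 * the (hypothetical) caller.
	 */
	static int my_driver_map(struct device *dev, struct dma_iova_state *state,
				 phys_addr_t phys, size_t size)
	{
		int ret;

		if (!dma_iova_try_alloc(dev, state, phys, size))
			return -EOPNOTSUPP;	/* fall back to the streaming DMA API */

		/* Link the whole buffer at offset 0 of the allocated IOVA. */
		ret = dma_iova_link(dev, state, phys, 0, size, DMA_TO_DEVICE, 0);
		if (ret) {
			dma_iova_free(dev, state);	/* nothing linked yet */
			return ret;
		}

		/* Flush the IOMMU page table updates before the device uses the IOVA. */
		ret = dma_iova_sync(dev, state, 0, size);
		if (ret) {
			dma_iova_destroy(dev, state, DMA_TO_DEVICE, 0);
			return ret;
		}

		/* ... hand the DMA address to the device and perform the I/O ... */
		return 0;
	}

	static void my_driver_unmap(struct device *dev, struct dma_iova_state *state)
	{
		if (dma_use_iova(state))
			dma_iova_destroy(dev, state, DMA_TO_DEVICE, 0);
		/* else: tear down whatever streaming DMA mapping was used instead */
	}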
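
Likewise only a sketch, under the same assumptions: a driver that builds one
IOVA range out of several physically discontiguous chunks, and tears it down
with ``dma_iova_unlink()``/``dma_iova_free()`` rather than
``dma_iova_destroy()``.  "chunk_phys", "chunk_len" and "nr_chunks" are
hypothetical driver data:

	/*
	 * Illustrative sketch only: link nr_chunks discontiguous chunks back to
	 * back into one IOVA allocation.  Per the text above, every chunk but
	 * the first must start, and every chunk but the last must end, aligned
	 * to the boundary reported by dma_get_merge_boundary().
	 */
	static int my_driver_link_chunks(struct device *dev,
					 struct dma_iova_state *state,
					 phys_addr_t *chunk_phys,
					 size_t *chunk_len, int nr_chunks)
	{
		size_t offset = 0;
		int i, ret;

		for (i = 0; i < nr_chunks; i++) {
			ret = dma_iova_link(dev, state, chunk_phys[i], offset,
					    chunk_len[i], DMA_TO_DEVICE, 0);
			if (ret)
				goto err_unlink;
			offset += chunk_len[i];
		}

		/* One sync covers everything linked so far. */
		ret = dma_iova_sync(dev, state, 0, offset);
		if (ret)
			goto err_unlink;
		return 0;

	err_unlink:
		/* Unmap whatever was linked, then release the IOVA space. */
		if (offset)
			dma_iova_unlink(dev, state, 0, offset, DMA_TO_DEVICE, 0);
		dma_iova_free(dev, state);
		return ret;
	}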