From patchwork Sun Oct 27 14:21:01 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852554
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
    Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
    Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
    Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 01/18] PCI/P2PDMA: refactor the p2pdma mapping helpers
Date: Sun, 27 Oct 2024 16:21:01 +0200

From: Christoph Hellwig

The current scheme with a single helper to determine the P2P status and
map a scatterlist segment forces users to always use the map_sg helper
to DMA map, which we're trying to get away from because scatterlists are
very cache inefficient.

Refactor the code so that there is a single helper that checks the P2P
state for a page, including the result that it is not a P2P page to
simplify the callers, and a second one to perform the address translation
for a bus mapped P2P transfer that does not depend on the scatterlist
structure.
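As a rough sketch of the calling convention this refactor enables (the
dev/page/offset/len/dir variables and the error handling below are
illustrative only and not taken from the patch), a non-scatterlist caller
could do:

	struct pci_p2pdma_map_state p2pdma_state = {};
	dma_addr_t dma_addr;

	switch (pci_p2pdma_state(&p2pdma_state, dev, page)) {
	case PCI_P2PDMA_MAP_NONE:
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* normal memory and host-bridge P2P both go through the DMA API */
		dma_addr = dma_map_page(dev, page, offset, len, dir);
		break;
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* program the DMA engine with the PCI bus address directly */
		dma_addr = pci_p2pdma_bus_addr_map(&p2pdma_state,
				page_to_phys(page) + offset);
		break;
	default:
		return -EREMOTEIO;
	}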
Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
Reviewed-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
---
 drivers/iommu/dma-iommu.c   | 46 ++++++++++++++++-----------------
 drivers/pci/p2pdma.c        | 38 ++++-----------------
 include/linux/dma-map-ops.h | 51 +++++++++++++++++++++++++++++--------
 kernel/dma/direct.c         | 42 +++++++++++++++---------------
 4 files changed, 89 insertions(+), 88 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 2a9fa0c8cc00..6e50023c8112 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1382,7 +1382,6 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
     struct scatterlist *s, *prev = NULL;
     int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
     struct pci_p2pdma_map_state p2pdma_state = {};
-    enum pci_p2pdma_map_type map;
     dma_addr_t iova;
     size_t iova_len = 0;
     unsigned long mask = dma_get_seg_boundary(dev);
@@ -1412,28 +1411,29 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
         size_t s_length = s->length;
         size_t pad_len = (mask - iova_len + 1) & mask;

-        if (is_pci_p2pdma_page(sg_page(s))) {
-            map = pci_p2pdma_map_segment(&p2pdma_state, dev, s);
-            switch (map) {
-            case PCI_P2PDMA_MAP_BUS_ADDR:
-                /*
-                 * iommu_map_sg() will skip this segment as
-                 * it is marked as a bus address,
-                 * __finalise_sg() will copy the dma address
-                 * into the output segment.
-                 */
-                continue;
-            case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
-                /*
-                 * Mapping through host bridge should be
-                 * mapped with regular IOVAs, thus we
-                 * do nothing here and continue below.
-                 */
-                break;
-            default:
-                ret = -EREMOTEIO;
-                goto out_restore_sg;
-            }
+        switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(s))) {
+        case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+            /*
+             * Mapping through host bridge should be mapped with
+             * regular IOVAs, thus we do nothing here and continue
+             * below.
+             */
+        case PCI_P2PDMA_MAP_NONE:
+            break;
+        case PCI_P2PDMA_MAP_BUS_ADDR:
+            /*
+             * iommu_map_sg() will skip this segment as it is marked
+             * as a bus address, __finalise_sg() will copy the dma
+             * address into the output segment.
+             */
+            s->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state,
+                    sg_phys(s));
+            sg_dma_len(s) = sg->length;
+            sg_dma_mark_bus_address(s);
+            continue;
+        default:
+            ret = -EREMOTEIO;
+            goto out_restore_sg;
         }

         sg_dma_address(s) = s_iova_off;
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 4f47a13cb500..f38d16d71dd5 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -995,40 +995,12 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
     return type;
 }

-/**
- * pci_p2pdma_map_segment - map an sg segment determining the mapping type
- * @state: State structure that should be declared outside of the for_each_sg()
- *    loop and initialized to zero.
- * @dev: DMA device that's doing the mapping operation
- * @sg: scatterlist segment to map
- *
- * This is a helper to be used by non-IOMMU dma_map_sg() implementations where
- * the sg segment is the same for the page_link and the dma_address.
- *
- * Attempt to map a single segment in an SGL with the PCI bus address.
- * The segment must point to a PCI P2PDMA page and thus must be
- * wrapped in a is_pci_p2pdma_page(sg_page(sg)) check.
- *
- * Returns the type of mapping used and maps the page if the type is
- * PCI_P2PDMA_MAP_BUS_ADDR.
- */
-enum pci_p2pdma_map_type
-pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
-        struct scatterlist *sg)
+void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
+        struct device *dev, struct page *page)
 {
-    if (state->pgmap != sg_page(sg)->pgmap) {
-        state->pgmap = sg_page(sg)->pgmap;
-        state->map = pci_p2pdma_map_type(state->pgmap, dev);
-        state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
-    }
-
-    if (state->map == PCI_P2PDMA_MAP_BUS_ADDR) {
-        sg->dma_address = sg_phys(sg) + state->bus_off;
-        sg_dma_len(sg) = sg->length;
-        sg_dma_mark_bus_address(sg);
-    }
-
-    return state->map;
+    state->pgmap = page->pgmap;
+    state->map = pci_p2pdma_map_type(state->pgmap, dev);
+    state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
 }

 /**
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index b7773201414c..49edcbda19d1 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -443,6 +443,11 @@ enum pci_p2pdma_map_type {
      */
     PCI_P2PDMA_MAP_UNKNOWN = 0,

+    /*
+     * Not a PCI P2PDMA transfer.
+     */
+    PCI_P2PDMA_MAP_NONE,
+
     /*
      * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
      * traverse the host bridge and the host bridge is not in the
@@ -471,21 +476,47 @@ enum pci_p2pdma_map_type {

 struct pci_p2pdma_map_state {
     struct dev_pagemap *pgmap;
-    int map;
+    enum pci_p2pdma_map_type map;
     u64 bus_off;
 };

-#ifdef CONFIG_PCI_P2PDMA
-enum pci_p2pdma_map_type
-pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
-        struct scatterlist *sg);
-#else /* CONFIG_PCI_P2PDMA */
+/* helper for pci_p2pdma_state(), do not use directly */
+void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
+        struct device *dev, struct page *page);
+
+/**
+ * pci_p2pdma_state - check the P2P transfer state of a page
+ * @state: P2P state structure
+ * @dev: device to transfer to/from
+ * @page: page to map
+ *
+ * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the
+ * map type, and updates @state with all information needed for a P2P transfer.
+ */
 static inline enum pci_p2pdma_map_type
-pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
-        struct scatterlist *sg)
+pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev,
+        struct page *page)
+{
+    if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
+        if (state->pgmap != page->pgmap)
+            __pci_p2pdma_update_state(state, dev, page);
+        return state->map;
+    }
+    return PCI_P2PDMA_MAP_NONE;
+}
+
+/**
+ * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer
+ * @state: P2P state structure
+ * @paddr: physical address to map
+ *
+ * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer.
+ */
+static inline dma_addr_t
+pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr)
 {
-    return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+    WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR);
+    return paddr + state->bus_off;
 }
-#endif /* CONFIG_PCI_P2PDMA */

 #endif /* _LINUX_DMA_MAP_OPS_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 5b4e6d3bf7bc..a793400161c2 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -462,34 +462,32 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
         enum dma_data_direction dir, unsigned long attrs)
 {
     struct pci_p2pdma_map_state p2pdma_state = {};
-    enum pci_p2pdma_map_type map;
     struct scatterlist *sg;
     int i, ret;

     for_each_sg(sgl, sg, nents, i) {
-        if (is_pci_p2pdma_page(sg_page(sg))) {
-            map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
-            switch (map) {
-            case PCI_P2PDMA_MAP_BUS_ADDR:
-                continue;
-            case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
-                /*
-                 * Any P2P mapping that traverses the PCI
-                 * host bridge must be mapped with CPU physical
-                 * address and not PCI bus addresses. This is
-                 * done with dma_direct_map_page() below.
-                 */
-                break;
-            default:
-                ret = -EREMOTEIO;
+        switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(sg))) {
+        case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+            /*
+             * Any P2P mapping that traverses the PCI host bridge
+             * must be mapped with CPU physical address and not PCI
+             * bus addresses.
+             */
+        case PCI_P2PDMA_MAP_NONE:
+            sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
+                    sg->offset, sg->length, dir, attrs);
+            if (sg->dma_address == DMA_MAPPING_ERROR) {
+                ret = -EIO;
                 goto out_unmap;
             }
-        }
-
-        sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
-                sg->offset, sg->length, dir, attrs);
-        if (sg->dma_address == DMA_MAPPING_ERROR) {
-            ret = -EIO;
+            break;
+        case PCI_P2PDMA_MAP_BUS_ADDR:
+            sg->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state,
+                    sg_phys(sg));
+            sg_dma_mark_bus_address(sg);
+            continue;
+        default:
+            ret = -EREMOTEIO;
             goto out_unmap;
         }
         sg_dma_len(sg) = sg->length;

From patchwork Sun Oct 27 14:21:02 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852553
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
    Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
    Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
    Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 02/18] dma-mapping: move the PCI P2PDMA mapping helpers to
 pci-p2pdma.h
Date: Sun, 27 Oct 2024 16:21:02 +0200
Message-ID: <27698e7cc55f6ca5371c3d86c50fd3afce9afddd.1730037276.git.leon@kernel.org>
From: Christoph Hellwig

To support the upcoming non-scatterlist mapping helpers, we need to go
back to having them callable outside of the DMA API.  Thus move them out
of dma-map-ops.h, which is only for DMA API implementations, and into
pci-p2pdma.h, which is for driver use.

Note that the core helper is still not exported, as the mapping is
expected to be done only by very high-level subsystem code, at least for
now.

Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
Reviewed-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
---
 drivers/iommu/dma-iommu.c   |  1 +
 include/linux/dma-map-ops.h | 84 ------------------------------------
 include/linux/pci-p2pdma.h  | 84 +++++++++++++++++++++++++++++++++++++
 kernel/dma/direct.c         |  1 +
 4 files changed, 86 insertions(+), 84 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 6e50023c8112..c422e36c0d66 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 49edcbda19d1..6ee626e50708 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -435,88 +435,4 @@ static inline void debug_dma_dump_mappings(struct device *dev)

 extern const struct dma_map_ops dma_dummy_ops;

-enum pci_p2pdma_map_type {
-    /*
-     * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping
-     * type hasn't been calculated yet. Functions that return this enum
-     * never return this value.
-     */
-    PCI_P2PDMA_MAP_UNKNOWN = 0,
-
-    /*
-     * Not a PCI P2PDMA transfer.
-     */
-    PCI_P2PDMA_MAP_NONE,
-
-    /*
-     * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
-     * traverse the host bridge and the host bridge is not in the
-     * allowlist. DMA Mapping routines should return an error when
-     * this is returned.
-     */
-    PCI_P2PDMA_MAP_NOT_SUPPORTED,
-
-    /*
-     * PCI_P2PDMA_BUS_ADDR: Indicates that two devices can talk to
-     * each other directly through a PCI switch and the transaction will
-     * not traverse the host bridge. Such a mapping should program
-     * the DMA engine with PCI bus addresses.
-     */
-    PCI_P2PDMA_MAP_BUS_ADDR,
-
-    /*
-     * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
-     * to each other, but the transaction traverses a host bridge on the
-     * allowlist. In this case, a normal mapping either with CPU physical
-     * addresses (in the case of dma-direct) or IOVA addresses (in the
-     * case of IOMMUs) should be used to program the DMA engine.
-     */
-    PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
-};
-
-struct pci_p2pdma_map_state {
-    struct dev_pagemap *pgmap;
-    enum pci_p2pdma_map_type map;
-    u64 bus_off;
-};
-
-/* helper for pci_p2pdma_state(), do not use directly */
-void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
-    struct device *dev, struct page *page);
-
-/**
- * pci_p2pdma_state - check the P2P transfer state of a page
- * @state: P2P state structure
- * @dev: device to transfer to/from
- * @page: page to map
- *
- * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the
- * map type, and updates @state with all information needed for a P2P transfer.
- */
-static inline enum pci_p2pdma_map_type
-pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev,
-        struct page *page)
-{
-    if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
-        if (state->pgmap != page->pgmap)
-            __pci_p2pdma_update_state(state, dev, page);
-        return state->map;
-    }
-    return PCI_P2PDMA_MAP_NONE;
-}
-
-/**
- * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer
- * @state: P2P state structure
- * @paddr: physical address to map
- *
- * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer.
- */
-static inline dma_addr_t
-pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr)
-{
-    WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR);
-    return paddr + state->bus_off;
-}
-
 #endif /* _LINUX_DMA_MAP_OPS_H */
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 2c07aa6b7665..66b71f60a811 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -104,4 +104,88 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client)
     return pci_p2pmem_find_many(&client, 1);
 }

+enum pci_p2pdma_map_type {
+    /*
+     * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping
+     * type hasn't been calculated yet. Functions that return this enum
+     * never return this value.
+     */
+    PCI_P2PDMA_MAP_UNKNOWN = 0,
+
+    /*
+     * Not a PCI P2PDMA transfer.
+     */
+    PCI_P2PDMA_MAP_NONE,
+
+    /*
+     * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
+     * traverse the host bridge and the host bridge is not in the
+     * allowlist. DMA Mapping routines should return an error when
+     * this is returned.
+     */
+    PCI_P2PDMA_MAP_NOT_SUPPORTED,
+
+    /*
+     * PCI_P2PDMA_BUS_ADDR: Indicates that two devices can talk to
+     * each other directly through a PCI switch and the transaction will
+     * not traverse the host bridge. Such a mapping should program
+     * the DMA engine with PCI bus addresses.
+     */
+    PCI_P2PDMA_MAP_BUS_ADDR,
+
+    /*
+     * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
+     * to each other, but the transaction traverses a host bridge on the
+     * allowlist. In this case, a normal mapping either with CPU physical
+     * addresses (in the case of dma-direct) or IOVA addresses (in the
+     * case of IOMMUs) should be used to program the DMA engine.
+     */
+    PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
+};
+
+struct pci_p2pdma_map_state {
+    struct dev_pagemap *pgmap;
+    enum pci_p2pdma_map_type map;
+    u64 bus_off;
+};
+
+/* helper for pci_p2pdma_state(), do not use directly */
+void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
+    struct device *dev, struct page *page);
+
+/**
+ * pci_p2pdma_state - check the P2P transfer state of a page
+ * @state: P2P state structure
+ * @dev: device to transfer to/from
+ * @page: page to map
+ *
+ * Check if @page is a PCI P2PDMA page, and if yes of what kind. Returns the
+ * map type, and updates @state with all information needed for a P2P transfer.
+ */
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev,
+        struct page *page)
+{
+    if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
+        if (state->pgmap != page->pgmap)
+            __pci_p2pdma_update_state(state, dev, page);
+        return state->map;
+    }
+    return PCI_P2PDMA_MAP_NONE;
+}
+
+/**
+ * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer
+ * @state: P2P state structure
+ * @paddr: physical address to map
+ *
+ * Map a physically contigous PCI_P2PDMA_MAP_BUS_ADDR transfer.
+ */
+static inline dma_addr_t
+pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr)
+{
+    WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR);
+    return paddr + state->bus_off;
+}
+
 #endif /* _LINUX_PCI_P2P_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a793400161c2..47e124561fff 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include "direct.h"

 /*

From patchwork Sun Oct 27 14:21:03 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852556
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
    Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
    Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
    Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 03/18] iommu: generalize the batched sync after map interface
Date: Sun, 27 Oct 2024 16:21:03 +0200
Message-ID: <6bcf8efc0e817be5c19c263b6bc43994b411b0c5.1730037276.git.leon@kernel.org>
From: Christoph Hellwig

For the upcoming IOVA-based DMA API we want an interface to batch the
sync after mapping multiple entries from dma-iommu without having a
scatterlist.

For that, move more sanity checks from the callers into __iommu_map and
make that function available outside of iommu.c as iommu_map_nosync.

Add a wrapper for the map_sync as iommu_sync_map so that callers don't
need to poke into the methods directly.
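As a rough sketch of the intended calling pattern (domain, iova,
nr_ranges, phys[] and len[] are illustrative caller state here, and the
error unwinding is omitted), a batched user maps every range with
iommu_map_nosync() and issues a single sync at the end:

	size_t mapped = 0;
	int i, ret;

	for (i = 0; i < nr_ranges; i++) {
		ret = iommu_map_nosync(domain, iova + mapped, phys[i], len[i],
				IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
		if (ret)
			goto out_unmap;
		mapped += len[i];
	}

	/* one IOTLB sync for the whole batch instead of one per mapping */
	ret = iommu_sync_map(domain, iova, mapped);
	if (ret)
		goto out_unmap;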
Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
 include/linux/iommu.h |  4 +++
 2 files changed, 33 insertions(+), 36 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 83c8e617a2c5..6b0943397e1e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2439,8 +2439,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
     return pgsize;
 }

-static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
-    phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+    phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
     const struct iommu_domain_ops *ops = domain->ops;
     unsigned long orig_iova = iova;
@@ -2449,12 +2449,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
     phys_addr_t orig_paddr = paddr;
     int ret = 0;

+    might_sleep_if(gfpflags_allow_blocking(gfp));
+
     if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
         return -EINVAL;

     if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
         return -ENODEV;

+    /* Discourage passing strange GFP flags */
+    if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
+                __GFP_HIGHMEM)))
+        return -EINVAL;
+
     /* find out the minimum page size supported */
     min_pagesz = 1 << __ffs(domain->pgsize_bitmap);

@@ -2502,31 +2509,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
     return ret;
 }

-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-    phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
 {
     const struct iommu_domain_ops *ops = domain->ops;
-    int ret;
-
-    might_sleep_if(gfpflags_allow_blocking(gfp));

-    /* Discourage passing strange GFP flags */
-    if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-                __GFP_HIGHMEM)))
-        return -EINVAL;
+    if (!ops->iotlb_sync_map)
+        return 0;
+    return ops->iotlb_sync_map(domain, iova, size);
+}

-    ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
-    if (ret == 0 && ops->iotlb_sync_map) {
-        ret = ops->iotlb_sync_map(domain, iova, size);
-        if (ret)
-            goto out_err;
-    }
+int iommu_map(struct iommu_domain *domain, unsigned long iova,
+    phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+    int ret;

-    return ret;
+    ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp);
+    if (ret)
+        return ret;

-out_err:
-    /* undo mappings already done */
-    iommu_unmap(domain, iova, size);
+    ret = iommu_sync_map(domain, iova, size);
+    if (ret)
+        iommu_unmap(domain, iova, size);

     return ret;
 }
@@ -2612,26 +2615,17 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
         struct scatterlist *sg, unsigned int nents, int prot,
         gfp_t gfp)
 {
-    const struct iommu_domain_ops *ops = domain->ops;
     size_t len = 0, mapped = 0;
     phys_addr_t start;
     unsigned int i = 0;
     int ret;

-    might_sleep_if(gfpflags_allow_blocking(gfp));
-
-    /* Discourage passing strange GFP flags */
-    if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-                __GFP_HIGHMEM)))
-        return -EINVAL;
-
     while (i <= nents) {
         phys_addr_t s_phys = sg_phys(sg);

         if (len && s_phys != start + len) {
-            ret = __iommu_map(domain, iova + mapped, start,
+            ret = iommu_map_nosync(domain, iova + mapped, start,
                     len, prot, gfp);
-
             if (ret)
                 goto out_err;

@@ -2654,11 +2648,10 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
         sg = sg_next(sg);
     }

-    if (ops->iotlb_sync_map) {
-        ret = ops->iotlb_sync_map(domain, iova, mapped);
-        if (ret)
-            goto out_err;
-    }
+    ret = iommu_sync_map(domain, iova, mapped);
+    if (ret)
+        goto out_err;
+
     return mapped;

 out_err:
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index bd722f473635..8927e5f996c2 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -799,6 +799,10 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
              phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+    phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+    size_t size);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
               size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,

From patchwork Sun Oct 27 14:21:04 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852555
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
    Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
    Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
    Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
    iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 04/18] dma-mapping: Add check if IOVA can be used
Date: Sun, 27 Oct 2024 16:21:04 +0200
Message-ID: <6225a5bcc7fc584abfc6d0ed0473b7b2a24a2df2.1730037276.git.leon@kernel.org>
From: Leon Romanovsky

This patch adds a check whether IOVA can be used for a specific
transaction.

In the new API a DMA mapping transaction is identified by a
struct dma_iova_state, which holds some precomputed information for the
transaction that does not change for each page being mapped.

Signed-off-by: Leon Romanovsky
---
 include/linux/dma-mapping.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 1524da363734..6075e0708deb 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -76,6 +76,20 @@

 #define DMA_BIT_MASK(n)    (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))

+struct dma_iova_state {
+    size_t __size;
+};
+
+/*
+ * Use the high bit to mark if we used swiotlb for one or more ranges.
+ */
+#define DMA_IOVA_USE_SWIOTLB        (1ULL << 63)
+
+static inline size_t dma_iova_size(struct dma_iova_state *state)
+{
+    return state->__size & ~DMA_IOVA_USE_SWIOTLB;
+}
+
 #ifdef CONFIG_DMA_API_DEBUG
 void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
 void debug_dma_map_single(struct device *dev, const void *addr,
@@ -281,6 +295,25 @@ static inline int dma_mmap_noncontiguous(struct device *dev,
 }
 #endif /* CONFIG_HAS_DMA */

+#ifdef CONFIG_IOMMU_DMA
+/**
+ * dma_use_iova - check if the IOVA API is used for this state
+ * @state: IOVA state
+ *
+ * Return %true if the DMA transfers uses the dma_iova_*() calls or %false if
+ * they can't be used.
+ */
+static inline bool dma_use_iova(struct dma_iova_state *state)
+{
+    return state->__size != 0;
+}
+#else /* CONFIG_IOMMU_DMA */
+static inline bool dma_use_iova(struct dma_iova_state *state)
+{
+    return false;
+}
+#endif /* CONFIG_IOMMU_DMA */
+
 #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
 void __dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
         size_t size, enum dma_data_direction dir);
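As a rough sketch of how this check is meant to be used (do_iova_path()
and do_legacy_path() are hypothetical caller functions, and @state is
only filled in by the allocation helpers added later in this series):

	struct dma_iova_state state = {};

	if (dma_use_iova(&state)) {
		/* one contiguous IOVA covering dma_iova_size(&state) bytes */
		do_iova_path(dev, &state);
	} else {
		/* fall back to the dma_map_page()/dma_map_sg() style API */
		do_legacy_path(dev);
	}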
From patchwork Sun Oct 27 14:21:05 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852561
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
    Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
    Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
    Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
    iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 05/18] dma: Provide an interface to allow allocate IOVA
Date: Sun, 27 Oct 2024 16:21:05 +0200
Message-ID: <844f3dcf9c341b8178bfbc90909ef13d11dd2193.1730037276.git.leon@kernel.org>
From: Leon Romanovsky

The existing .map_page() callback provides both allocation of the IOVA
and linking of the DMA pages.  That combination works great for most of
the callers who use it in control paths, but it is less effective in
fast paths where there may be multiple calls to map_page().

These advanced callers already manage their data in some sort of
database and can perform IOVA allocation in advance, leaving the range
linkage operation in the fast path.

Provide an interface to allocate/deallocate the IOVA; the next patch
adds the interface to link/unlink DMA ranges to that specific IOVA.

The API is exported from dma-iommu as it is the only implementation
supported; the namespace is clearly different from the iommu_*
functions, which are not allowed to be used.  This code layout allows us
to save a function call per API call used in the datapath, as well as a
lot of boilerplate code.
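As a rough sketch of the allocate/free lifetime (dev, phys, total_len
and use_regular_dma_api() are illustrative caller state and not part of
the patch; the link/unlink step only arrives in the following patches):

	struct dma_iova_state state;

	if (!dma_iova_try_alloc(dev, &state, phys, total_len)) {
		/* no usable IOMMU for this device: stay on the regular DMA API */
		return use_regular_dma_api(dev);
	}

	/*
	 * Fast path: one IOVA range is reserved up front; the individual
	 * ranges are linked into it from the datapath via the helpers
	 * added later in this series.
	 */

	/* ... and once every linked range has been unlinked again: */
	dma_iova_free(dev, &state);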
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c   | 79 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-mapping.h | 15 +++++++
 2 files changed, 94 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c422e36c0d66..0644152c5aad 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1745,6 +1745,85 @@ size_t iommu_dma_max_mapping_size(struct device *dev)
     return SIZE_MAX;
 }

+static bool iommu_dma_iova_alloc(struct device *dev,
+        struct dma_iova_state *state, phys_addr_t phys, size_t size)
+{
+    struct iommu_domain *domain = iommu_get_dma_domain(dev);
+    struct iommu_dma_cookie *cookie = domain->iova_cookie;
+    struct iova_domain *iovad = &cookie->iovad;
+    size_t iova_off = iova_offset(iovad, phys);
+    dma_addr_t addr;
+
+    if (WARN_ON_ONCE(!size))
+        return false;
+    if (WARN_ON_ONCE(size & DMA_IOVA_USE_SWIOTLB))
+        return false;
+
+    addr = iommu_dma_alloc_iova(domain,
+            iova_align(iovad, size + iova_off),
+            dma_get_mask(dev), dev);
+    if (!addr)
+        return false;
+
+    state->addr = addr + iova_off;
+    state->__size = size;
+    return true;
+}
+
+/**
+ * dma_iova_try_alloc - Try to allocate an IOVA space
+ * @dev: Device to allocate the IOVA space for
+ * @state: IOVA state
+ * @phys: physical address
+ * @size: IOVA size
+ *
+ * Check if @dev supports the IOVA-based DMA API, and if yes allocate IOVA space
+ * for the given base address and size.
+ *
+ * Note: @phys is only used to calculate the IOVA alignment. Callers that always
+ * do PAGE_SIZE aligned transfers can safely pass 0 here.
+ *
+ * Returns %true if the IOVA-based DMA API can be used and IOVA space has been
+ * allocated, or %false if the regular DMA API should be used.
+ */
+bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+        phys_addr_t phys, size_t size)
+{
+    memset(state, 0, sizeof(*state));
+    if (!use_dma_iommu(dev))
+        return false;
+    if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+        iommu_deferred_attach(dev, iommu_get_domain_for_dev(dev)))
+        return false;
+    return iommu_dma_iova_alloc(dev, state, phys, size);
+}
+EXPORT_SYMBOL_GPL(dma_iova_try_alloc);
+
+/**
+ * dma_iova_free - Free an IOVA space
+ * @dev: Device to free the IOVA space for
+ * @state: IOVA state
+ *
+ * Undoes a successful dma_try_iova_alloc().
+ *
+ * Note that all dma_iova_link() calls need to be undone first. For callers
+ * that never call dma_iova_unlink(), dma_iova_destroy() can be used instead
+ * which unlinks all ranges and frees the IOVA space in a single efficient
+ * operation.
+ */
+void dma_iova_free(struct device *dev, struct dma_iova_state *state)
+{
+    struct iommu_domain *domain = iommu_get_dma_domain(dev);
+    struct iommu_dma_cookie *cookie = domain->iova_cookie;
+    struct iova_domain *iovad = &cookie->iovad;
+    size_t iova_start_pad = iova_offset(iovad, state->addr);
+    size_t size = dma_iova_size(state);
+
+    iommu_dma_free_iova(cookie, state->addr - iova_start_pad,
+            iova_align(iovad, size + iova_start_pad), NULL);
+}
+EXPORT_SYMBOL_GPL(dma_iova_free);
+
 void iommu_setup_dma_ops(struct device *dev)
 {
     struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 6075e0708deb..817f11bce7bc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 /**
  * List of possible attributes associated with a DMA mapping. The semantics
@@ -77,6 +78,7 @@
~0ULL : ((1ULL<<(n))-1)) struct dma_iova_state { + dma_addr_t addr; size_t __size; }; @@ -307,11 +309,24 @@ static inline bool dma_use_iova(struct dma_iova_state *state) { return state->__size != 0; } + +bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t size); +void dma_iova_free(struct device *dev, struct dma_iova_state *state); #else /* CONFIG_IOMMU_DMA */ static inline bool dma_use_iova(struct dma_iova_state *state) { return false; } +static inline bool dma_iova_try_alloc(struct device *dev, + struct dma_iova_state *state, phys_addr_t phys, size_t size) +{ + return false; +} +static inline void dma_iova_free(struct device *dev, + struct dma_iova_state *state) +{ +} #endif /* CONFIG_IOMMU_DMA */ #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
From patchwork Sun Oct 27 14:21:06 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852557
Subject: [PATCH 06/18] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Date: Sun, 27 Oct 2024 16:21:06 +0200
Message-ID: <7bf6002620085411ed65ca5ff9390189306dd0b5.1730037276.git.leon@kernel.org>
From: Christoph Hellwig

Split the iommu
logic from iommu_dma_map_page into a separate helper. This not only keeps the code neatly separated, but will also allow for reuse in another caller. Signed-off-by: Christoph Hellwig Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++----------------- 1 file changed, 41 insertions(+), 32 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 0644152c5aad..b859f93b1c17 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1161,6 +1161,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, arch_sync_dma_for_device(sg_phys(sg), sg->length, dir); } +static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iova_domain *iovad = &domain->iova_cookie->iovad; + + if (!is_swiotlb_active(dev)) { + dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n"); + return DMA_MAPPING_ERROR; + } + + trace_swiotlb_bounced(dev, phys, size); + + phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir, + attrs); + + /* + * Untrusted devices should not see padding areas with random leftover + * kernel data, so zero the pre- and post-padding. + * swiotlb_tbl_map_single() has initialized the bounce buffer proper to + * the contents of the original memory buffer. + */ + if (phys != DMA_MAPPING_ERROR && dev_is_untrusted(dev)) { + size_t start, virt = (size_t)phys_to_virt(phys); + + /* Pre-padding */ + start = iova_align_down(iovad, virt); + memset((void *)start, 0, virt - start); + + /* Post-padding */ + start = virt + size; + memset((void *)start, 0, iova_align(iovad, start) - start); + } + + return phys; +} + dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, unsigned long offset, size_t size, enum dma_data_direction dir, unsigned long attrs) @@ -1174,42 +1211,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, dma_addr_t iova, dma_mask = dma_get_mask(dev); /* - * If both the physical buffer start address and size are - * page aligned, we don't need to use a bounce page. + * If both the physical buffer start address and size are page aligned, + * we don't need to use a bounce page. */ if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size)) { - if (!is_swiotlb_active(dev)) { - dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n"); - return DMA_MAPPING_ERROR; - } - - trace_swiotlb_bounced(dev, phys, size); - - phys = swiotlb_tbl_map_single(dev, phys, size, - iova_mask(iovad), dir, attrs); - + phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs); if (phys == DMA_MAPPING_ERROR) - return DMA_MAPPING_ERROR; - - /* - * Untrusted devices should not see padding areas with random - * leftover kernel data, so zero the pre- and post-padding. - * swiotlb_tbl_map_single() has initialized the bounce buffer - * proper to the contents of the original memory buffer. 
- */ - if (dev_is_untrusted(dev)) { - size_t start, virt = (size_t)phys_to_virt(phys); - - /* Pre-padding */ - start = iova_align_down(iovad, virt); - memset((void *)start, 0, virt - start); - - /* Post-padding */ - start = virt + size; - memset((void *)start, 0, - iova_align(iovad, start) - start); - } + return phys; } if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
From patchwork Sun Oct 27 14:21:07 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852558
Subject: [PATCH 07/18] dma-mapping: Implement link/unlink ranges API
Date: Sun, 27 Oct 2024 16:21:07 +0200
From: Leon Romanovsky

Introduce new DMA APIs to perform DMA linkage of buffers in layers above the DMA layer. With the proposed API, the callers perform the following steps.

In the map path:

    if (dma_iova_try_alloc(...))
        for (page in range)
            dma_iova_link(...)
        dma_iova_sync(...)
    else
        /* Fallback to legacy page mapping */
        for (all pages)
            dma_map_page(...)

In the unmap path:

    if (dma_use_iova(...))
        dma_iova_destroy()
    else
        for (all pages)
            dma_unmap_page(...)
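Spelled out as C, the pattern above might look roughly like the sketch below. It only uses the functions defined later in this patch (and dma_iova_try_alloc from the earlier one); dev, dir, pages[], dma_addrs[] and nr_pages stand in for the caller's own state, and error unwinding is trimmed:

	struct dma_iova_state state;
	size_t i, mapped = 0;
	int ret = 0;

	/* map path */
	if (dma_iova_try_alloc(dev, &state, page_to_phys(pages[0]),
			       nr_pages * PAGE_SIZE)) {
		for (i = 0; i < nr_pages; i++) {
			ret = dma_iova_link(dev, &state, page_to_phys(pages[i]),
					    i * PAGE_SIZE, PAGE_SIZE, dir, 0);
			if (ret)
				break;
			mapped += PAGE_SIZE;
		}
		/* a single IOTLB sync covers everything that was linked */
		ret = dma_iova_sync(dev, &state, 0, mapped, ret);
	} else {
		/* fallback to the legacy per-page mapping */
		for (i = 0; i < nr_pages; i++)
			dma_addrs[i] = dma_map_page(dev, pages[i], 0,
						    PAGE_SIZE, dir);
	}

	/* unmap path */
	if (dma_use_iova(&state))
		dma_iova_destroy(dev, &state, dir, 0);
	else
		for (i = 0; i < nr_pages; i++)
			dma_unmap_page(dev, dma_addrs[i], PAGE_SIZE, dir);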
Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 256 ++++++++++++++++++++++++++++++++++++ include/linux/dma-map-ops.h | 1 - include/linux/dma-mapping.h | 31 +++++ 3 files changed, 287 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index b859f93b1c17..f853762c2d54 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1833,6 +1833,262 @@ void dma_iova_free(struct device *dev, struct dma_iova_state *state) } EXPORT_SYMBOL_GPL(dma_iova_free); +static int __dma_iova_link(struct device *dev, dma_addr_t addr, + phys_addr_t phys, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + bool coherent = dev_is_dma_coherent(dev); + + if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(phys, size, dir); + + return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size, + dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC); +} + +static int iommu_dma_iova_bounce_and_link(struct device *dev, dma_addr_t addr, + phys_addr_t phys, size_t bounce_len, + enum dma_data_direction dir, unsigned long attrs, + size_t iova_start_pad) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iova_domain *iovad = &domain->iova_cookie->iovad; + phys_addr_t bounce_phys; + int error; + + bounce_phys = iommu_dma_map_swiotlb(dev, phys, bounce_len, dir, attrs); + if (bounce_phys == DMA_MAPPING_ERROR) + return -ENOMEM; + + error = __dma_iova_link(dev, addr - iova_start_pad, + bounce_phys - iova_start_pad, + iova_align(iovad, bounce_len), dir, attrs); + if (error) + swiotlb_tbl_unmap_single(dev, bounce_phys, bounce_len, dir, + attrs); + return error; +} + +static int iommu_dma_iova_link_swiotlb(struct device *dev, + struct dma_iova_state *state, phys_addr_t phys, size_t offset, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, phys); + size_t iova_end_pad = iova_offset(iovad, phys + size); + dma_addr_t addr = state->addr + offset; + size_t mapped = 0; + int error; + + if (iova_start_pad) { + size_t bounce_len = min(size, iovad->granule - iova_start_pad); + + error = iommu_dma_iova_bounce_and_link(dev, addr, phys, + bounce_len, dir, attrs, iova_start_pad); + if (error) + return error; + state->__size |= DMA_IOVA_USE_SWIOTLB; + + mapped += bounce_len; + size -= bounce_len; + if (!size) + return 0; + } + + size -= iova_end_pad; + error = __dma_iova_link(dev, addr + mapped, phys + mapped, size, dir, + attrs); + if (error) + goto out_unmap; + mapped += size; + + if (iova_end_pad) { + error = iommu_dma_iova_bounce_and_link(dev, addr + mapped, + phys + mapped, iova_end_pad, dir, attrs, 0); + if (error) + goto out_unmap; + state->__size |= DMA_IOVA_USE_SWIOTLB; + } + + return 0; + +out_unmap: + dma_iova_unlink(dev, state, 0, mapped, dir, attrs); + return error; +} + +/** + * dma_iova_link - Link a range of IOVA space + * @dev: DMA device + * @state: IOVA state + * @phys: physical address to link + * @offset: offset into the IOVA state to map into + * @size: size of the buffer + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Link a range of IOVA space for the given IOVA state without IOTLB sync. + * This function is used to link multiple physical addresses in contigueous + * IOVA space without performing costly IOTLB sync. 
+ * + * The caller is responsible to call to dma_iova_sync() to sync IOTLB at + * the end of linkage. + */ +int dma_iova_link(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, phys); + + if (WARN_ON_ONCE(iova_start_pad && offset > 0)) + return -EIO; + + if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size)) + return iommu_dma_iova_link_swiotlb(dev, state, phys, offset, + size, dir, attrs); + + return __dma_iova_link(dev, state->addr + offset - iova_start_pad, + phys - iova_start_pad, + iova_align(iovad, size + iova_start_pad), dir, attrs); +} +EXPORT_SYMBOL_GPL(dma_iova_link); + +/** + * dma_iova_sync - Sync IOTLB + * @dev: DMA device + * @state: IOVA state + * @offset: offset into the IOVA state to sync + * @size: size of the buffer + * @ret: return value from the last IOVA operation + * + * Sync IOTLB for the given IOVA state. This function should be called on + * the IOVA-contigous range created by one ore more dma_iova_link() calls + * to sync the IOTLB. + */ +int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, int ret) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t addr = state->addr + offset; + size_t iova_start_pad = iova_offset(iovad, addr); + + addr -= iova_start_pad; + size = iova_align(iovad, size + iova_start_pad); + + if (!ret) + ret = iommu_sync_map(domain, addr, size); + if (ret) + iommu_unmap(domain, addr, size); + return ret; +} +EXPORT_SYMBOL_GPL(dma_iova_sync); + +static void iommu_dma_iova_unlink_range_slow(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_start_pad = iova_offset(iovad, addr); + dma_addr_t end = addr + size; + + do { + phys_addr_t phys; + size_t len; + + phys = iommu_iova_to_phys(domain, addr); + if (WARN_ON(!phys)) + continue; + len = min_t(size_t, + end - addr, iovad->granule - iova_start_pad); + + if (!dev_is_dma_coherent(dev) && + !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_cpu(phys, len, dir); + + swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs); + + addr += len; + iova_start_pad = 0; + } while (addr < end); +} + +static void __iommu_dma_iova_unlink(struct device *dev, + struct dma_iova_state *state, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs, + bool free_iova) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t addr = state->addr + offset; + size_t iova_start_pad = iova_offset(iovad, addr); + struct iommu_iotlb_gather iotlb_gather; + size_t unmapped; + + if ((state->__size & DMA_IOVA_USE_SWIOTLB) || + (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))) + iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs); + + iommu_iotlb_gather_init(&iotlb_gather); + iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain); + + size = 
iova_align(iovad, size + iova_start_pad); + addr -= iova_start_pad; + unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather); + WARN_ON(unmapped != size); + + if (!iotlb_gather.queued) + iommu_iotlb_sync(domain, &iotlb_gather); + if (free_iova) + iommu_dma_free_iova(cookie, addr, size, &iotlb_gather); +} + +/** + * dma_iova_unlink - Unlink a range of IOVA space + * @dev: DMA device + * @state: IOVA state + * @offset: offset into the IOVA state to unlink + * @size: size of the buffer + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Unlink a range of IOVA space for the given IOVA state. + */ +void dma_iova_unlink(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + __iommu_dma_iova_unlink(dev, state, offset, size, dir, attrs, false); +} +EXPORT_SYMBOL_GPL(dma_iova_unlink); + +/** + * dma_iova_destroy - Finish a DMA mapping transaction + * @dev: DMA device + * @state: IOVA state + * @dir: DMA direction + * @attrs: attributes of mapping properties + * + * Unlink whole IOVA range and free an IOVA space. The range of IOVA from + * dma_addr to size must all be linked, and be the only linked IOVA in state + */ +void dma_iova_destroy(struct device *dev, struct dma_iova_state *state, + enum dma_data_direction dir, unsigned long attrs) +{ + __iommu_dma_iova_unlink(dev, state, 0, dma_iova_size(state), dir, attrs, + true); +} +EXPORT_SYMBOL_GPL(dma_iova_destroy); + void iommu_setup_dma_ops(struct device *dev) { struct iommu_domain *domain = iommu_get_domain_for_dev(dev); diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 6ee626e50708..dced37816ede 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -434,5 +434,4 @@ static inline void debug_dma_dump_mappings(struct device *dev) #endif /* CONFIG_DMA_API_DEBUG */ extern const struct dma_map_ops dma_dummy_ops; - #endif /* _LINUX_DMA_MAP_OPS_H */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 817f11bce7bc..50f0edfe7350 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -313,6 +313,16 @@ static inline bool dma_use_iova(struct dma_iova_state *state) bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state, phys_addr_t phys, size_t size); void dma_iova_free(struct device *dev, struct dma_iova_state *state); +void dma_iova_destroy(struct device *dev, struct dma_iova_state *state, + enum dma_data_direction dir, unsigned long attrs); +int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, int ret); +int dma_iova_link(struct device *dev, struct dma_iova_state *state, + phys_addr_t phys, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs); +void dma_iova_unlink(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs); #else /* CONFIG_IOMMU_DMA */ static inline bool dma_use_iova(struct dma_iova_state *state) { @@ -327,6 +337,27 @@ static inline void dma_iova_free(struct device *dev, struct dma_iova_state *state) { } +static inline void dma_iova_destroy(struct device *dev, + struct dma_iova_state *state, enum dma_data_direction dir, + unsigned long attrs) +{ +} +static inline int dma_iova_sync(struct device *dev, struct dma_iova_state *state, + size_t offset, size_t size, int ret) +{ + return -EOPNOTSUPP; +} +static inline int dma_iova_link(struct device *dev, + struct dma_iova_state 
*state, phys_addr_t phys, size_t offset, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + return -EOPNOTSUPP; +} +static inline void dma_iova_unlink(struct device *dev, + struct dma_iova_state *state, size_t offset, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ +} #endif /* CONFIG_IOMMU_DMA */ #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
From patchwork Sun Oct 27 14:21:08 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852559
Subject: [PATCH 08/18] dma-mapping: add a dma_need_unmap helper
Date: Sun, 27 Oct 2024 16:21:08 +0200
Message-ID: <2916b8526078cadd5107588610aed1c2db6d4d70.1730037276.git.leon@kernel.org>
From: Christoph Hellwig

Add a helper that allows a driver to skip calling dma_unmap_* if the DMA layer can guarantee that the unmaps are no-ops.
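A hedged illustration of the intended use; the request structure and its fields are invented for this sketch and are not part of the patch. The driver queries dma_need_unmap() once the mappings for an I/O have been performed and only keeps the DMA addresses around when an unmap will actually be needed:

	/* after all dma_map_*() calls for this I/O have been performed */
	req->need_unmap = dma_need_unmap(dev);
	if (req->need_unmap)
		req->dma_addr = addr;		/* keep it for the unmap */

	/* completion path */
	if (req->need_unmap)
		dma_unmap_page(dev, req->dma_addr, req->len, dir);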
Signed-off-by: Christoph Hellwig Signed-off-by: Leon Romanovsky --- include/linux/dma-mapping.h | 5 +++++ kernel/dma/mapping.c | 20 ++++++++++++++++++++ 2 files changed, 25 insertions(+) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 50f0edfe7350..c3edd6c3e1ab 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -409,6 +409,7 @@ static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) { return dma_dev_need_sync(dev) ? __dma_need_sync(dev, dma_addr) : false; } +bool dma_need_unmap(struct device *dev); #else /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */ static inline bool dma_dev_need_sync(const struct device *dev) { @@ -434,6 +435,10 @@ static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) { return false; } +static inline bool dma_need_unmap(struct device *dev) +{ + return false; +} #endif /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */ struct page *dma_alloc_pages(struct device *dev, size_t size, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 864a1121bf08..daa97a650778 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -442,6 +442,26 @@ bool __dma_need_sync(struct device *dev, dma_addr_t dma_addr) } EXPORT_SYMBOL_GPL(__dma_need_sync); +/** + * dma_need_unmap - does this device need dma_unmap_* operations + * @dev: device to check + * + * If this function returns %false, drivers can skip calling dma_unmap_* after + * finishing an I/O. This function must be called after all mappings that might + * need to be unmapped have been performed. + */ +bool dma_need_unmap(struct device *dev) +{ + if (!dma_map_direct(dev, get_dma_ops(dev))) + return true; +#ifdef CONFIG_DMA_NEED_SYNC + if (!dev->dma_skip_sync) + return true; +#endif + return IS_ENABLED(CONFIG_DMA_API_DEBUG); +} +EXPORT_SYMBOL_GPL(dma_need_unmap); + static void dma_setup_need_sync(struct device *dev) { const struct dma_map_ops *ops = get_dma_ops(dev); From patchwork Sun Oct 27 14:21:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4FADDD13562 for ; Sun, 27 Oct 2024 14:22:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C03A06B009F; Sun, 27 Oct 2024 10:22:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BB2C96B00A0; Sun, 27 Oct 2024 10:22:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A06546B00A1; Sun, 27 Oct 2024 10:22:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 7FE906B009F for ; Sun, 27 Oct 2024 10:22:07 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 5D20181C06 for ; Sun, 27 Oct 2024 14:21:48 +0000 (UTC) X-FDA: 82719596148.15.758546A Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf09.hostedemail.com (Postfix) with ESMTP id 07E5E140014 for ; Sun, 27 Oct 2024 14:21:49 +0000 (UTC) Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=K8XddZUd; dmarc=pass 
Subject: [PATCH 09/18] docs: core-api: document the IOVA-based API
Date: Sun, 27 Oct 2024 16:21:09 +0200
From: Leon Romanovsky
From: Christoph Hellwig

Add an explanation of the newly added IOVA-based mapping API.

Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
--- Documentation/core-api/dma-api.rst | 70 ++++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+)
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst index 8e3cce3d0a23..9ecbff473d9a 100644 --- a/Documentation/core-api/dma-api.rst +++ b/Documentation/core-api/dma-api.rst
@@ -530,6 +530,76 @@ routines, e.g.::: .... }
+Part Ie - IOVA-based DMA mappings
+---------------------------------
+
+These APIs allow a very efficient mapping when using an IOMMU. They are an
+optional path that requires extra code and are only recommended for drivers
+where DMA mapping performance, or the space used to store the DMA addresses,
+matters. All the considerations from the previous section apply here as well.
+
+::
+
+ bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+		phys_addr_t phys, size_t size);
+
+Is used to try to allocate IOVA space for a mapping operation. If it returns
+false, this API can't be used for the given device and the normal streaming
+DMA mapping API should be used. The ``struct dma_iova_state`` is allocated
+by the driver and must be kept around until unmap time.
+
+::
+
+ static inline bool dma_use_iova(struct dma_iova_state *state)
+
+Can be used by the driver to check if the IOVA-based API is used after a
+call to dma_iova_try_alloc. This can be useful in the unmap path.
+
+::
+
+ int dma_iova_link(struct device *dev, struct dma_iova_state *state,
+		phys_addr_t phys, size_t offset, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+
+Is used to link ranges to the IOVA previously allocated. The start of all
+but the first call to dma_iova_link for a given state must be aligned
+to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
+the size of all but the last range must be aligned to the DMA merge boundary
+as well.
+
+::
+
+ int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
+		size_t offset, size_t size, int ret);
+
+Must be called to sync the IOMMU page tables for the IOVA range mapped by one
+or more calls to ``dma_iova_link()``.
+
+For drivers that use a one-shot mapping, all ranges can be unmapped and the
+IOVA freed by calling:
+
+::
+
+ void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
+		enum dma_data_direction dir, unsigned long attrs);
+
+Alternatively, drivers can dynamically manage the IOVA space by unmapping
+and mapping individual regions. In that case
+
+::
+
+ void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
+		size_t offset, size_t size, enum dma_data_direction dir,
+		unsigned long attrs);
+
+is used to unmap a range previously mapped, and
+
+::
+
+ void dma_iova_free(struct device *dev, struct dma_iova_state *state);
+
+is used to free the IOVA space. All regions must have been unmapped using
+``dma_iova_unlink()`` before calling ``dma_iova_free()``.
Part II - Non-coherent DMA allocations --------------------------------------
From patchwork Sun Oct 27 14:21:10 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852566
b=fi+YtqtfnzkK+ygNGx4MlA/CxJv7mskJap/bD5AJVnSSdBvBHrg1jbm9nquij7Jh2TM1JK DORjsnnaxZPwWWtjRuUai3ay6jYdcA5iaVR7VOnIk9sdnmwLbmsLYHAnWEd7OCeS4HOttJ Ac0bLftQ5D/2K66VScxFcgiw3cGqzLE= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=iCkLbrpn; spf=pass (imf03.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=quarantine) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id C30C0A40F21; Sun, 27 Oct 2024 14:20:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CBBC4C4CEC3; Sun, 27 Oct 2024 14:22:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1730038948; bh=LyGbooKE736rWckmyvuR6Y2aUxxUKiIhQ25bI43LeHo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iCkLbrpnlImhB9B2714xCMDVyAnzWG7MsfR2tObG6bSGDFDz4Bf4CI0mqKt+rhuYq oH9+qokY2xOMGvGOGy85p6vNn6z9dqDdwZ9MAzJ/Y7n8iQv0e0nn5FsdeSoMXGJcFj B+Zi8xW0UVh0uK/OidgiacDE2NCdMWAyryo2eQ61nfECpDk59aRsqWtCbWH4zdhzfH EE5mWIwZYs2U2XkGvGVAKEq7GkstM4Q+7cbJ824WG2VF+/8pl7vlytQS/XVuNbujES urNdKuuovQ1egFVlgv0jTDSmxVbzWG5blHyNDBWJJkFVst1jo9u8sMvdVfv9ThBNkJ kD4ta2u0CCROg== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 10/18] mm/hmm: let users to tag specific PFN with DMA mapped bit Date: Sun, 27 Oct 2024 16:21:10 +0200 Message-ID: <6c79710ccc5d9fec36172fea13498e30132a0600.1730037276.git.leon@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 4D1662000C X-Stat-Signature: hcig9qkhhmqegwz3hgj5quc3mst6xhfg X-HE-Tag: 1730038939-166147 X-HE-Meta: U2FsdGVkX1/DW30gf7sBbn/CTvAQ0BLyEjRzJXc4HS9PtqTbBeml2dpiUoJu9cDM+zDaJcTUJUytrKgTMYX0xntxNTsViC9mgfOvA39ilMA2aAiWvY+AO1dQlhKhLIgCMFttBQH/oid5EEv57VZGOR7joz6XNVyPPXvIgOQMI46Ks7Tn6WfUdPjDoKabVBy9QZ055lYg59BddjviepYzM5i6E31No+KxQotf0qnCYRvCcrevYklb8OajA+W/S6qHvOM3vtz5oEPk8apSRSONFy95qa/gDvEHmgqupKaHN71vYWTWb6/I+7yp+a2OPz7InHo8njoafYWI70L5+a8jBUp9Nfs1N2oCr4/AWw9x2eUZY0Ra45h8eNtagO4vM0vp6P8RGxhbj6cyxAkofDms2rkM2eMtrdG3F8AULk/gYBhGkayNsHmJC1bRX79TxxALzRtcKDDxKHnl73OS6p442Jc9BtImlQjt8rsRcqqxxnhnyzKJz7Q6NUTSLETNFJLUYpFEKHySg2R4kGZmv7QbWoBTipvSR3HDKK1Dan+XLaLYt7g/KAoVyp5XugjbTD+zNY5g8w6c9FN3p4QYnLXJvdbVafBRLwrB1j9P2iW8LuWUhBWwHr4p6SSf6xLJMfydcV+9nP94Nk9MDJ1RPx6QRzZBJsZQrVNc4lj2K2RcmX4QHe+E0/Bm49HuHbCpBVoXt0x7mUVRAu66XHOnpRoqwd7KH6hNDOYneBgGRQ44VKQ4XckVslsbMJM6LzeK4VxWKPgFLyVAqayewyHghdECyP2q5G0qkbi28mObXBJ0OAv8aXOizvUReVotB/BNrl2mf2QkKYwSYnsRFkBPzLtLnMSvjZLsJjiTfjNbyWXAZrYHUpnlrrAqczHuzjflJzhWpe1hazi/HKKLMvciphrwLj7P01+v0VbS2c7Lyb7IUVevtITF0q5l7Mh5lYROw1VxSHryK3lnbn/7jBwJu9I 3G21gytE 
cOr3rQwr+9TpafrIGvMJOzAIkMGW+TB7ZNvMxE+scDOGhutyRECK2bwZ/i24lWrRjfac2sr8sXhNXaOLseDVElkOh4WFx30FcLYNPq2N91TnQNgi8Wx1bWGShf0ren2ukFBiH81IwnUn6xze8IcyKtd/jUjcmNPE5OJJntylPmkHiaXTvZRjjJ+GbDy+IPIRTbKTU2HCV2ts+vHOpN21fSBtBe6a/OZ2ncopq3BO8Fnlh9p8= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Introduce new sticky flag (HMM_PFN_DMA_MAPPED), which isn't overwritten by HMM range fault. Such flag allows users to tag specific PFNs with information if this specific PFN was already DMA mapped. Signed-off-by: Leon Romanovsky --- include/linux/hmm.h | 14 ++++++++++++++ mm/hmm.c | 34 +++++++++++++++++++++------------- 2 files changed, 35 insertions(+), 13 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 126a36571667..5dd655f6766b 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,8 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation + * to mark that page is already DMA mapped * * On input: * 0 - Return the current state of the page, do not fault it. @@ -36,6 +38,10 @@ enum hmm_pfn_flags { HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1), HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2), HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), + + /* Sticky flag, carried from Input to Output */ + HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7), + HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), /* Input flags */ @@ -57,6 +63,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn) return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS); } +/* + * hmm_pfn_to_phys() - return physical address pointed to by a device entry + */ +static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn) +{ + return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS); +} + /* * hmm_pfn_to_map_order() - return the CPU mapping size order * diff --git a/mm/hmm.c b/mm/hmm.c index 7e0229ae4a5a..2a0c34d7cb2b 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -44,8 +44,10 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end, { unsigned long i = (addr - range->start) >> PAGE_SHIFT; - for (; addr < end; addr += PAGE_SIZE, i++) - range->hmm_pfns[i] = cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++) { + range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + range->hmm_pfns[i] |= cpu_flags; + } return 0; } @@ -202,8 +204,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, return hmm_vma_fault(addr, end, required_fault, walk); pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { + hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + hmm_pfns[i] |= pfn | cpu_flags; + } return 0; } #else /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -236,7 +240,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (required_fault) goto fault; - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED; return 0; } @@ -253,14 +257,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, cpu_flags = HMM_PFN_VALID; if (is_writable_device_private_entry(entry)) cpu_flags |= 
HMM_PFN_WRITE; - *hmm_pfn = swp_offset_pfn(entry) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | swp_offset_pfn(entry) | cpu_flags; return 0; } required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (!required_fault) { - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED; return 0; } @@ -304,11 +308,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, pte_unmap(ptep); return -EFAULT; } - *hmm_pfn = HMM_PFN_ERROR; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | HMM_PFN_ERROR; return 0; } - *hmm_pfn = pte_pfn(pte) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | pte_pfn(pte) | cpu_flags; return 0; fault: @@ -448,8 +452,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, } pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - for (i = 0; i < npages; ++i, ++pfn) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; i < npages; ++i, ++pfn) { + hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + hmm_pfns[i] |= pfn | cpu_flags; + } goto out_unlock; } @@ -507,8 +513,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask, } pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT); - for (; addr < end; addr += PAGE_SIZE, i++, pfn++) - range->hmm_pfns[i] = pfn | cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++, pfn++) { + range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + range->hmm_pfns[i] |= pfn | cpu_flags; + } spin_unlock(ptl); return 0; From patchwork Sun Oct 27 14:21:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AAB98D13563 for ; Sun, 27 Oct 2024 14:22:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 302886B00A3; Sun, 27 Oct 2024 10:22:16 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 28B5D6B00A4; Sun, 27 Oct 2024 10:22:16 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 12DF16B00A5; Sun, 27 Oct 2024 10:22:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id E480D6B00A3 for ; Sun, 27 Oct 2024 10:22:15 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 17202C0162 for ; Sun, 27 Oct 2024 14:21:52 +0000 (UTC) X-FDA: 82719596400.22.8BC1BA9 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf11.hostedemail.com (Postfix) with ESMTP id 9B09040012 for ; Sun, 27 Oct 2024 14:21:46 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=b7l02Pni; spf=pass (imf11.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=quarantine) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1730038855; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: 
From: Leon Romanovsky
Subject: [PATCH 11/18] mm/hmm: provide generic DMA managing logic
Date: Sun, 27 Oct 2024 16:21:11 +0200
Message-ID: <505c3956e0101f3e4f180e67319ff33c789f83b5.1730037276.git.leon@kernel.org>
From: Leon Romanovsky

HMM callers use a PFN list to populate a range when calling hmm_range_fault(); the conversion from PFN to DMA address is then done by the callers with the help of a separate DMA list. This is wasteful on any modern platform, and with the right logic that DMA list can be avoided.

Provide generic logic to manage these lists and an interface to map/unmap PFNs to DMA addresses, without requiring the callers to be experts in the DMA core API.
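To make the intended call flow concrete, here is a minimal driver-side sketch (illustrative only, not part of the patch) of how the new hmm_dma_* helpers are meant to be used around hmm_range_fault(). The function name example_map_range() is made up for illustration; the mmap locking, the mmu-notifier retry loop around hmm_range_fault(), per-PFN validity checks and the actual device page-table programming are all omitted, and the range fields (start, end, default_flags, notifier) are assumed to be set up by the caller.

#include <linux/dma-mapping.h>
#include <linux/hmm.h>
#include <linux/hmm-dma.h>

static int example_map_range(struct device *dev, struct hmm_range *range,
			     struct hmm_dma_map *map, size_t npages)
{
	struct pci_p2pdma_map_state p2pdma_state = {};
	size_t i;
	int ret;

	/* One PFN entry per PAGE_SIZE chunk, DMA entries of PAGE_SIZE each. */
	ret = hmm_dma_map_alloc(dev, map, npages, PAGE_SIZE);
	if (ret)
		return ret;

	/* hmm_range_fault() fills the PFN list owned by the map. */
	range->hmm_pfns = map->pfn_list;
	ret = hmm_range_fault(range);
	if (ret)
		goto err_free;

	for (i = 0; i < npages; i++) {
		dma_addr_t dma = hmm_dma_map_pfn(dev, map, i, &p2pdma_state);

		if (dma == DMA_MAPPING_ERROR) {
			ret = -EFAULT;
			goto err_unmap;
		}
		/* Program 'dma' into the device page table here. */
	}
	return 0;

err_unmap:
	while (i--)
		hmm_dma_unmap_pfn(dev, map, i);
err_free:
	hmm_dma_map_free(dev, map);
	return ret;
}

With dma_entry_size equal to PAGE_SIZE the same index is used for the PFN list and the DMA list; a caller that works on larger device pages (as the ODP conversion later in this series does) passes 1 << page_shift as dma_entry_size instead.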
Signed-off-by: Leon Romanovsky --- include/linux/hmm-dma.h | 32 +++++++ include/linux/hmm.h | 2 + mm/hmm.c | 195 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 229 insertions(+) create mode 100644 include/linux/hmm-dma.h diff --git a/include/linux/hmm-dma.h b/include/linux/hmm-dma.h new file mode 100644 index 000000000000..f6ce2a00d74d --- /dev/null +++ b/include/linux/hmm-dma.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */ +#ifndef LINUX_HMM_DMA_H +#define LINUX_HMM_DMA_H + +#include + +struct dma_iova_state; +struct pci_p2pdma_map_state; + +/* + * struct hmm_dma_map - array of PFNs and DMA addresses + * + * @state: DMA IOVA state + * @pfns: array of PFNs + * @dma_list: array of DMA addresses + * @dma_entry_size: size of each DMA entry in the array + */ +struct hmm_dma_map { + struct dma_iova_state state; + unsigned long *pfn_list; + dma_addr_t *dma_list; + size_t dma_entry_size; +}; + +int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map, + size_t nr_entries, size_t dma_entry_size); +void hmm_dma_map_free(struct device *dev, struct hmm_dma_map *map); +dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map, + size_t idx, struct pci_p2pdma_map_state *p2pdma_state); +bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx); +#endif /* LINUX_HMM_DMA_H */ diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 5dd655f6766b..62980ca8f3c5 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,7 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation * to mark that page is already DMA mapped * @@ -40,6 +41,7 @@ enum hmm_pfn_flags { HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), /* Sticky flag, carried from Input to Output */ + HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6), HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7), HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), diff --git a/mm/hmm.c b/mm/hmm.c index 2a0c34d7cb2b..85cd6f20303c 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -10,6 +10,7 @@ */ #include #include +#include #include #include #include @@ -23,6 +24,7 @@ #include #include #include +#include #include #include @@ -615,3 +617,196 @@ int hmm_range_fault(struct hmm_range *range) return ret; } EXPORT_SYMBOL(hmm_range_fault); + +/** + * hmm_dma_map_alloc - Allocate HMM map structure + * @dev: device to allocate structure for + * @map: HMM map to allocate + * @nr_entries: number of entries in the map + * @dma_entry_size: size of the DMA entry in the map + * + * Allocate the HMM map structure and all the lists it contains. + * Return 0 on success, -ENOMEM on failure. + */ +int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map, + size_t nr_entries, size_t dma_entry_size) +{ + bool dma_need_sync = false; + bool use_iova; + + if (!(nr_entries * PAGE_SIZE / dma_entry_size)) + return -EINVAL; + + /* + * The HMM API violates our normal DMA buffer ownership rules and can't + * transfer buffer ownership. The dma_addressing_limited() check is a + * best approximation to ensure no swiotlb buffering happens. 
+ */ + if (IS_ENABLED(CONFIG_DMA_NEED_SYNC)) + dma_need_sync = !dev->dma_skip_sync; + if (dma_need_sync || dma_addressing_limited(dev)) + return -EOPNOTSUPP; + + map->dma_entry_size = dma_entry_size; + map->pfn_list = + kvcalloc(nr_entries, sizeof(*map->pfn_list), GFP_KERNEL); + if (!map->pfn_list) + return -ENOMEM; + + use_iova = dma_iova_try_alloc(dev, &map->state, 0, + nr_entries * PAGE_SIZE); + if (!use_iova && dma_need_unmap(dev)) { + map->dma_list = kvcalloc(nr_entries, sizeof(*map->dma_list), + GFP_KERNEL); + if (!map->dma_list) + goto err_dma; + } + return 0; + +err_dma: + kfree(map->pfn_list); + return -ENOMEM; +} +EXPORT_SYMBOL_GPL(hmm_dma_map_alloc); + +/** + * hmm_dma_map_free - iFree HMM map structure + * @dev: device to free structure from + * @map: HMM map containing the various lists and state + * + * Free the HMM map structure and all the lists it contains. + */ +void hmm_dma_map_free(struct device *dev, struct hmm_dma_map *map) +{ + if (dma_use_iova(&map->state)) + dma_iova_free(dev, &map->state); + kfree(map->pfn_list); + kfree(map->dma_list); +} +EXPORT_SYMBOL_GPL(hmm_dma_map_free); + +/** + * hmm_dma_map_pfn - Map a physical HMM page to DMA address + * @dev: Device to map the page for + * @map: HMM map + * @idx: Index into the PFN and dma address arrays + * @pci_p2pdma_map_state: PCI P2P state. + * + * dma_alloc_iova() allocates IOVA based on the size specified by their use in + * iova->size. Call this function after IOVA allocation to link whole @page + * to get the DMA address. Note that very first call to this function + * will have @offset set to 0 in the IOVA space allocated from + * dma_alloc_iova(). For subsequent calls to this function on same @iova, + * @offset needs to be advanced by the caller with the size of previous + * page that was linked + DMA address returned for the previous page that was + * linked by this function. + */ +dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map, + size_t idx, struct pci_p2pdma_map_state *p2pdma_state) +{ + struct dma_iova_state *state = &map->state; + dma_addr_t *dma_addrs = map->dma_list; + unsigned long *pfns = map->pfn_list; + struct page *page = hmm_pfn_to_page(pfns[idx]); + phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]); + size_t offset = idx * map->dma_entry_size; + dma_addr_t dma_addr; + int ret; + + if ((pfns[idx] & HMM_PFN_DMA_MAPPED) && + !(pfns[idx] & HMM_PFN_P2PDMA_BUS)) { + /* + * We are in this flow when there is a need to resync flags, + * for example when page was already linked in prefetch call + * with READ flag and now we need to add WRITE flag + * + * This page was already programmed to HW and we don't want/need + * to unlink and link it again just to resync flags. + */ + if (dma_use_iova(state)) + return state->addr + offset; + + /* + * Without dma_need_unmap, the dma_addrs array is NULL, thus we + * need to regenerate the address below even if there already + * was a mapping. But !dma_need_unmap implies that the + * mapping stateless, so this is fine. 
+ */ + if (dma_need_unmap(dev)) + return dma_addrs[idx]; + + /* Continue to remapping */ + } + + switch (pci_p2pdma_state(p2pdma_state, dev, page)) { + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: + case PCI_P2PDMA_MAP_NONE: + break; + case PCI_P2PDMA_MAP_BUS_ADDR: + dma_addr = pci_p2pdma_bus_addr_map(p2pdma_state, paddr); + pfns[idx] |= HMM_PFN_P2PDMA_BUS; + goto done; + default: + return DMA_MAPPING_ERROR; + } + + if (dma_use_iova(state)) { + ret = dma_iova_link(dev, state, paddr, offset, + map->dma_entry_size, DMA_BIDIRECTIONAL, 0); + ret = dma_iova_sync(dev, state, offset, map->dma_entry_size, + ret); + if (ret) + return DMA_MAPPING_ERROR; + + dma_addr = state->addr + offset; + } else { + if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs)) + return DMA_MAPPING_ERROR; + + dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, dma_addr)) + return DMA_MAPPING_ERROR; + + if (dma_need_unmap(dev)) + dma_addrs[idx] = dma_addr; + } + +done: + pfns[idx] |= HMM_PFN_DMA_MAPPED; + return dma_addr; +} +EXPORT_SYMBOL_GPL(hmm_dma_map_pfn); + +/** + * hmm_dma_unmap_pfn - Unmap a physical HMM page from DMA address + * @dev: Device to unmap the page from + * @map: HMM map + * @idx: Index of the PFN to unmap + * + * Returns true if the PFN was mapped and has been unmapped, false otherwise. + */ +bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx) +{ + struct dma_iova_state *state = &map->state; + dma_addr_t *dma_addrs = map->dma_list; + unsigned long *pfns = map->pfn_list; + +#define HMM_PFN_VALID_DMA (HMM_PFN_VALID | HMM_PFN_DMA_MAPPED) + if ((pfns[idx] & HMM_PFN_VALID_DMA) != HMM_PFN_VALID_DMA) + return false; +#undef HMM_PFN_VALID_DMA + + if (pfns[idx] & HMM_PFN_P2PDMA_BUS) + ; /* no need to unmap bus address P2P mappings */ + else if (dma_use_iova(state)) + dma_iova_unlink(dev, state, idx * map->dma_entry_size, + map->dma_entry_size, DMA_BIDIRECTIONAL, 0); + else if (dma_need_unmap(dev)) + dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size, + DMA_BIDIRECTIONAL); + + pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA_BUS); + return true; +} +EXPORT_SYMBOL_GPL(hmm_dma_unmap_pfn); From patchwork Sun Oct 27 14:21:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4893FD13561 for ; Sun, 27 Oct 2024 14:22:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CC7C86B00A4; Sun, 27 Oct 2024 10:22:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C76876B00A5; Sun, 27 Oct 2024 10:22:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AC8E66B00A6; Sun, 27 Oct 2024 10:22:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 8C5856B00A4 for ; Sun, 27 Oct 2024 10:22:19 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id A804D121B41 for ; Sun, 27 Oct 2024 14:21:59 +0000 (UTC) X-FDA: 82719596568.12.2F571D7 Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by 
From: Leon Romanovsky
Subject: [PATCH 12/18] RDMA/umem: Store ODP access mask information in PFN
Date: Sun, 27 Oct 2024 16:21:12 +0200
Message-ID: <68d43b7fa3d91cf9a107ede26777ef544f5e016b.1730037276.git.leon@kernel.org>
U2FsdGVkX18cNCm2GFbqdTW1P2CLD1gTJuzfaLZO0qCEWIFyhZkuqE26OMg5eYYcU9ExxH80m48IHU15qa43zGyOug4mSACzo7UNNLot8cqU92gdcqxXCj6mrZWCB3jdVWb+RUV1ZNQSM4AF61CzFe6LxZMjGzWwJAgjX4K9T7bNgmlAs3qy9YY+gxh7lcXr8TfCYQu6e4Z2n1tvgvcDQwOGn9YZVYi6WRYaIPCEK8wuG9hlj5xhrdU+Ck8rXjtqopcyS2YANOgoZSLaOIWGM9Ux9R1VmP7sxzhYmACE8Ecsqg9QOBEFS0PuMm1si+IbBfB1tIBLc4XXtueCQ5AVuU78IZDfYp00zWPgrl+6HbOUFglbfYeb9xTwntXs2MbbKP95h7JF0fC5YdjH54SHQUcAEID8WiX1T77PHk6XtjKdH7wrRjSKIsHi0a89mXqYbxIT4JuqMBiTaJ1b6bgH1HtILmVLr9yn+72rRVOauXLNsCGbu1NGy6akgAmeH5xqB1VYLldqLBxiZSH7C+zqe/awmh+Qz6GDOTZHthzD4RmA69gXGA2Aj6PDEgL7eCJu/Ry1UwJiIFzmesYLaeKHfVwuKSbTzDhNL/+D85y/W6EjGmSRt53rntIGkr7e9RCbcJgwePAHz/QeUyfELGtth4NuZOzwVukyhYJ+SkU/7UDsO/tqWBftFR0jb/JTqKQOsbsG2qjLQnRXN7ajGg2Yh4xW8QQR10xe5Hk5GIkZjZFnqNGGF4Dj7D+fxYstzCF7CcZ3Kc7gDk+0fIO+TnoJ28n1IcYsuDAgTV0jQTYbWc8S58xFmoTPzYIRG/4ZvOPIr6fGMd0GoLZaZbQB2UE11s3Mpr2QWsldJrUzlbgm9+soHnRyNRmp7AHLk24+1nWi3SRrukuUfiaevh2boe9wbqwgfBZvjIdYJPd/ZsuEgfhfHPLx0VZL1T5K+2E6btwFnF8qlsKIiwKTXGYGRH5 XBXneqV9 qrVuTbm0IXUzRu15uchso6H0H4VCtUbeZApfWegp9t4PwEXprVTwj87DOb8gAUx5Ue2WAL03bB1Gt3Z1hXTGMPdOehXDAw2uYx+dos24aQnPT7AfD9zovhanoYvcchLLO2aX4kt8g5dBMG5e5QRx9QTeIHPItmkSUHOiHJ0p2ENTQa3LrXzCYzxdRI55BU4Rjvulec05ZHqf4cYNPdtLYwRi93Hn6OlNnc3Clsj6Yi4qCgBI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to remove dma_list, store access mask in PFN pointer and not in dma_addr_t. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 100 +++++++++++---------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/odp.c | 37 +++++----- include/rdma/ib_umem_odp.h | 14 +--- 4 files changed, 61 insertions(+), 91 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e9fa22d31c23..9dba369365af 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -296,22 +296,11 @@ EXPORT_SYMBOL(ib_umem_odp_release); static int ib_umem_odp_map_dma_single_page( struct ib_umem_odp *umem_odp, unsigned int dma_index, - struct page *page, - u64 access_mask) + struct page *page) { struct ib_device *dev = umem_odp->umem.ibdev; dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - if (*dma_addr) { - /* - * If the page is already dma mapped it means it went through - * a non-invalidating trasition, like read-only to writable. - * Resync the flags. 
- */ - *dma_addr = (*dma_addr & ODP_DMA_ADDR_MASK) | access_mask; - return 0; - } - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, DMA_BIDIRECTIONAL); if (ib_dma_mapping_error(dev, *dma_addr)) { @@ -319,7 +308,6 @@ static int ib_umem_odp_map_dma_single_page( return -EFAULT; } umem_odp->npages++; - *dma_addr |= access_mask; return 0; } @@ -355,9 +343,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, struct hmm_range range = {}; unsigned long timeout; - if (access_mask == 0) - return -EINVAL; - if (user_virt < ib_umem_start(umem_odp) || user_virt + bcnt > ib_umem_end(umem_odp)) return -EFAULT; @@ -383,7 +368,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, if (fault) { range.default_flags = HMM_PFN_REQ_FAULT; - if (access_mask & ODP_WRITE_ALLOWED_BIT) + if (access_mask & HMM_PFN_WRITE) range.default_flags |= HMM_PFN_REQ_WRITE; } @@ -415,22 +400,17 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, for (pfn_index = 0; pfn_index < num_pfns; pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) { - if (fault) { - /* - * Since we asked for hmm_range_fault() to populate - * pages it shouldn't return an error entry on success. - */ - WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); - WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); - } else { - if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) { - WARN_ON(umem_odp->dma_list[dma_index]); - continue; - } - access_mask = ODP_READ_ALLOWED_BIT; - if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) - access_mask |= ODP_WRITE_ALLOWED_BIT; - } + /* + * Since we asked for hmm_range_fault() to populate + * pages it shouldn't return an error entry on success. + */ + WARN_ON(fault && range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); + WARN_ON(fault && !(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); + if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) + continue; + + if (range.hmm_pfns[pfn_index] & HMM_PFN_DMA_MAPPED) + continue; hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]); /* If a hugepage was detected and ODP wasn't set for, the umem @@ -445,13 +425,13 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, } ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]), - access_mask); + umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); if (ret < 0) { ibdev_dbg(umem_odp->umem.ibdev, "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); break; } + range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -471,7 +451,6 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma_addr; dma_addr_t dma; int idx; u64 addr; @@ -482,34 +461,35 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { + unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; + struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); + idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; dma = umem_odp->dma_list[idx]; - /* The access flags guaranteed a valid DMA address in case was NULL */ - if (dma) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - struct page 
*page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - dma_addr = dma & ODP_DMA_ADDR_MASK; - ib_dma_unmap_page(dev, dma_addr, - BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (dma & ODP_WRITE_ALLOWED_BIT) { - struct page *head_page = compound_head(page); - /* - * set_page_dirty prefers being called with - * the page lock. However, MMU notifiers are - * called sometimes with and sometimes without - * the lock. We rely on the umem_mutex instead - * to prevent other mmu notifiers from - * continuing and allowing the page mapping to - * be removed. - */ - set_page_dirty(head_page); - } - umem_odp->dma_list[idx] = 0; - umem_odp->npages--; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) + goto clear; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) + goto clear; + + ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), + DMA_BIDIRECTIONAL); + if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + struct page *head_page = compound_head(page); + /* + * set_page_dirty prefers being called with + * the page lock. However, MMU notifiers are + * called sometimes with and sometimes without + * the lock. We rely on the umem_mutex instead + * to prevent other mmu notifiers from + * continuing and allowing the page mapping to + * be removed. + */ + set_page_dirty(head_page); } + umem_odp->npages--; +clear: + umem_odp->pfn_list[pfn_idx] &= ~HMM_PFN_FLAGS; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 23fd72f7f63d..3e4aaa6319db 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -336,6 +336,7 @@ struct mlx5_ib_flow_db { #define MLX5_IB_UPD_XLT_PD BIT(4) #define MLX5_IB_UPD_XLT_ACCESS BIT(5) #define MLX5_IB_UPD_XLT_INDIRECT BIT(6) +#define MLX5_IB_UPD_XLT_DOWNGRADE BIT(7) /* Private QP creation flags to be passed in ib_qp_init_attr.create_flags. * diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 4b37446758fd..78887500ce15 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -158,22 +159,12 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static u64 umem_dma_to_mtt(dma_addr_t umem_dma) -{ - u64 mtt_entry = umem_dma & ODP_DMA_ADDR_MASK; - - if (umem_dma & ODP_READ_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_READ; - if (umem_dma & ODP_WRITE_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_WRITE; - - return mtt_entry; -} - static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); + bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + unsigned long pfn; dma_addr_t pa; size_t i; @@ -181,8 +172,17 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, return; for (i = 0; i < nentries; i++) { + pfn = odp->pfn_list[idx + i]; + if (!(pfn & HMM_PFN_VALID)) + /* ODP initialization */ + continue; + pa = odp->dma_list[idx + i]; - pas[i] = cpu_to_be64(umem_dma_to_mtt(pa)); + pa |= MLX5_IB_MTT_READ; + if ((pfn & HMM_PFN_WRITE) && !downgrade) + pa |= MLX5_IB_MTT_WRITE; + + pas[i] = cpu_to_be64(pa); } } @@ -286,8 +286,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. 
*/ - if (umem_odp->dma_list[idx] & - (ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) { + if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; @@ -668,7 +667,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, { int page_shift, ret, np; bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE; - u64 access_mask; + u64 access_mask = 0; u64 start_idx; bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT); u32 xlt_flags = MLX5_IB_UPD_XLT_ATOMIC; @@ -676,12 +675,14 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, if (flags & MLX5_PF_FLAGS_ENABLE) xlt_flags |= MLX5_IB_UPD_XLT_ENABLE; + if (flags & MLX5_PF_FLAGS_DOWNGRADE) + xlt_flags |= MLX5_IB_UPD_XLT_DOWNGRADE; + page_shift = odp->page_shift; start_idx = (user_va - ib_umem_start(odp)) >> page_shift; - access_mask = ODP_READ_ALLOWED_BIT; if (odp->umem.writable && !downgrade) - access_mask |= ODP_WRITE_ALLOWED_BIT; + access_mask |= HMM_PFN_WRITE; np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault); if (np < 0) diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 0844c1d05ac6..a345c26a745d 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -8,6 +8,7 @@ #include #include +#include struct ib_umem_odp { struct ib_umem umem; @@ -67,19 +68,6 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) umem_odp->page_shift; } -/* - * The lower 2 bits of the DMA address signal the R/W permissions for - * the entry. To upgrade the permissions, provide the appropriate - * bitmask to the map_dma_pages function. - * - * Be aware that upgrading a mapped address might result in change of - * the DMA address for the page. - */ -#define ODP_READ_ALLOWED_BIT (1<<0ULL) -#define ODP_WRITE_ALLOWED_BIT (1<<1ULL) - -#define ODP_DMA_ADDR_MASK (~(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) - #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_umem_odp * From patchwork Sun Oct 27 14:21:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CB2CD13561 for ; Sun, 27 Oct 2024 14:22:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B454E6B00A6; Sun, 27 Oct 2024 10:22:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AF0AC6B00A7; Sun, 27 Oct 2024 10:22:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 943396B00A8; Sun, 27 Oct 2024 10:22:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 6DDF76B00A6 for ; Sun, 27 Oct 2024 10:22:23 -0400 (EDT) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 40C66AC8B4 for ; Sun, 27 Oct 2024 14:21:40 +0000 (UTC) X-FDA: 82719595854.17.49A805C Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf05.hostedemail.com (Postfix) with ESMTP id 83730100006 for ; Sun, 27 Oct 2024 14:21:39 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=fOZHmxpc; 
From: Leon Romanovsky
Subject: [PATCH 13/18] RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
Date: Sun, 27 Oct 2024 16:21:13 +0200
Message-ID: <5b825e07dd9b61d3d543664e3c833ed1aaa12eae.1730037276.git.leon@kernel.org>
U2FsdGVkX18B021havAgQo7QrI9G+B2XdNBF1BIOfnObRZ/hxM/0/iVhtOFKXcqRM1v4M+N4d7zkbYQSNinyHzWQfPEvX3Mj5MBQcScD/MPg4zLtASfSjrwFiOkKnVtSBWJyb2DGXFgiMKOoIoKVnlLrivgVPZkipZrFQmNiT66HQlbGivIuS/KXEBr39UFnCG2Wv57oR+9W2OK5Py3PH7CUFmmJ6X8GMcrEuBapU2C38Q4hkRjTzhIDye8UEagnK5aFAqGqS6eH9vAIsGWshLo/l0OoHPDR1ABt6dIbYJ4GqPSw6yLhqjBwey3rULbrj6tebDjBLzz4C/gyzk8U5SkhYq1P2jT5dXcZOCAoa8bkkrL7OWVUFNofMeDgAj1ZHjoHuouR8j46jMuAA8Z49BZdXWxjL+/+8LKrQi+3oy6c8Tp91vhd13vpp4+hWaQzwPKyy/mupvZHGoq46vOMTNUCIpvJYkn1qJMIi8rz/lNDGFw/+x4as2FLiICfsexS/L70Ys80W1NhiSrOw33tBUbSoPmBHPveWilQnLfE7KP/ZnalkTWnTupz3QVXC5RBGoVUf5JJuuM4zQV8/IA8Wc3OfLKTOwo3oC/FdwFQbrF2kQ+8NAYy2EHfEIIZXKrjXhfydiTh0sWlk1ejbvIjSAemvuW4MLAOpU3eECnHMWWeTXjBQHN+h+bKL7/dL2PGaGHgv4BzzTzO7K6Pagzd6O7hq6nuYEALkXz0Xhpa8kCr952qN14s/hn7cFGPHTkZRlMTmUTC+VKPDmkopTmY8PF97fKL570aRrY5EaS5fgW8IF2bdIrGMU8uuSEchCQZc6olyLwItL/HUoAf8N5beCWZBoKWU5vw4FacIjGDPwbwYKzG0VwPf4p/OseH9z/gOXTgDv7krQi4AIK2HJVnpYH5pPXuq0J2mxa7u3Lm/YpbAe+OgDBPHmaZA9al/nc4hR/hYp4pvpmnZZ7XOCY 8T7oeHAG mcw5n/1iBzOh33OEwNzJ5ZL5gAtzxKsI8/w5s/HlC0yU3Fe7VMKAxuecYDD/UtXL21cKZCRFrtVrMlNtozk97dyMVFa2wKAmH3fNka3pU3ELb6nWr/nxdlkvQopDfVvCYLpDUfWy+iZ9DNvFLkepj9NtW/spy0DQoLlf5oFbLXDTAGQOlEcv/OSeQrPGqBIcwZUWxYXz6BLDpszGqVEzrzmYc59nlKXykZIcauqsBy29f4UU= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Reuse newly added DMA API to cache IOVA and only link/unlink pages in fast path for UMEM ODP flow. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 101 ++++++--------------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 11 +-- drivers/infiniband/hw/mlx5/odp.c | 40 +++++++---- drivers/infiniband/hw/mlx5/umr.c | 12 +++- include/rdma/ib_umem_odp.h | 13 +--- 5 files changed, 69 insertions(+), 108 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 9dba369365af..30cd8f353476 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include @@ -50,6 +51,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { + struct ib_device *dev = umem_odp->umem.ibdev; int ret; umem_odp->umem.is_odp = 1; @@ -59,7 +61,6 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, size_t page_size = 1UL << umem_odp->page_shift; unsigned long start; unsigned long end; - size_t ndmas, npfns; start = ALIGN_DOWN(umem_odp->umem.address, page_size); if (check_add_overflow(umem_odp->umem.address, @@ -70,36 +71,23 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, if (unlikely(end < page_size)) return -EOVERFLOW; - ndmas = (end - start) >> umem_odp->page_shift; - if (!ndmas) - return -EINVAL; - - npfns = (end - start) >> PAGE_SHIFT; - umem_odp->pfn_list = kvcalloc( - npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); - if (!umem_odp->pfn_list) - return -ENOMEM; - - umem_odp->dma_list = kvcalloc( - ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL); - if (!umem_odp->dma_list) { - ret = -ENOMEM; - goto out_pfn_list; - } + ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, + (end - start) >> PAGE_SHIFT, + 1 << umem_odp->page_shift); + if (ret) + return ret; ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, start, end - start, ops); if (ret) - goto out_dma_list; + goto out_free_map; } return 0; -out_dma_list: - 
kvfree(umem_odp->dma_list); -out_pfn_list: - kvfree(umem_odp->pfn_list); +out_free_map: + hmm_dma_map_free(dev->dma_device, &umem_odp->map); return ret; } @@ -262,6 +250,8 @@ EXPORT_SYMBOL(ib_umem_odp_get); void ib_umem_odp_release(struct ib_umem_odp *umem_odp) { + struct ib_device *dev = umem_odp->umem.ibdev; + /* * Ensure that no more pages are mapped in the umem. * @@ -274,48 +264,17 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) ib_umem_end(umem_odp)); mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); - kvfree(umem_odp->dma_list); - kvfree(umem_odp->pfn_list); + hmm_dma_map_free(dev->dma_device, &umem_odp->map); } put_pid(umem_odp->tgid); kfree(umem_odp); } EXPORT_SYMBOL(ib_umem_odp_release); -/* - * Map for DMA and insert a single page into the on-demand paging page tables. - * - * @umem: the umem to insert the page to. - * @dma_index: index in the umem to add the dma to. - * @page: the page struct to map and add. - * @access_mask: access permissions needed for this page. - * - * The function returns -EFAULT if the DMA mapping operation fails. - * - */ -static int ib_umem_odp_map_dma_single_page( - struct ib_umem_odp *umem_odp, - unsigned int dma_index, - struct page *page) -{ - struct ib_device *dev = umem_odp->umem.ibdev; - dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, - DMA_BIDIRECTIONAL); - if (ib_dma_mapping_error(dev, *dma_addr)) { - *dma_addr = 0; - return -EFAULT; - } - umem_odp->npages++; - return 0; -} - /** * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it. * * Maps the range passed in the argument to DMA addresses. - * The DMA addresses of the mapped pages is updated in umem_odp->dma_list. * Upon success the ODP MR will be locked to let caller complete its device * page table update. 
* @@ -372,7 +331,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, range.default_flags |= HMM_PFN_REQ_WRITE; } - range.hmm_pfns = &(umem_odp->pfn_list[pfn_start_idx]); + range.hmm_pfns = &(umem_odp->map.pfn_list[pfn_start_idx]); timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); retry: @@ -423,15 +382,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, __func__, hmm_order, page_shift); break; } - - ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); - if (ret < 0) { - ibdev_dbg(umem_odp->umem.ibdev, - "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); - break; - } - range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -451,30 +401,23 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma; - int idx; - u64 addr; struct ib_device *dev = umem_odp->umem.ibdev; + u64 addr; lockdep_assert_held(&umem_odp->umem_mutex); virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; - dma = umem_odp->dma_list[idx]; + u64 offset = addr - ib_umem_start(umem_odp); + size_t idx = offset >> umem_odp->page_shift; + unsigned long pfn = umem_odp->map.pfn_list[idx]; - if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) - goto clear; - if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) + if (!hmm_dma_unmap_pfn(dev->dma_device, &umem_odp->map, idx)) goto clear; - ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + if (pfn & HMM_PFN_WRITE) { + struct page *page = hmm_pfn_to_page(pfn); struct page *head_page = compound_head(page); /* * set_page_dirty prefers being called with @@ -489,7 +432,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, } umem_odp->npages--; clear: - umem_odp->pfn_list[pfn_idx] &= ~HMM_PFN_FLAGS; + umem_odp->map.pfn_list[idx] &= ~HMM_PFN_FLAGS; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 3e4aaa6319db..1bae5595c729 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1444,8 +1444,8 @@ void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev); int __init mlx5_ib_odp_init(void); void mlx5_ib_odp_cleanup(void); int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev); -void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags); +int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags); int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, @@ -1466,8 +1466,11 @@ static inline int mlx5_odp_init_mkey_cache(struct mlx5_ib_dev *dev) { return 0; } -static inline void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) {} +static inline int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags) +{ + return -EOPNOTSUPP; +} static 
inline int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 78887500ce15..fbb2a5670c32 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -35,6 +35,8 @@ #include #include #include +#include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -159,40 +161,50 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) +static int populate_mtt(__be64 *pas, size_t start, size_t nentries, + struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; - unsigned long pfn; - dma_addr_t pa; + struct pci_p2pdma_map_state p2pdma_state = {}; + struct ib_device *dev = odp->umem.ibdev; size_t i; if (flags & MLX5_IB_UPD_XLT_ZAP) - return; + return 0; for (i = 0; i < nentries; i++) { - pfn = odp->pfn_list[idx + i]; + unsigned long pfn = odp->map.pfn_list[start + i]; + dma_addr_t dma_addr; + + pfn = odp->map.pfn_list[start + i]; if (!(pfn & HMM_PFN_VALID)) /* ODP initialization */ continue; - pa = odp->dma_list[idx + i]; - pa |= MLX5_IB_MTT_READ; + dma_addr = hmm_dma_map_pfn(dev->dma_device, &odp->map, + start + i, &p2pdma_state); + if (ib_dma_mapping_error(dev, dma_addr)) + return -EFAULT; + + dma_addr |= MLX5_IB_MTT_READ; if ((pfn & HMM_PFN_WRITE) && !downgrade) - pa |= MLX5_IB_MTT_WRITE; + dma_addr |= MLX5_IB_MTT_WRITE; - pas[i] = cpu_to_be64(pa); + pas[i] = cpu_to_be64(dma_addr); + odp->npages++; } + return 0; } -void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, - struct mlx5_ib_mr *mr, int flags) +int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries, + struct mlx5_ib_mr *mr, int flags) { if (flags & MLX5_IB_UPD_XLT_INDIRECT) { populate_klm(xlt, idx, nentries, mr, flags); + return 0; } else { - populate_mtt(xlt, idx, nentries, mr, flags); + return populate_mtt(xlt, idx, nentries, mr, flags); } } @@ -286,7 +298,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. */ - if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { + if (umem_odp->map.pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c index 887fd6fa3ba9..d7fa94ab23cf 100644 --- a/drivers/infiniband/hw/mlx5/umr.c +++ b/drivers/infiniband/hw/mlx5/umr.c @@ -811,7 +811,17 @@ int mlx5r_umr_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages, size_to_map = npages * desc_size; dma_sync_single_for_cpu(ddev, sg.addr, sg.length, DMA_TO_DEVICE); - mlx5_odp_populate_xlt(xlt, idx, npages, mr, flags); + /* + * npages is the maximum number of pages to map, but we + * can't guarantee that all pages are actually mapped. + * + * For example, if page is p2p of type which is not supported + * for mapping, the number of pages mapped will be less than + * requested. 
+ */ + err = mlx5_odp_populate_xlt(xlt, idx, npages, mr, flags); + if (err) + return err; dma_sync_single_for_device(ddev, sg.addr, sg.length, DMA_TO_DEVICE); sg.length = ALIGN(size_to_map, MLX5_UMR_FLEX_ALIGNMENT); diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index a345c26a745d..2a24bf791c10 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -8,24 +8,17 @@ #include #include -#include +#include struct ib_umem_odp { struct ib_umem umem; struct mmu_interval_notifier notifier; struct pid *tgid; - /* An array of the pfns included in the on-demand paging umem. */ - unsigned long *pfn_list; + struct hmm_dma_map map; /* - * An array with DMA addresses mapped for pfns in pfn_list. - * The lower two bits designate access permissions. - * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. - */ - dma_addr_t *dma_list; - /* - * The umem_mutex protects the page_list and dma_list fields of an ODP + * The umem_mutex protects the page_list field of an ODP * umem, allowing only a single thread to map/unmap pages. The mutex * also protects access to the mmu notifier counters. */ From patchwork Sun Oct 27 14:21:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81D2FD13562 for ; Sun, 27 Oct 2024 14:22:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1574D6B00A8; Sun, 27 Oct 2024 10:22:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 104E66B00A9; Sun, 27 Oct 2024 10:22:28 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EC12A6B00AA; Sun, 27 Oct 2024 10:22:27 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id C69246B00A8 for ; Sun, 27 Oct 2024 10:22:27 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 3B5D21C39CA for ; Sun, 27 Oct 2024 14:22:02 +0000 (UTC) X-FDA: 82719596442.03.47B335B Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf28.hostedemail.com (Postfix) with ESMTP id 3A46EC0025 for ; Sun, 27 Oct 2024 14:22:03 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=sztJgRnj; spf=pass (imf28.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=quarantine) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1730038790; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=zSWGIO8+eoNDo2boHntx+WY/7C1S7kHkrtB0FlbeWwg=; b=iaRjdy2SFhj2YD6xWc42XrqMKB54O/uqL0bOA00O9UkYkiVQGq4VjmVVd0kL4M3X7WZ+NA l4V5FLdB97n/CC8kkmxnyNxq23mmePjM0qEgi/0SxgfT18Eko/xD78DBfJSmlXxVw/GFuO qyDzIW5i9kr+R8Y8Uhh4LIzIrVTX/Bo= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1730038790; a=rsa-sha256; 
From: Leon Romanovsky
Subject: [PATCH 14/18] RDMA/umem: Separate implicit ODP initialization from explicit ODP
Date: Sun, 27 Oct 2024 16:21:14 +0200
Message-ID: <90a17feec5781fb79f56c4ad56f8844878205fab.1730037276.git.leon@kernel.org>
ULboCmGhLu7jUrb10Tlw0ggKzm8g1vTxOrZsvJTxf72DZN9IIske/aMzyO5DIHL3bu1xt3HX2J40iD5NiHzzBW330oNOBU+4OIlTk2LqGaNzDXqxJfCv6L92ux0PpiQqdgbd26I3P5AW+Q4AibwB+1OJORZGlUEe31Q59jSkZng3JMGUqqF1PgT4+IeZlGOfVNO7o133sdPeRTrlC4VjCSlSI4acUDH0P6NBV71xPWYJArTo= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Create separate functions for the implicit ODP initialization which is different from the explicit ODP initialization. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 91 +++++++++++++++--------------- 1 file changed, 46 insertions(+), 45 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 30cd8f353476..51d518989914 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -48,41 +48,44 @@ #include "uverbs.h" -static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, - const struct mmu_interval_notifier_ops *ops) +static void ib_init_umem_implicit_odp(struct ib_umem_odp *umem_odp) +{ + umem_odp->is_implicit_odp = 1; + umem_odp->umem.is_odp = 1; + mutex_init(&umem_odp->umem_mutex); +} + +static int ib_init_umem_odp(struct ib_umem_odp *umem_odp, + const struct mmu_interval_notifier_ops *ops) { struct ib_device *dev = umem_odp->umem.ibdev; + size_t page_size = 1UL << umem_odp->page_shift; + unsigned long start; + unsigned long end; int ret; umem_odp->umem.is_odp = 1; mutex_init(&umem_odp->umem_mutex); - if (!umem_odp->is_implicit_odp) { - size_t page_size = 1UL << umem_odp->page_shift; - unsigned long start; - unsigned long end; - - start = ALIGN_DOWN(umem_odp->umem.address, page_size); - if (check_add_overflow(umem_odp->umem.address, - (unsigned long)umem_odp->umem.length, - &end)) - return -EOVERFLOW; - end = ALIGN(end, page_size); - if (unlikely(end < page_size)) - return -EOVERFLOW; - - ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, - (end - start) >> PAGE_SHIFT, - 1 << umem_odp->page_shift); - if (ret) - return ret; - - ret = mmu_interval_notifier_insert(&umem_odp->notifier, - umem_odp->umem.owning_mm, - start, end - start, ops); - if (ret) - goto out_free_map; - } + start = ALIGN_DOWN(umem_odp->umem.address, page_size); + if (check_add_overflow(umem_odp->umem.address, + (unsigned long)umem_odp->umem.length, &end)) + return -EOVERFLOW; + end = ALIGN(end, page_size); + if (unlikely(end < page_size)) + return -EOVERFLOW; + + ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map, + (end - start) >> PAGE_SHIFT, + 1 << umem_odp->page_shift); + if (ret) + return ret; + + ret = mmu_interval_notifier_insert(&umem_odp->notifier, + umem_odp->umem.owning_mm, start, + end - start, ops); + if (ret) + goto out_free_map; return 0; @@ -106,7 +109,6 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, { struct ib_umem *umem; struct ib_umem_odp *umem_odp; - int ret; if (access & IB_ACCESS_HUGETLB) return ERR_PTR(-EINVAL); @@ -118,16 +120,10 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, umem->ibdev = device; umem->writable = ib_access_writable(access); umem->owning_mm = current->mm; - umem_odp->is_implicit_odp = 1; umem_odp->page_shift = PAGE_SHIFT; umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID); - ret = ib_init_umem_odp(umem_odp, NULL); - if (ret) { - put_pid(umem_odp->tgid); - kfree(umem_odp); - return ERR_PTR(ret); - } + 
ib_init_umem_implicit_odp(umem_odp); return umem_odp; } EXPORT_SYMBOL(ib_umem_odp_alloc_implicit); @@ -248,7 +244,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_device *device, } EXPORT_SYMBOL(ib_umem_odp_get); -void ib_umem_odp_release(struct ib_umem_odp *umem_odp) +static void ib_umem_odp_free(struct ib_umem_odp *umem_odp) { struct ib_device *dev = umem_odp->umem.ibdev; @@ -258,14 +254,19 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) * It is the driver's responsibility to ensure, before calling us, * that the hardware will not attempt to access the MR any more. */ - if (!umem_odp->is_implicit_odp) { - mutex_lock(&umem_odp->umem_mutex); - ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp), - ib_umem_end(umem_odp)); - mutex_unlock(&umem_odp->umem_mutex); - mmu_interval_notifier_remove(&umem_odp->notifier); - hmm_dma_map_free(dev->dma_device, &umem_odp->map); - } + mutex_lock(&umem_odp->umem_mutex); + ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp), + ib_umem_end(umem_odp)); + mutex_unlock(&umem_odp->umem_mutex); + mmu_interval_notifier_remove(&umem_odp->notifier); + hmm_dma_map_free(dev->dma_device, &umem_odp->map); +} + +void ib_umem_odp_release(struct ib_umem_odp *umem_odp) +{ + if (!umem_odp->is_implicit_odp) + ib_umem_odp_free(umem_odp); + put_pid(umem_odp->tgid); kfree(umem_odp); } From patchwork Sun Oct 27 14:21:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852660 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10E0CD13561 for ; Sun, 27 Oct 2024 14:22:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 093C38D0006; Sun, 27 Oct 2024 10:22:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F36D48D0001; Sun, 27 Oct 2024 10:22:47 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D89418D0006; Sun, 27 Oct 2024 10:22:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id B52488D0001 for ; Sun, 27 Oct 2024 10:22:47 -0400 (EDT) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 73D531A1C1E for ; Sun, 27 Oct 2024 14:22:08 +0000 (UTC) X-FDA: 82719597912.18.0244770 Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf19.hostedemail.com (Postfix) with ESMTP id 2D3151A0017 for ; Sun, 27 Oct 2024 14:22:18 +0000 (UTC) Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=kBnqU+gO; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf19.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1730038838; a=rsa-sha256; cv=none; b=3JK6TSVASlFOVT3J6Se2UrcA56wdn3nOWNBC+8GkQYYzu5qPUVK/ciO4S8/stGVxxNa+sZ iryufXIfSRSwR6AAHyaDpBlj1E3isigPb6/BFRET/Z6aefTUkGYagnk7BJZQf4uO1iMi7e bC0Iklvr+sm3hILj0T/iyUPrbVd7bDA= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=kBnqU+gO; dmarc=pass (policy=quarantine) header.from=kernel.org; 
From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 15/18] vfio/mlx5: Explicitly use number of pages instead of allocated length Date: Sun, 27 Oct 2024 16:21:15 +0200 Message-ID: X-Mailer: git-send-email 2.46.2 In-Reply-To: References: MIME-Version: 1.0
INO1f2TJJRycExZ5lmrWjd3jv2kvL02d1mguf/t22VOJOwzlPUCuAQLoUejChXGsNwERh+cci5qqV3ZTK/KQVnSAnOjI92Z6WA9KISoZStdmblv4/QvwEMLh7+BpDLz9AtJaE2y/1w/UbzibbY1/nuFelflln582zlycqeoju4R4uBq/ieCHV9DexsCpgkUol7frNTxZtCWTeYF/ZrqIOOhF7XadCzHZDFYXa0Oyy2wFWpto= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky allocated_length is a multiple of page size and number of pages, so let's change the functions to accept number of pages. It opens us a venue to combine receive and send paths together with code readability improvement. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 32 ++++++++++----------- drivers/vfio/pci/mlx5/cmd.h | 10 +++---- drivers/vfio/pci/mlx5/main.c | 56 +++++++++++++++++++++++------------- 3 files changed, 57 insertions(+), 41 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 41a4b0cf4297..fdc3e515741f 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -318,8 +318,7 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, struct mlx5_vhca_recv_buf *recv_buf, u32 *mkey) { - size_t npages = buf ? DIV_ROUND_UP(buf->allocated_length, PAGE_SIZE) : - recv_buf->npages; + size_t npages = buf ? buf->npages : recv_buf->npages; int err = 0, inlen; __be64 *mtt; void *mkc; @@ -375,7 +374,7 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->allocated_length) + if (buf->dmaed || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); @@ -444,7 +443,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, if (ret) goto err; - buf->allocated_length += filled * PAGE_SIZE; + buf->npages += filled; /* clean input for another bulk allocation */ memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, @@ -460,8 +459,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, } struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf; @@ -473,9 +471,8 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, buf->dma_dir = dma_dir; buf->migf = migf; - if (length) { - ret = mlx5vf_add_migration_pages(buf, - DIV_ROUND_UP_ULL(length, PAGE_SIZE)); + if (npages) { + ret = mlx5vf_add_migration_pages(buf, npages); if (ret) goto end; @@ -501,8 +498,8 @@ void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf) } struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir) +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf, *temp_buf; struct list_head free_list; @@ -517,7 +514,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { if (buf->dma_dir == dma_dir) { list_del_init(&buf->buf_elm); - if (buf->allocated_length >= length) { + if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); goto found; } @@ -531,7 +528,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, } } 
spin_unlock_irq(&migf->list_lock); - buf = mlx5vf_alloc_data_buffer(migf, length, dma_dir); + buf = mlx5vf_alloc_data_buffer(migf, npages, dma_dir); found: while ((temp_buf = list_first_entry_or_null(&free_list, @@ -712,7 +709,7 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, MLX5_SET(save_vhca_state_in, in, op_mod, 0); MLX5_SET(save_vhca_state_in, in, vhca_id, mvdev->vhca_id); MLX5_SET(save_vhca_state_in, in, mkey, buf->mkey); - MLX5_SET(save_vhca_state_in, in, size, buf->allocated_length); + MLX5_SET(save_vhca_state_in, in, size, buf->npages * PAGE_SIZE); MLX5_SET(save_vhca_state_in, in, incremental, inc); MLX5_SET(save_vhca_state_in, in, set_track, track); @@ -734,8 +731,11 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, } if (!header_buf) { - header_buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + header_buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) { err = PTR_ERR(header_buf); goto err_free; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index df421dc6de04..7d4a833b6900 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -56,7 +56,7 @@ struct mlx5_vhca_data_buffer { struct sg_append_table table; loff_t start_pos; u64 length; - u64 allocated_length; + u32 npages; u32 mkey; enum dma_data_direction dma_dir; u8 dmaed:1; @@ -217,12 +217,12 @@ int mlx5vf_cmd_alloc_pd(struct mlx5_vf_migration_file *migf); void mlx5vf_cmd_dealloc_pd(struct mlx5_vf_migration_file *migf); void mlx5fv_cmd_clean_migf_resources(struct mlx5_vf_migration_file *migf); struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf); struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf); struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, unsigned long offset); diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 242c23eef452..a1dbee3be1e0 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -308,6 +308,7 @@ static struct mlx5_vhca_data_buffer * mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, u8 index, size_t required_length) { + u32 npages = DIV_ROUND_UP(required_length, PAGE_SIZE); struct mlx5_vhca_data_buffer *buf = migf->buf[index]; u8 chunk_num; @@ -315,12 +316,11 @@ mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, chunk_num = buf->stop_copy_chunk_num; buf->migf->buf[index] = NULL; /* Checking whether the pre-allocated buffer can fit */ - if (buf->allocated_length >= required_length) + if (buf->npages >= npages) return buf; mlx5vf_put_data_buffer(buf); - buf = mlx5vf_get_data_buffer(buf->migf, required_length, - DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(buf->migf, npages, DMA_FROM_DEVICE); if (IS_ERR(buf)) return buf; @@ -373,7 +373,8 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, u8 *to_buff; int ret; - header_buf = 
mlx5vf_get_data_buffer(migf, size, DMA_NONE); + header_buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(size, PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) return PTR_ERR(header_buf); @@ -388,7 +389,7 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, to_buff = kmap_local_page(page); memcpy(to_buff, &header, sizeof(header)); header_buf->length = sizeof(header); - data.stop_copy_size = cpu_to_le64(migf->buf[0]->allocated_length); + data.stop_copy_size = cpu_to_le64(migf->buf[0]->npages * PAGE_SIZE); memcpy(to_buff + sizeof(header), &data, sizeof(data)); header_buf->length += sizeof(data); kunmap_local(to_buff); @@ -437,15 +438,20 @@ static int mlx5vf_prep_stop_copy(struct mlx5vf_pci_core_device *mvdev, num_chunks = mvdev->chunk_mode ? MAX_NUM_CHUNKS : 1; for (i = 0; i < num_chunks; i++) { - buf = mlx5vf_get_data_buffer(migf, inc_state_size, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer( + migf, DIV_ROUND_UP(inc_state_size, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; } migf->buf[i] = buf; - buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; @@ -553,7 +559,8 @@ static long mlx5vf_precopy_ioctl(struct file *filp, unsigned int cmd, * We finished transferring the current state and the device has a * dirty state, save a new state to be ready for. */ - buf = mlx5vf_get_data_buffer(migf, inc_length, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(inc_length, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); mlx5vf_mark_err(migf); @@ -673,8 +680,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) if (track) { /* leave the allocated buffer ready for the stop-copy phase */ - buf = mlx5vf_alloc_data_buffer(migf, - migf->buf[0]->allocated_length, DMA_FROM_DEVICE); + buf = mlx5vf_alloc_data_buffer(migf, migf->buf[0]->npages, + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_pd; @@ -917,11 +924,14 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, goto out_unlock; break; case MLX5_VF_LOAD_STATE_PREP_HEADER_DATA: - if (vhca_buf_header->allocated_length < migf->record_size) { + { + u32 npages = DIV_ROUND_UP(migf->record_size, PAGE_SIZE); + + if (vhca_buf_header->npages < npages) { mlx5vf_free_data_buffer(vhca_buf_header); - migf->buf_header[0] = mlx5vf_alloc_data_buffer(migf, - migf->record_size, DMA_NONE); + migf->buf_header[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_NONE); if (IS_ERR(migf->buf_header[0])) { ret = PTR_ERR(migf->buf_header[0]); migf->buf_header[0] = NULL; @@ -934,6 +944,7 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, vhca_buf_header->start_pos = migf->max_pos; migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER_DATA; break; + } case MLX5_VF_LOAD_STATE_READ_HEADER_DATA: ret = mlx5vf_resume_read_header_data(migf, vhca_buf_header, &buf, &len, pos, &done); @@ -944,12 +955,13 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, { u64 size = max(migf->record_size, migf->stop_copy_prep_size); + u32 npages = DIV_ROUND_UP(size, PAGE_SIZE); - if (vhca_buf->allocated_length < size) { + if (vhca_buf->npages < npages) { mlx5vf_free_data_buffer(vhca_buf); - migf->buf[0] = mlx5vf_alloc_data_buffer(migf, - size, DMA_TO_DEVICE); + migf->buf[0] = 
mlx5vf_alloc_data_buffer( + migf, npages, DMA_TO_DEVICE); if (IS_ERR(migf->buf[0])) { ret = PTR_ERR(migf->buf[0]); migf->buf[0] = NULL; @@ -1031,8 +1043,11 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) } migf->buf[0] = buf; - buf = mlx5vf_alloc_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_alloc_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_buf; @@ -1149,7 +1164,8 @@ mlx5vf_pci_step_device_state_locked(struct mlx5vf_pci_core_device *mvdev, MLX5VF_QUERY_INC | MLX5VF_QUERY_CLEANUP); if (ret) return ERR_PTR(ret); - buf = mlx5vf_get_data_buffer(migf, size, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(migf, + DIV_ROUND_UP(size, PAGE_SIZE), DMA_FROM_DEVICE); if (IS_ERR(buf)) return ERR_CAST(buf); /* pre_copy cleanup */ From patchwork Sun Oct 27 14:21:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E39FD13562 for ; Sun, 27 Oct 2024 14:22:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8F7E68D0002; Sun, 27 Oct 2024 10:22:36 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8A6498D0001; Sun, 27 Oct 2024 10:22:36 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6D2FF8D0002; Sun, 27 Oct 2024 10:22:36 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 426128D0001 for ; Sun, 27 Oct 2024 10:22:36 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 48823121B85 for ; Sun, 27 Oct 2024 14:22:16 +0000 (UTC) X-FDA: 82719597660.14.25FE70A Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf08.hostedemail.com (Postfix) with ESMTP id 6DB5416001F for ; Sun, 27 Oct 2024 14:22:19 +0000 (UTC) Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=p469KH+i; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf08.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1730038781; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=PhW4VjnvtzkDooyFGath+3T4tsAPk9cdglqFB5sUC1Q=; b=BlJnhrfeCSaJ1qsN9qLdwqgkXlU/c/aLcyWqS2eLMIwzbwpEZpj4xyJFygr852sxZ4AHcO FTPdzHuGMnKExlBnAVCBvNSwfPm/fytt25FTGyL1jSYQ20yTH1D6SI4NrRDpGkBsBBnat9 i1k8Ig7GzGfeJwbKyzKHpqplgH8qyNM= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1730038781; a=rsa-sha256; cv=none; b=fDHxoSiG/SvaGVysvS13G7zKFttUrqVmPClHCnkO9g3XIhOHxnyIOncFpO4B696638rtf5 OvWLvwhBhqnTL3yodBb7o4yqWpioNVIqpSwr4mcM3wYnsi1/Phd1Zg7CBdDESx4RIJ9NUc 6G65neFM6rC0uY2dv7PApTy2HN8CfLc= ARC-Authentication-Results: i=1; 
From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 16/18] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Date: Sun, 27 Oct 2024 16:21:16 +0200 Message-ID: <3b0710f67d75daecc15529307de11d01da45f52c.1730037276.git.leon@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: References: MIME-Version: 1.0
List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Change the creation of mkey to be performed in multiple steps: data allocation, DMA setup and actual call to HW to create that mkey. In this new flow, the whole input to MKEY command is saved to eliminate the need to keep array of pointers for DMA addresses for receive list and in the future patches for send list too. In addition to memory size reduce and elimination of unnecessary data movements to set MKEY input, the code is prepared for future reuse. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 156 ++++++++++++++++++++---------------- drivers/vfio/pci/mlx5/cmd.h | 4 +- 2 files changed, 90 insertions(+), 70 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index fdc3e515741f..1832a6c1f35d 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -313,39 +313,21 @@ static int mlx5vf_cmd_get_vhca_id(struct mlx5_core_dev *mdev, u16 function_id, return ret; } -static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, - struct mlx5_vhca_data_buffer *buf, - struct mlx5_vhca_recv_buf *recv_buf, - u32 *mkey) +static u32 *alloc_mkey_in(u32 npages, u32 pdn) { - size_t npages = buf ? buf->npages : recv_buf->npages; - int err = 0, inlen; - __be64 *mtt; + int inlen; void *mkc; u32 *in; inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(*mtt) * round_up(npages, 2); + sizeof(__be64) * round_up(npages, 2); - in = kvzalloc(inlen, GFP_KERNEL); + in = kvzalloc(inlen, GFP_KERNEL_ACCOUNT); if (!in) - return -ENOMEM; + return NULL; MLX5_SET(create_mkey_in, in, translations_octword_actual_size, DIV_ROUND_UP(npages, 2)); - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); - - if (buf) { - struct sg_dma_page_iter dma_iter; - - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); - } else { - int i; - - for (i = 0; i < npages; i++) - *mtt++ = cpu_to_be64(recv_buf->dma_addrs[i]); - } mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT); @@ -359,9 +341,29 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2)); MLX5_SET64(mkc, mkc, len, npages * PAGE_SIZE); - err = mlx5_core_create_mkey(mdev, mkey, in, inlen); - kvfree(in); - return err; + + return in; +} + +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, + struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, + u32 *mkey) +{ + __be64 *mtt; + int inlen; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + if (buf) { + struct sg_dma_page_iter dma_iter; + + for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) + *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + } + + inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -374,20 +376,28 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->npages) + if (buf->mkey_in || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); if (ret) return ret; - ret = _create_mkey(mdev, buf->migf->pdn, buf, NULL, &buf->mkey); - if (ret) + buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); 
+ if (!buf->mkey_in) { + ret = -ENOMEM; goto err; + } - buf->dmaed = true; + ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + if (ret) + goto err_create_mkey; return 0; + +err_create_mkey: + kvfree(buf->mkey_in); + buf->mkey_in = NULL; err: dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; @@ -401,8 +411,9 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) lockdep_assert_held(&migf->mvdev->state_mutex); WARN_ON(migf->mvdev->mdev_detach); - if (buf->dmaed) { + if (buf->mkey_in) { mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + kvfree(buf->mkey_in); dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, buf->dma_dir, 0); } @@ -779,7 +790,7 @@ int mlx5vf_cmd_load_vhca_state(struct mlx5vf_pci_core_device *mvdev, if (mvdev->mdev_detach) return -ENOTCONN; - if (!buf->dmaed) { + if (!buf->mkey_in) { err = mlx5vf_dma_data_buffer(buf); if (err) return err; @@ -1380,56 +1391,54 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, kvfree(recv_buf->page_list); return -ENOMEM; } +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in) +{ + dma_addr_t addr; + __be64 *mtt; + int i; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_single(mdev->device, addr, PAGE_SIZE, + DMA_FROM_DEVICE); + } +} -static int register_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in) { - int i, j; + dma_addr_t addr; + __be64 *mtt; + int i; - recv_buf->dma_addrs = kvcalloc(recv_buf->npages, - sizeof(*recv_buf->dma_addrs), - GFP_KERNEL_ACCOUNT); - if (!recv_buf->dma_addrs) - return -ENOMEM; + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = 0; i < recv_buf->npages; i++) { - recv_buf->dma_addrs[i] = dma_map_page(mdev->device, - recv_buf->page_list[i], - 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, recv_buf->dma_addrs[i])) + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, + DMA_FROM_DEVICE); + if (dma_mapping_error(mdev->device, addr)) goto error; + + *mtt++ = cpu_to_be64(addr); } + return 0; error: - for (j = 0; j < i; j++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[j], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); + unregister_dma_pages(mdev, i, mkey_in); return -ENOMEM; } -static void unregister_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - for (i = 0; i < recv_buf->npages; i++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[i], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); -} - static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) { struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + kvfree(recv_buf->mkey_in); free_recv_pages(&qp->recv_buf); } @@ -1445,18 +1454,29 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, if (err < 0) return err; - err = register_dma_recv_pages(mdev, recv_buf); - if (err) + recv_buf->mkey_in = alloc_mkey_in(npages, pdn); + if (!recv_buf->mkey_in) { + err = -ENOMEM; goto end; + } + + err = 
register_dma_pages(mdev, npages, recv_buf->page_list, + recv_buf->mkey_in); + if (err) + goto err_register_dma; - err = _create_mkey(mdev, pdn, NULL, recv_buf, &recv_buf->mkey); + err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, + &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in); +err_register_dma: + kvfree(recv_buf->mkey_in); + recv_buf->mkey_in = NULL; end: free_recv_pages(recv_buf); return err; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 7d4a833b6900..25dd6ff54591 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -58,8 +58,8 @@ struct mlx5_vhca_data_buffer { u64 length; u32 npages; u32 mkey; + u32 *mkey_in; enum dma_data_direction dma_dir; - u8 dmaed:1; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; @@ -133,8 +133,8 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; - dma_addr_t *dma_addrs; u32 next_rq_offset; + u32 *mkey_in; u32 mkey; }; From patchwork Sun Oct 27 14:21:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852568 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05AF7D13562 for ; Sun, 27 Oct 2024 14:22:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E75B08D0003; Sun, 27 Oct 2024 10:22:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E26828D0001; Sun, 27 Oct 2024 10:22:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C78C08D0003; Sun, 27 Oct 2024 10:22:39 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id A67A78D0001 for ; Sun, 27 Oct 2024 10:22:39 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id BB2B040145 for ; Sun, 27 Oct 2024 14:22:27 +0000 (UTC) X-FDA: 82719596946.07.86449FE Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by imf14.hostedemail.com (Postfix) with ESMTP id E93F3100011 for ; Sun, 27 Oct 2024 14:22:12 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=XnYNHf81; dmarc=pass (policy=quarantine) header.from=kernel.org; spf=pass (imf14.hostedemail.com: domain of leon@kernel.org designates 147.75.193.91 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1730038785; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=AIaNNcm3XUGBo9OgO3ovSINrRXEDBCyPOnC0y7/NQWs=; b=if+SAr+bCVQPLxDbycqTEiKaXtwhXkwCCZ+uVOZqk+HP9FbN87MDqmFmV9aNVoz4m9v4kR 0+QiZdSmcxlNv5WtqQwGOUQXjWwwDswuPtrqAL22JJ1Q4u2NZjdBHwTt2ONJIyoWp82Ldz 2OxbdWqj2tIp0TgJ8yyGJ/RQqZ7dUmM= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1730038785; a=rsa-sha256; 
From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 17/18] vfio/mlx5: Explicitly store page list Date: Sun, 27 Oct 2024 16:21:17 +0200 Message-ID: <4c0a2f96d672d395caec3fe7dd6049a48b2582c6.1730037276.git.leon@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: References: MIME-Version: 1.0
trnQAbBkPCtHj16fwxn/OxV3UqWje7dBsm2CV4uBijh/cZg03nsmiJH7AFe25xZSCgoI/iahE0p7PyYp0FfNUqzZUvEm7f7h+GmUXenyECSHeahUnMl7T8JNZb84UZVNIeLyvjewba/KaH3ZCZJwu3/a27a1hDBdchBXB9pkh4974BXGvSEG3O9y9/fFptoyQCSp8UvACT3BU5bmHV7NPaXGp+V030a/qTGGKg9o0yPYUt9U= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to removal scatter-gather table and unifying receive and send list, explicitly store page list. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 29 ++++++++++++----------------- drivers/vfio/pci/mlx5/cmd.h | 1 + 2 files changed, 13 insertions(+), 17 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 1832a6c1f35d..34ae3e299a9e 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -422,6 +422,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) __free_page(sg_page_iter_page(&sg_iter)); sg_free_append_table(&buf->table); + kvfree(buf->page_list); kfree(buf); } @@ -434,39 +435,33 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, unsigned int to_fill; int ret; - to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); - page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT); + to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list)); + page_list = kvzalloc(to_fill * sizeof(*buf->page_list), GFP_KERNEL_ACCOUNT); if (!page_list) return -ENOMEM; + buf->page_list = page_list; + do { filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - page_list); - if (!filled) { - ret = -ENOMEM; - goto err; - } + buf->page_list + buf->npages); + if (!filled) + return -ENOMEM; + to_alloc -= filled; ret = sg_alloc_append_table_from_pages( - &buf->table, page_list, filled, 0, + &buf->table, buf->page_list + buf->npages, filled, 0, filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, GFP_KERNEL_ACCOUNT); if (ret) - goto err; + return ret; buf->npages += filled; - /* clean input for another bulk allocation */ - memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*page_list)); + PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); - kvfree(page_list); return 0; - -err: - kvfree(page_list); - return ret; } struct mlx5_vhca_data_buffer * diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 25dd6ff54591..5b764199db53 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -53,6 +53,7 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { + struct page **page_list; struct sg_append_table table; loff_t start_pos; u64 length; From patchwork Sun Oct 27 14:21:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13852659 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E7C1D13561 for ; Sun, 27 Oct 2024 14:22:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 250828D0005; Sun, 27 Oct 2024 10:22:44 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1F0408D0001; Sun, 27 Oct 2024 10:22:44 -0400 (EDT) 
From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Christoph Hellwig , Sagi Grimberg Cc: Leon Romanovsky , Keith Busch , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , Jonathan Corbet , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 18/18] vfio/mlx5: Convert vfio to use DMA link API Date: Sun, 27 Oct 2024 16:21:18 +0200 Message-ID: <0a517ddff099c14fac1ceb0e75f2f50ed183d09c.1730037276.git.leon@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: References: MIME-Version: 1.0 From: Leon Romanovsky Remove the intermediate scatter-gather table, which is no longer needed once the DMA link API is used. This conversion drastically reduces the memory required to manage the mapping.
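As a condensed sketch only (not the patch itself; MTT population into the mkey input and the error unwinding via unregister_dma_pages() are omitted, and the out[] array stands in for the MTT entries), the two mapping paths that register_dma_pages() in the diff below implements look roughly like this, using the dma_iova_* helpers with the signatures they have in this series:

/* Illustration only: condensed flow of register_dma_pages() below. */
static int sketch_register_dma_pages(struct mlx5_core_dev *mdev, u32 npages,
				     struct page **page_list,
				     struct dma_iova_state *state,
				     enum dma_data_direction dir,
				     dma_addr_t *out)
{
	size_t mapped = 0;
	int i, err = 0;

	if (dma_iova_try_alloc(mdev->device, state, 0, npages * PAGE_SIZE)) {
		/* One contiguous IOVA range; link each page into it. */
		for (i = 0; i < npages; i++) {
			err = dma_iova_link(mdev->device, state,
					    page_to_phys(page_list[i]), mapped,
					    PAGE_SIZE, dir, 0);
			if (err)
				break;
			out[i] = state->addr + mapped;	/* would go into the MTT */
			mapped += PAGE_SIZE;
		}
		/* Commit the linked range; also propagates a link error. */
		return dma_iova_sync(mdev->device, state, 0, mapped, err);
	}

	/* Fallback: one DMA mapping per page. */
	for (i = 0; i < npages; i++) {
		out[i] = dma_map_page(mdev->device, page_list[i], 0,
				      PAGE_SIZE, dir);
		if (dma_mapping_error(mdev->device, out[i]))
			return -ENOMEM;
	}
	return 0;
}
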
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 211 ++++++++++++++++++----------------- drivers/vfio/pci/mlx5/cmd.h | 9 +- drivers/vfio/pci/mlx5/main.c | 31 +---- 3 files changed, 114 insertions(+), 137 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 34ae3e299a9e..58c490222be7 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -345,25 +345,81 @@ static u32 *alloc_mkey_in(u32 npages, u32 pdn) return in; } -static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, - struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, u32 *mkey_in, u32 *mkey) { + int inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +} + +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in, struct dma_iova_state *state, + enum dma_data_direction dir) +{ + dma_addr_t addr; __be64 *mtt; - int inlen; + int i; - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - if (buf) { - struct sg_dma_page_iter dma_iter; + WARN_ON_ONCE(dir == DMA_NONE); - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + if (dma_use_iova(state)) { + dma_iova_destroy(mdev->device, state, dir, 0); + } else { + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, + klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_page(mdev->device, addr, PAGE_SIZE, dir); + } } +} - inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(__be64) * round_up(npages, 2); +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in, + struct dma_iova_state *state, + enum dma_data_direction dir) +{ + dma_addr_t addr; + size_t mapped = 0; + __be64 *mtt; + int i, err; - return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); + WARN_ON_ONCE(dir == DMA_NONE); + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + + if (dma_iova_try_alloc(mdev->device, state, 0, npages * PAGE_SIZE)) { + addr = state->addr; + for (i = 0; i < npages; i++) { + err = dma_iova_link(mdev->device, state, + page_to_phys(page_list[i]), mapped, + PAGE_SIZE, dir, 0); + if (err) + break; + *mtt++ = cpu_to_be64(addr); + addr += PAGE_SIZE; + mapped += PAGE_SIZE; + } + err = dma_iova_sync(mdev->device, state, 0, mapped, err); + if (err) + goto error; + } else { + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, + PAGE_SIZE, dir); + err = dma_mapping_error(mdev->device, addr); + if (err) + goto error; + *mtt++ = cpu_to_be64(addr); + } + } + return 0; + +error: + unregister_dma_pages(mdev, i, mkey_in, state, dir); + return err; } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -379,50 +435,57 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (buf->mkey_in || !buf->npages) return -EINVAL; - ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); - if (ret) - return ret; - buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); - if (!buf->mkey_in) { - ret = -ENOMEM; - goto err; - } + if (!buf->mkey_in) + return -ENOMEM; + + ret = register_dma_pages(mdev, buf->npages, buf->page_list, + buf->mkey_in, &buf->state, buf->dma_dir); + if (ret) + goto err_register_dma; - ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + ret = 
create_mkey(mdev, buf->npages, buf->mkey_in, &buf->mkey); if (ret) goto err_create_mkey; return 0; err_create_mkey: + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, &buf->state, + buf->dma_dir); +err_register_dma: kvfree(buf->mkey_in); buf->mkey_in = NULL; -err: - dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; } +static void free_page_list(u32 npages, struct page **page_list) +{ + int i; + + /* Undo alloc_pages_bulk_array() */ + for (i = npages - 1; i >= 0; i--) + __free_page(page_list[i]); + + kvfree(page_list); +} + void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) { - struct mlx5_vf_migration_file *migf = buf->migf; - struct sg_page_iter sg_iter; + struct mlx5vf_pci_core_device *mvdev = buf->migf->mvdev; + struct mlx5_core_dev *mdev = mvdev->mdev; - lockdep_assert_held(&migf->mvdev->state_mutex); - WARN_ON(migf->mvdev->mdev_detach); + lockdep_assert_held(&mvdev->state_mutex); + WARN_ON(mvdev->mdev_detach); if (buf->mkey_in) { - mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + mlx5_core_destroy_mkey(mdev, buf->mkey); + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, + &buf->state, buf->dma_dir); kvfree(buf->mkey_in); - dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, - buf->dma_dir, 0); } - /* Undo alloc_pages_bulk_array() */ - for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) - __free_page(sg_page_iter_page(&sg_iter)); - sg_free_append_table(&buf->table); - kvfree(buf->page_list); + free_page_list(buf->npages, buf->page_list); kfree(buf); } @@ -433,7 +496,6 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, struct page **page_list; unsigned long filled; unsigned int to_fill; - int ret; to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list)); page_list = kvzalloc(to_fill * sizeof(*buf->page_list), GFP_KERNEL_ACCOUNT); @@ -443,22 +505,13 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, buf->page_list = page_list; do { - filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - buf->page_list + buf->npages); + filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_alloc, + buf->page_list + buf->npages); if (!filled) return -ENOMEM; to_alloc -= filled; - ret = sg_alloc_append_table_from_pages( - &buf->table, buf->page_list + buf->npages, filled, 0, - filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, - GFP_KERNEL_ACCOUNT); - - if (ret) - return ret; buf->npages += filled; - to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); return 0; @@ -1340,17 +1393,6 @@ static void mlx5vf_destroy_qp(struct mlx5_core_dev *mdev, kfree(qp); } -static void free_recv_pages(struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - /* Undo alloc_pages_bulk_array() */ - for (i = 0; i < recv_buf->npages; i++) - __free_page(recv_buf->page_list[i]); - - kvfree(recv_buf->page_list); -} - static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, unsigned int npages) { @@ -1386,45 +1428,6 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, kvfree(recv_buf->page_list); return -ENOMEM; } -static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = npages - 1; i >= 0; i--) { - addr = be64_to_cpu(mtt[i]); - dma_unmap_single(mdev->device, addr, PAGE_SIZE, - DMA_FROM_DEVICE); - } -} - -static int register_dma_pages(struct mlx5_core_dev *mdev, 
u32 npages, - struct page **page_list, u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - - for (i = 0; i < npages; i++) { - addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, addr)) - goto error; - - *mtt++ = cpu_to_be64(addr); - } - - return 0; - -error: - unregister_dma_pages(mdev, i, mkey_in); - return -ENOMEM; -} static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) @@ -1432,9 +1435,10 @@ static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in, + &recv_buf->state, DMA_FROM_DEVICE); kvfree(recv_buf->mkey_in); - free_recv_pages(&qp->recv_buf); + free_page_list(recv_buf->npages, recv_buf->page_list); } static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, @@ -1456,24 +1460,25 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, } err = register_dma_pages(mdev, npages, recv_buf->page_list, - recv_buf->mkey_in); + recv_buf->mkey_in, &recv_buf->state, + DMA_FROM_DEVICE); if (err) goto err_register_dma; - err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, - &recv_buf->mkey); + err = create_mkey(mdev, npages, recv_buf->mkey_in, &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_pages(mdev, npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in, &recv_buf->state, + DMA_FROM_DEVICE); err_register_dma: kvfree(recv_buf->mkey_in); recv_buf->mkey_in = NULL; end: - free_recv_pages(recv_buf); + free_page_list(npages, recv_buf->page_list); return err; } diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 5b764199db53..f9c7268272e7 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -54,20 +54,16 @@ struct mlx5_vf_migration_header { struct mlx5_vhca_data_buffer { struct page **page_list; - struct sg_append_table table; + struct dma_iova_state state; + enum dma_data_direction dma_dir; loff_t start_pos; u64 length; u32 npages; u32 mkey; u32 *mkey_in; - enum dma_data_direction dma_dir; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; - /* Optimize mlx5vf_get_migration_page() for sequential access */ - struct scatterlist *last_offset_sg; - unsigned int sg_last_entry; - unsigned long last_offset; }; struct mlx5vf_async_data { @@ -134,6 +130,7 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; + struct dma_iova_state state; u32 next_rq_offset; u32 *mkey_in; u32 mkey; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index a1dbee3be1e0..cac99e6b047d 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -34,35 +34,10 @@ static struct mlx5vf_pci_core_device *mlx5vf_drvdata(struct pci_dev *pdev) core_device); } -struct page * -mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, - unsigned long offset) +struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, + unsigned long offset) { - unsigned long cur_offset = 0; - struct scatterlist *sg; - unsigned int i; - - /* All accesses are sequential */ - if (offset < buf->last_offset || !buf->last_offset_sg) { - buf->last_offset = 
0; - buf->last_offset_sg = buf->table.sgt.sgl; - buf->sg_last_entry = 0; - } - - cur_offset = buf->last_offset; - - for_each_sg(buf->last_offset_sg, sg, - buf->table.sgt.orig_nents - buf->sg_last_entry, i) { - if (offset < sg->length + cur_offset) { - buf->last_offset_sg = sg; - buf->sg_last_entry += i; - buf->last_offset = cur_offset; - return nth_page(sg_page(sg), - (offset - cur_offset) / PAGE_SIZE); - } - cur_offset += sg->length; - } - return NULL; + return buf->page_list[offset / PAGE_SIZE]; } static void mlx5vf_disable_fd(struct mlx5_vf_migration_file *migf)