From patchwork Thu Apr 7 15:46:57 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:46:57 -0600
Message-Id: <20220407154717.7695-2-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 01/21] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL

Make use of the third free LSB in scatterlist's page_link on 64bit
systems. The extra bit will be used by dma_[un]map_sg_p2pdma() to
determine when a given SGL segment's dma_address points to a PCI bus
address. dma_unmap_sg_p2pdma() will need to perform different cleanup
when a segment is marked as a bus address.

The new bit will only be used when CONFIG_PCI_P2PDMA is set; this means
PCI P2PDMA will require CONFIG_64BIT. This should be acceptable as the
majority of P2PDMA use cases are restricted to newer root complexes,
which roughly require the extra address space for the memory BARs used
in the transactions.

Signed-off-by: Logan Gunthorpe
Reviewed-by: Chaitanya Kulkarni
---
 drivers/pci/Kconfig         |  5 +++++
 include/linux/scatterlist.h | 44 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 133c73207782..5cc7cba1941f 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -164,6 +164,11 @@ config PCI_PASID
 config PCI_P2PDMA
 	bool "PCI peer-to-peer transfer support"
 	depends on ZONE_DEVICE
+	#
+	# The need for the scatterlist DMA bus address flag means PCI P2PDMA
+	# requires 64bit
+	#
+	depends on 64BIT
 	select GENERIC_ALLOCATOR
 	help
 	  Enables drivers to do PCI peer-to-peer transactions to and from

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 7ff9d6386c12..6561ca8aead8 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -64,12 +64,24 @@ struct sg_append_table {
 #define SG_CHAIN	0x01UL
 #define SG_END		0x02UL

+/*
+ * bit 2 is the third free bit in the page_link on 64bit systems which
+ * is used by dma_unmap_sg() to determine if the dma_address is a
+ * bus address when doing P2PDMA.
+ */
+#ifdef CONFIG_PCI_P2PDMA
+#define SG_DMA_BUS_ADDRESS	0x04UL
+static_assert(__alignof__(struct page) >= 8);
+#else
+#define SG_DMA_BUS_ADDRESS	0x00UL
+#endif
+
 /*
  * We overload the LSB of the page pointer to indicate whether it's
  * a valid sg entry, or whether it points to the start of a new scatterlist.
  * Those low bits are there for everyone! (thanks mason :-)
  */
-#define SG_PAGE_LINK_MASK (SG_CHAIN | SG_END)
+#define SG_PAGE_LINK_MASK (SG_CHAIN | SG_END | SG_DMA_BUS_ADDRESS)

 static inline unsigned int __sg_flags(struct scatterlist *sg)
 {
@@ -91,6 +103,11 @@ static inline bool sg_is_last(struct scatterlist *sg)
 	return __sg_flags(sg) & SG_END;
 }

+static inline bool sg_is_dma_bus_address(struct scatterlist *sg)
+{
+	return __sg_flags(sg) & SG_DMA_BUS_ADDRESS;
+}
+
 /**
  * sg_assign_page - Assign a given page to an SG entry
  * @sg:		SG entry
@@ -245,6 +262,31 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 	sg->page_link &= ~SG_END;
 }

+/**
+ * sg_dma_mark_bus_address - Mark the scatterlist entry as a bus address
+ * @sg:		SG entry
+ *
+ * Description:
+ *   Marks the passed in sg entry to indicate that the dma_address is
+ *   a bus address and doesn't need to be unmapped.
+ **/
+static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
+{
+	sg->page_link |= SG_DMA_BUS_ADDRESS;
+}
+
+/**
+ * sg_dma_unmark_bus_address - Unmark the scatterlist entry as a bus address
+ * @sg:		SG entry
+ *
+ * Description:
+ *   Clears the bus address mark.
+ **/
+static inline void sg_dma_unmark_bus_address(struct scatterlist *sg)
+{
+	sg->page_link &= ~SG_DMA_BUS_ADDRESS;
+}
+
 /**
  * sg_phys - Return physical address of an sg entry
  * @sg:	     SG entry
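[Editor's sketch, not part of the patch] The flag is meant to be consumed by
DMA-mapping backends on the unmap side, roughly as below; the dma-direct
change later in this series follows this shape. dma_unmap_page() is used here
only for illustration, since a real dma_map_ops backend would call its own
per-segment teardown instead.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static void example_unmap_sg(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (sg_is_dma_bus_address(sg))
			/* P2PDMA bus address: nothing was mapped, just clear the flag */
			sg_dma_unmark_bus_address(sg);
		else
			dma_unmap_page(dev, sg_dma_address(sg),
				       sg_dma_len(sg), dir);
	}
}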
From patchwork Thu Apr 7 15:46:58 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:46:58 -0600
Message-Id: <20220407154717.7695-3-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 02/21] PCI/P2PDMA: Attempt to set map_type if it has not been set

Attempt to find the mapping type for P2PDMA pages on the first DMA map
attempt if it has not been done ahead of time.

Previously, the mapping type was expected to be calculated ahead of
time, but if pages are to come from userspace then there's no way to
ensure the path was checked ahead of time.

With this change the mapping type is calculated on first use if it has
not been pre-calculated, so it is no longer invalid to call
pci_p2pdma_map_sg() before the mapping type is known; drop the WARN_ON
for that case.
Signed-off-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
Reviewed-by: Chaitanya Kulkarni
---
 drivers/pci/p2pdma.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 30b1df3c9d2f..c3a68e82cf36 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -849,6 +849,7 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;
 	struct pci_dev *client;
 	struct pci_p2pdma *p2pdma;
+	int dist;

 	if (!provider->p2pdma)
 		return PCI_P2PDMA_MAP_NOT_SUPPORTED;
@@ -865,6 +866,10 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	type = xa_to_value(xa_load(&p2pdma->map_types,
 				   map_types_idx(client)));
 	rcu_read_unlock();
+
+	if (type == PCI_P2PDMA_MAP_UNKNOWN)
+		return calc_map_type_and_dist(provider, client, &dist, true);
+
 	return type;
 }

@@ -907,7 +912,7 @@ int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	case PCI_P2PDMA_MAP_BUS_ADDR:
 		return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents);
 	default:
-		WARN_ON_ONCE(1);
+		/* Mapping is not supported */
 		return 0;
 	}
 }
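[Editor's sketch, not part of the patch] The user-visible effect is that a
driver may now map P2PDMA pages whose mapping type was never pre-computed
(for example pages that arrived from userspace); the first mapping attempt
computes and caches the type itself. Assuming 'client' is the DMA device and
'sgl' holds P2PDMA pages, the call below no longer depends on an earlier
pci_p2pdma_distance_many() check having been made:

#include <linux/dma-mapping.h>
#include <linux/pci-p2pdma.h>

static int example_first_map(struct device *client, struct scatterlist *sgl,
			     int nents)
{
	/* Computes and caches the mapping type on first use. */
	return pci_p2pdma_map_sg(client, sgl, nents, DMA_FROM_DEVICE);
}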
From patchwork Thu Apr 7 15:46:59 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:46:59 -0600
Message-Id: <20220407154717.7695-4-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 03/21] PCI/P2PDMA: Expose pci_p2pdma_map_type()

pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
implementation, which must determine the mapping type before it can
create the IOMMU mapping.
The prototype for this helper is added to dma-map-ops.h as it is only
useful to DMA map implementations and doesn't need to pollute the
public pci-p2pdma header.

Signed-off-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
Reviewed-by: Jason Gunthorpe
Reviewed-by: Chaitanya Kulkarni
---
 drivers/pci/p2pdma.c        | 25 +++++++++++++--------
 include/linux/dma-map-ops.h | 45 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index c3a68e82cf36..8573bf9d651a 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -10,6 +10,7 @@
 #define pr_fmt(fmt) "pci-p2pdma: " fmt

 #include
+#include
 #include
 #include
 #include
@@ -20,13 +21,6 @@
 #include
 #include

-enum pci_p2pdma_map_type {
-	PCI_P2PDMA_MAP_UNKNOWN = 0,
-	PCI_P2PDMA_MAP_NOT_SUPPORTED,
-	PCI_P2PDMA_MAP_BUS_ADDR,
-	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
-};
-
 struct pci_p2pdma {
 	struct gen_pool *pool;
 	bool p2pmem_published;
@@ -842,8 +836,21 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 }
 EXPORT_SYMBOL_GPL(pci_p2pmem_publish);

-static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
-		struct device *dev)
+/**
+ * pci_p2pdma_map_type - return the type of mapping that should be used for
+ *	a given device and pgmap
+ * @pgmap: the pagemap of a page to determine the mapping type for
+ * @dev: device that is mapping the page
+ *
+ * Returns one of:
+ *	PCI_P2PDMA_MAP_NOT_SUPPORTED - The mapping should not be done
+ *	PCI_P2PDMA_MAP_BUS_ADDR - The mapping should use the PCI bus address
+ *	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE - The mapping should be done normally
+ *		using the CPU physical address (in dma-direct) or an IOVA
+ *		mapping for the IOMMU.
+ */
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
+					     struct device *dev)
 {
 	enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
 	struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..d693a0e33bac 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -379,4 +379,49 @@ static inline void debug_dma_dump_mappings(struct device *dev)

 extern const struct dma_map_ops dma_dummy_ops;

+enum pci_p2pdma_map_type {
+	/*
+	 * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping
+	 * type hasn't been calculated yet. Functions that return this enum
+	 * never return this value.
+	 */
+	PCI_P2PDMA_MAP_UNKNOWN = 0,
+
+	/*
+	 * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
+	 * traverse the host bridge and the host bridge is not in the
+	 * allowlist. DMA mapping routines should return an error when
+	 * this is returned.
+	 */
+	PCI_P2PDMA_MAP_NOT_SUPPORTED,
+
+	/*
+	 * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
+	 * each other directly through a PCI switch and the transaction will
+	 * not traverse the host bridge. Such a mapping should program
+	 * the DMA engine with PCI bus addresses.
+	 */
+	PCI_P2PDMA_MAP_BUS_ADDR,
+
+	/*
+	 * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
+	 * to each other, but the transaction traverses a host bridge on the
+	 * allowlist. In this case, a normal mapping either with CPU physical
+	 * addresses (in the case of dma-direct) or IOVA addresses (in the
+	 * case of IOMMUs) should be used to program the DMA engine.
+	 */
+	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
+};
+
+#ifdef CONFIG_PCI_P2PDMA
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
+					     struct device *dev);
+#else /* CONFIG_PCI_P2PDMA */
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_type(struct dev_pagemap *pgmap, struct device *dev)
+{
+	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
+#endif /* CONFIG_PCI_P2PDMA */
+
 #endif /* _LINUX_DMA_MAP_OPS_H */
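[Editor's sketch, not part of the patch] With the helper public, a DMA-mapping
backend can classify each segment before deciding how to map it. The function
below is assumed backend code, written only to illustrate the intended use:

#include <linux/dma-map-ops.h>
#include <linux/errno.h>
#include <linux/memremap.h>
#include <linux/scatterlist.h>

/* Returns 1 to use the PCI bus address, 0 to map as normal host memory,
 * or -EREMOTEIO when the two devices cannot reach each other. */
static int example_classify_segment(struct device *dev, struct scatterlist *sg)
{
	if (!is_pci_p2pdma_page(sg_page(sg)))
		return 0;

	switch (pci_p2pdma_map_type(sg_page(sg)->pgmap, dev)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		return 1;
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		return 0;
	default:
		return -EREMOTEIO;
	}
}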
From patchwork Thu Apr 7 15:47:00 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:47:00 -0600
Message-Id: <20220407154717.7695-5-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 04/21] PCI/P2PDMA: Introduce helpers for dma_map_sg implementations

Add pci_p2pdma_map_segment() as a helper for simple dma_map_sg()
implementations. It takes a scatterlist segment that must point to a
pci_p2pdma struct page and will map it if the mapping requires a bus
address.

The return value indicates whether the mapping required a bus address
or whether the caller still needs to map the segment normally. If the
segment should not be mapped, -EREMOTEIO is returned.

This helper uses a state structure to track the changes to the pgmap
across calls and avoid needing to look up into the xarray for every
page.

Also add pci_p2pdma_map_bus_segment() which is useful for IOMMU
dma_map_sg() implementations where the sg segment containing the page
differs from the sg segment containing the DMA address.

Prototypes for these helpers are added to dma-map-ops.h as they are
only useful to DMA map implementations and don't need to pollute the
public pci-p2pdma header.

Signed-off-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
---
 drivers/pci/p2pdma.c        | 59 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-map-ops.h | 21 +++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 8573bf9d651a..9032c2ed2cdf 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -946,6 +946,65 @@ void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);

+/**
+ * pci_p2pdma_map_segment - map an sg segment determining the mapping type
+ * @state: State structure that should be declared outside of the
+ *	for_each_sg() loop and initialized to zero.
+ * @dev: DMA device that's doing the mapping operation
+ * @sg: scatterlist segment to map
+ *
+ * This is a helper to be used by non-IOMMU dma_map_sg() implementations where
+ * the sg segment is the same for the page_link and the dma_address.
+ *
+ * Attempt to map a single segment in an SGL with the PCI bus address.
+ * The segment must point to a PCI P2PDMA page and thus must be
+ * wrapped in an is_pci_p2pdma_page(sg_page(sg)) check.
+ *
+ * Returns the type of mapping used and maps the page if the type is
+ * PCI_P2PDMA_MAP_BUS_ADDR.
+ */
+enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg)
+{
+	if (state->pgmap != sg_page(sg)->pgmap) {
+		state->pgmap = sg_page(sg)->pgmap;
+		state->map = pci_p2pdma_map_type(state->pgmap, dev);
+		state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
+	}
+
+	if (state->map == PCI_P2PDMA_MAP_BUS_ADDR) {
+		sg->dma_address = sg_phys(sg) + state->bus_off;
+		sg_dma_len(sg) = sg->length;
+		sg_dma_mark_bus_address(sg);
+	}
+
+	return state->map;
+}
+
+/**
+ * pci_p2pdma_map_bus_segment - map an sg segment predetermined to
+ *	be mapped with PCI_P2PDMA_MAP_BUS_ADDR
+ * @pg_sg: scatterlist segment with the page to map
+ * @dma_sg: scatterlist segment to assign a DMA address to
+ *
+ * This is a helper for iommu dma_map_sg() implementations when the
+ * segment for the DMA address differs from the segment containing the
+ * source page.
+ *
+ * pci_p2pdma_map_type() must have already been called on the pg_sg and
+ * returned PCI_P2PDMA_MAP_BUS_ADDR.
+ */
+void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+				struct scatterlist *dma_sg)
+{
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(sg_page(pg_sg)->pgmap);
+
+	dma_sg->dma_address = sg_phys(pg_sg) + pgmap->bus_offset;
+	sg_dma_len(dma_sg) = pg_sg->length;
+	sg_dma_mark_bus_address(dma_sg);
+}
+
 /**
  * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
  *		to enable p2pdma

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index d693a0e33bac..752f91e5eb5d 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -413,15 +413,36 @@ enum pci_p2pdma_map_type {
 	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
 };

+struct pci_p2pdma_map_state {
+	struct dev_pagemap *pgmap;
+	int map;
+	u64 bus_off;
+};
+
 #ifdef CONFIG_PCI_P2PDMA
 enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 					     struct device *dev);
+enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg);
+void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+				struct scatterlist *dma_sg);
 #else /* CONFIG_PCI_P2PDMA */
 static inline enum pci_p2pdma_map_type
 pci_p2pdma_map_type(struct dev_pagemap *pgmap, struct device *dev)
 {
 	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
 }
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg)
+{
+	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
+static inline void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+					       struct scatterlist *dma_sg)
+{
+}
 #endif /* CONFIG_PCI_P2PDMA */

 #endif /* _LINUX_DMA_MAP_OPS_H */
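[Editor's sketch, not part of the patch] The intended calling pattern for
pci_p2pdma_map_segment() in a non-IOMMU map_sg loop looks roughly like the
following; the real user is added to dma-direct later in the series, and the
"normal mapping" step is left as a comment because it is backend specific:

#include <linux/dma-map-ops.h>
#include <linux/errno.h>
#include <linux/memremap.h>
#include <linux/scatterlist.h>

static int example_map_sg(struct device *dev, struct scatterlist *sgl,
			  int nents)
{
	struct pci_p2pdma_map_state p2pdma_state = {};
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (is_pci_p2pdma_page(sg_page(sg))) {
			switch (pci_p2pdma_map_segment(&p2pdma_state, dev, sg)) {
			case PCI_P2PDMA_MAP_BUS_ADDR:
				continue;	/* dma_address already assigned */
			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
				break;		/* map like regular host memory */
			default:
				return -EREMOTEIO;
			}
		}
		/* ... regular per-segment mapping of 'sg' goes here ... */
	}
	return nents;
}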
From patchwork Thu Apr 7 15:47:01 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:47:01 -0600
Message-Id: <20220407154717.7695-6-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 05/21] dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
Add EREMOTEIO error return to dma_map_sgtable() which will be used by
.map_sg() implementations that detect P2PDMA pages that the underlying
DMA device cannot access.

Signed-off-by: Logan Gunthorpe
Reviewed-by: Jason Gunthorpe
---
 kernel/dma/mapping.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index db7244291b74..9f65d1041638 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -197,7 +197,7 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	if (ents > 0)
 		debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
 	else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
-			      ents != -EIO))
+			      ents != -EIO && ents != -EREMOTEIO))
 		return -EIO;

 	return ents;
@@ -255,6 +255,8 @@ EXPORT_SYMBOL(dma_map_sg_attrs);
  *		complete the mapping. Should succeed if retried later.
  *   -EIO	Legacy error code with an unknown meaning. eg. this is
  *		returned if a lower level call returned DMA_MAPPING_ERROR.
+ *   -EREMOTEIO	The DMA device cannot access P2PDMA memory specified in
+ *		the sg_table. This will not succeed if retried.
  */
 int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
 		    enum dma_data_direction dir, unsigned long attrs)
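[Editor's sketch, not part of the patch] At a driver call site the new error
code can be treated as a permanent failure, unlike -ENOMEM which may succeed
on retry:

#include <linux/dma-mapping.h>

static int example_map_request(struct device *dev, struct sg_table *sgt)
{
	int ret;

	ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
	if (ret == -EREMOTEIO)
		return ret;	/* device cannot reach the P2PDMA memory; don't retry */
	if (ret)
		return ret;	/* e.g. -ENOMEM: may succeed if retried later */

	/* ... issue the I/O, then dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0) ... */
	return 0;
}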
From patchwork Thu Apr 7 15:47:02 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:47:02 -0600
Message-Id: <20220407154717.7695-7-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 06/21] dma-direct: support PCI P2PDMA pages in dma-direct map_sg

Add PCI P2PDMA support for dma_direct_map_sg() so that it can map PCI
P2PDMA pages directly without a hack in the callers. This allows for
heterogeneous SGLs that contain both P2PDMA and regular pages.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through the
     root port, then it should be mapped with a PCI bus address.
  2) If the data path goes through the host bridge, it should be mapped
     normally, as though it were a CPU physical address.
  3) It is not possible for the two devices to communicate and thus the
     mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with
sg_dma_mark_bus_address() and are ignored when unmapped.

P2PDMA mappings are also failed if swiotlb needs to be used on the
mapping.

Signed-off-by: Logan Gunthorpe
---
 kernel/dma/direct.c | 43 +++++++++++++++++++++++++++++++++++++------
 kernel/dma/direct.h |  8 +++++++-
 2 files changed, 44 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 9743c6ccce1a..33b838a3ccb2 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -461,29 +461,60 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 	arch_sync_dma_for_cpu_all();
 }

+/*
+ * Unmaps segments, except for ones marked as pci_p2pdma which do not
+ * require any further action as they contain a bus address.
+ */
 void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct scatterlist *sg;
 	int i;

-	for_each_sg(sgl, sg, nents, i)
-		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
-			     attrs);
+	for_each_sg(sgl, sg, nents, i) {
+		if (sg_is_dma_bus_address(sg))
+			sg_dma_unmark_bus_address(sg);
+		else
+			dma_direct_unmap_page(dev, sg->dma_address,
+					      sg_dma_len(sg), dir, attrs);
+	}
 }
 #endif

 int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	int i;
+	struct pci_p2pdma_map_state p2pdma_state = {};
+	enum pci_p2pdma_map_type map;
 	struct scatterlist *sg;
+	int i, ret;

 	for_each_sg(sgl, sg, nents, i) {
+		if (is_pci_p2pdma_page(sg_page(sg))) {
+			map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
+			switch (map) {
+			case PCI_P2PDMA_MAP_BUS_ADDR:
+				continue;
+			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+				/*
+				 * Any P2P mapping that traverses the PCI
+				 * host bridge must be mapped with CPU physical
+				 * address and not PCI bus addresses. This is
+				 * done with dma_direct_map_page() below.
+				 */
+				break;
+			default:
+				ret = -EREMOTEIO;
+				goto out_unmap;
+			}
+		}
+
 		sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
 				sg->offset, sg->length, dir, attrs);
-		if (sg->dma_address == DMA_MAPPING_ERROR)
+		if (sg->dma_address == DMA_MAPPING_ERROR) {
+			ret = -EIO;
 			goto out_unmap;
+		}
 		sg_dma_len(sg) = sg->length;
 	}

@@ -491,7 +522,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,

 out_unmap:
 	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
-	return -EIO;
+	return ret;
 }

 dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,

diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 4632b0f4f72e..81b213409ce8 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -8,6 +8,7 @@
 #define _KERNEL_DMA_DIRECT_H

 #include
+#include

 int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
@@ -87,10 +88,15 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);

-	if (is_swiotlb_force_bounce(dev))
+	if (is_swiotlb_force_bounce(dev)) {
+		if (is_pci_p2pdma_page(page))
+			return DMA_MAPPING_ERROR;
 		return swiotlb_map(dev, phys, size, dir, attrs);
+	}

 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
+		if (is_pci_p2pdma_page(page))
+			return DMA_MAPPING_ERROR;
 		if (swiotlb_force != SWIOTLB_NO_FORCE)
 			return swiotlb_map(dev, phys, size, dir, attrs);
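[Editor's sketch, not part of the patch] A driver-side view of what a
"heterogeneous SGL" means here: one entry backed by a host buffer and one by
peer-to-peer memory from pci_alloc_p2pmem(), mapped with a single call.
Assumes 'provider' exports p2pmem reachable from 'dma_dev'; error handling and
the eventual unmap/free are trimmed:

#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

static int example_hetero_map(struct pci_dev *provider, struct device *dma_dev,
			      void *host_buf, size_t len)
{
	struct scatterlist sgl[2];
	void *p2p_buf;

	p2p_buf = pci_alloc_p2pmem(provider, len);
	if (!p2p_buf)
		return -ENOMEM;

	sg_init_table(sgl, 2);
	sg_set_buf(&sgl[0], host_buf, len);
	sg_set_buf(&sgl[1], p2p_buf, len);

	if (!dma_map_sg(dma_dev, sgl, 2, DMA_TO_DEVICE)) {
		pci_free_p2pmem(provider, p2p_buf, len);
		return -EIO;
	}

	/* ... perform the DMA, then dma_unmap_sg() and pci_free_p2pmem() ... */
	return 0;
}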
From patchwork Thu Apr 7 15:47:03 2022
From: Logan Gunthorpe
Date: Thu, 7 Apr 2022 09:47:03 -0600
Message-Id: <20220407154717.7695-8-logang@deltatee.com>
In-Reply-To: <20220407154717.7695-1-logang@deltatee.com>
Subject: [PATCH v6 07/21] dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA. Also, add a helper to check if a
device supports PCI P2PDMA.

Signed-off-by: Logan Gunthorpe
Reviewed-by: Jason Gunthorpe
---
 include/linux/dma-map-ops.h | 10 ++++++++++
 include/linux/dma-mapping.h |  5 +++++
 kernel/dma/mapping.c        | 18 ++++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 752f91e5eb5d..4d4161d58ce0 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -11,7 +11,17 @@

 struct cma;

+/*
+ * Values for struct dma_map_ops.flags:
+ *
+ * DMA_F_PCI_P2PDMA_SUPPORTED: Indicates the dma_map_ops implementation can
+ * handle PCI P2PDMA pages in the map_sg/unmap_sg operation.
+ */
+#define DMA_F_PCI_P2PDMA_SUPPORTED     (1 << 0)
+
 struct dma_map_ops {
+	unsigned int flags;
+
 	void *(*alloc)(struct device *dev, size_t size,
 			dma_addr_t *dma_handle, gfp_t gfp,
 			unsigned long attrs);

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index dca2b1355bb1..f7c61b2b4b5e 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -140,6 +140,7 @@ int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
 int dma_supported(struct device *dev, u64 mask);
+bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
@@ -250,6 +251,10 @@ static inline int dma_supported(struct device *dev, u64 mask)
 {
 	return 0;
 }
+static inline bool dma_pci_p2pdma_supported(struct device *dev)
+{
+	return false;
+}
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
 	return -EIO;

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9f65d1041638..21793506fdb6 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -722,6 +722,24 @@ int dma_supported(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_supported);

+bool dma_pci_p2pdma_supported(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	/* if ops is not set, dma direct will be used which supports P2PDMA */
+	if (!ops)
+		return true;
+
+	/*
+	 * Note: dma_ops_bypass is not checked here because P2PDMA should
+	 * not be used with dma mapping ops that do not have support even
+	 * if the specific device is bypassing them.
+	 */
+
+	return ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED;
+}
+EXPORT_SYMBOL_GPL(dma_pci_p2pdma_supported);
+
 #ifdef CONFIG_ARCH_HAS_DMA_SET_MASK
 void arch_dma_set_mask(struct device *dev, u64 mask);
 #else
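[Editor's sketch, not part of the patch] A driver that hands out P2PDMA pages
can use the helper to decide up front whether a given DMA device may receive
them, falling back to ordinary host buffers otherwise; later patches in this
series apply a check of this kind in nvme-pci.

#include <linux/dma-mapping.h>

static bool example_queue_can_use_p2pdma(struct device *dma_dev)
{
	/* False when the device's dma_map_ops lack DMA_F_PCI_P2PDMA_SUPPORTED. */
	return dma_pci_p2pdma_supported(dma_dev);
}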
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, logang@deltatee.com, jhubbard@nvidia.com, rcampbell@nvidia.com, jgg@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 08/21] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=N+aoyesE; spf=pass (imf14.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Stat-Signature: kzh5w14uxicbnunnnxcjqeonahtzhy8z X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: BFB6B100004 X-HE-Tag: 1649346491-443304 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When a PCI P2PDMA page is seen, set the IOVA length of the segment to zero so that it is not mapped into the IOVA. Then, in finalise_sg(), apply the appropriate bus address to the segment. The IOVA is not created if the scatterlist only consists of P2PDMA pages. A P2PDMA page may have three possible outcomes when being mapped: 1) If the data path between the two devices doesn't go through the root port, then it should be mapped with a PCI bus address 2) If the data path goes through the host bridge, it should be mapped normally with an IOMMU IOVA. 3) It is not possible for the two devices to communicate and thus the mapping operation should fail (and it will return -EREMOTEIO). Similar to dma-direct, the sg_dma_mark_pci_p2pdma() flag is used to indicate bus address segments. On unmap, P2PDMA segments are skipped over when determining the start and end IOVA addresses. With this change, the flags variable in the dma_map_ops is set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages. Signed-off-by: Logan Gunthorpe Reviewed-by: Jason Gunthorpe --- drivers/iommu/dma-iommu.c | 68 +++++++++++++++++++++++++++++++++++---- 1 file changed, 61 insertions(+), 7 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 09f6e1c0f9c0..ef86f2b573d1 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -1045,6 +1046,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, sg_dma_address(s) = DMA_MAPPING_ERROR; sg_dma_len(s) = 0; + if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) { + if (i > 0) + cur = sg_next(cur); + + pci_p2pdma_map_bus_segment(s, cur); + count++; + cur_len = 0; + continue; + } + /* * Now fill in the real DMA data. If... 
* - there is a valid output segment to append to @@ -1141,6 +1152,8 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, struct iova_domain *iovad = &cookie->iovad; struct scatterlist *s, *prev = NULL; int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs); + struct dev_pagemap *pgmap = NULL; + enum pci_p2pdma_map_type map_type; dma_addr_t iova; size_t iova_len = 0; unsigned long mask = dma_get_seg_boundary(dev); @@ -1176,6 +1189,35 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, s_length = iova_align(iovad, s_length + s_iova_off); s->length = s_length; + if (is_pci_p2pdma_page(sg_page(s))) { + if (sg_page(s)->pgmap != pgmap) { + pgmap = sg_page(s)->pgmap; + map_type = pci_p2pdma_map_type(pgmap, dev); + } + + switch (map_type) { + case PCI_P2PDMA_MAP_BUS_ADDR: + /* + * A zero length will be ignored by + * iommu_map_sg() and then can be detected + * in __finalise_sg() to actually map the + * bus address. + */ + s->length = 0; + continue; + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: + /* + * Mapping through host bridge should be + * mapped with regular IOVAs, thus we + * do nothing here and continue below. + */ + break; + default: + ret = -EREMOTEIO; + goto out_restore_sg; + } + } + /* * Due to the alignment of our single IOVA allocation, we can * depend on these assumptions about the segment boundary mask: @@ -1198,6 +1240,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, prev = s; } + if (!iova_len) + return __finalise_sg(dev, sg, nents, 0); + iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); if (!iova) { ret = -ENOMEM; @@ -1219,7 +1264,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, out_restore_sg: __invalidate_sg(sg, nents); out: - if (ret != -ENOMEM) + if (ret != -ENOMEM && ret != -EREMOTEIO) return -EINVAL; return ret; } @@ -1227,7 +1272,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) { - dma_addr_t start, end; + dma_addr_t end, start = DMA_MAPPING_ERROR; struct scatterlist *tmp; int i; @@ -1243,14 +1288,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, * The scatterlist segments are mapped into a single * contiguous IOVA allocation, so this is incredibly easy. 
*/ - start = sg_dma_address(sg); - for_each_sg(sg_next(sg), tmp, nents - 1, i) { + for_each_sg(sg, tmp, nents, i) { + if (sg_is_dma_bus_address(tmp)) { + sg_dma_unmark_bus_address(tmp); + continue; + } if (sg_dma_len(tmp) == 0) break; - sg = tmp; + + if (start == DMA_MAPPING_ERROR) + start = sg_dma_address(tmp); + + end = sg_dma_address(tmp) + sg_dma_len(tmp); } - end = sg_dma_address(sg) + sg_dma_len(sg); - __iommu_dma_unmap(dev, start, end - start); + + if (start != DMA_MAPPING_ERROR) + __iommu_dma_unmap(dev, start, end - start); } static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, @@ -1443,6 +1496,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev) } static const struct dma_map_ops iommu_dma_ops = { + .flags = DMA_F_PCI_P2PDMA_SUPPORTED, .alloc = iommu_dma_alloc, .free = iommu_dma_free, .alloc_pages = dma_common_alloc_pages, From patchwork Thu Apr 7 15:47:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805435 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E7CBC433F5 for ; Thu, 7 Apr 2022 15:57:34 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id AF4D48D0012; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 977568D000C; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 688798D0012; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 40E088D000C for ; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 0F9EA122435 for ; Thu, 7 Apr 2022 15:48:12 +0000 (UTC) X-FDA: 79330514424.14.6EB0F8D Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf07.hostedemail.com (Postfix) with ESMTP id 94FEC4000B for ; Thu, 7 Apr 2022 15:48:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=cJFdRmGP+MP9l5faMvJFERPooxGuZ819O9Za05vP01s=; b=VY7wzVoSA1H4619PNjKlGRyblc DdTnLk+/j+4nN82vTdwh+ksjmiC6d7h01epBkF2klu1+n0F+BlDl1dpJlZRl2c3AK6vEnMYBiVoGh 6pp0H5oaApWi9Abkx9bxnEtqMhZso57uYg7qyBQ/MKOWV3OLT602qdNhXWbEfnZ4yESJ/psm4UtNm MS9e+kSDnKT+ZffvK3SQjNTJUF84BgGY+eCzpskC1VDQFIW0ml+U/IYPSIaDpNmIMVeASd8tRHq7m 7J0jfrA/1FghRuvelat/o4BsXoAWKmuRNv2HraS9iznfuKmsXpO5oR2Y4TExl2/54ZDnHKUDWLtt8 W5uuyooQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUML-002BBe-Nc; Thu, 07 Apr 2022 09:47:46 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMC-00021i-QU; Thu, 07 Apr 2022 09:47:36 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason 
Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Chaitanya Kulkarni Date: Thu, 7 Apr 2022 09:47:05 -0600 Message-Id: <20220407154717.7695-10-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, logang@deltatee.com, jhubbard@nvidia.com, rcampbell@nvidia.com, kch@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 09/21] nvme-pci: check DMA ops when indicating support for PCI P2PDMA X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspam-User: X-Stat-Signature: zmsubt1a76dqp6gdsbtuttdsstkd7ak3 Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=VY7wzVoS; spf=pass (imf07.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 94FEC4000B X-HE-Tag: 1649346491-164685 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops flags can be checked for PCI P2PDMA support. 
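In rough terms the new callback just forwards the question to the DMA layer, so whether QUEUE_FLAG_PCI_P2PDMA gets set now follows the device's dma_map_ops rather than a hard-coded transport flag. A condensed sketch of the pattern (illustration only; the exact hunks are in the diff below):

static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
{
	/* Defer to the dma_map_ops flags via the helper added earlier in the series */
	return dma_pci_p2pdma_supported(to_nvme_dev(ctrl)->dev);
}

	/* nvme_alloc_ns(): only advertise P2PDMA when the transport opts in */
	if (ctrl->ops->supports_pci_p2pdma &&
	    ctrl->ops->supports_pci_p2pdma(ctrl))
		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);

Transports that cannot do P2PDMA simply leave the callback unset.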
Signed-off-by: Logan Gunthorpe Reviewed-by: Chaitanya Kulkarni --- drivers/nvme/host/core.c | 3 ++- drivers/nvme/host/nvme.h | 2 +- drivers/nvme/host/pci.c | 11 +++++++++-- 3 files changed, 12 insertions(+), 4 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index efb85c6d8e2d..bbc276dda49f 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3912,7 +3912,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid, blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue); blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue); - if (ctrl->ops->flags & NVME_F_PCI_P2PDMA) + if (ctrl->ops->supports_pci_p2pdma && + ctrl->ops->supports_pci_p2pdma(ctrl)) blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue); ns->ctrl = ctrl; diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 1393bbf82d71..7d97bfb2a9e2 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -489,7 +489,6 @@ struct nvme_ctrl_ops { unsigned int flags; #define NVME_F_FABRICS (1 << 0) #define NVME_F_METADATA_SUPPORTED (1 << 1) -#define NVME_F_PCI_P2PDMA (1 << 2) int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val); int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val); int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val); @@ -497,6 +496,7 @@ struct nvme_ctrl_ops { void (*submit_async_event)(struct nvme_ctrl *ctrl); void (*delete_ctrl)(struct nvme_ctrl *ctrl); int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); + bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl); }; /* diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index d817ca17463e..fec4c7191310 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -2969,17 +2969,24 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size) return snprintf(buf, size, "%s\n", dev_name(&pdev->dev)); } +static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl) +{ + struct nvme_dev *dev = to_nvme_dev(ctrl); + + return dma_pci_p2pdma_supported(dev->dev); +} + static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { .name = "pcie", .module = THIS_MODULE, - .flags = NVME_F_METADATA_SUPPORTED | - NVME_F_PCI_P2PDMA, + .flags = NVME_F_METADATA_SUPPORTED, .reg_read32 = nvme_pci_reg_read32, .reg_write32 = nvme_pci_reg_write32, .reg_read64 = nvme_pci_reg_read64, .free_ctrl = nvme_pci_free_ctrl, .submit_async_event = nvme_pci_submit_async_event, .get_address = nvme_pci_get_address, + .supports_pci_p2pdma = nvme_pci_supports_pci_p2pdma, }; static int nvme_dev_map(struct nvme_dev *dev) From patchwork Thu Apr 7 15:47:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805470 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0F30C433EF for ; Thu, 7 Apr 2022 15:58:07 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D8CFB8D000C; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D17F98D0013; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 79B918D0015; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org 
(Postfix) with ESMTP id 520AC8D0013 for ; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 25C445F95 for ; Thu, 7 Apr 2022 15:48:12 +0000 (UTC) X-FDA: 79330514424.09.7D991DC Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf13.hostedemail.com (Postfix) with ESMTP id 8674620004 for ; Thu, 7 Apr 2022 15:48:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=Ro5Q2HMUd1HfFndmJ8UruOv1XXFC1TCh+hCWqq4F5HI=; b=lNdxkLiyyrn3aS3xg/neZkhWSy EQ/x8P103d73dG+E1L8gyXZXQahh/qSrQJ5E2iEr3HGbbjjb7dzGf6dxaWlJYS6KdU/r4qMDwRcfG ys+2isPwYgL3Fssvb/g1EA2pA67VavH7fvDXrUi4r5tTa0HGXqDq5EbfzfsxhboyCOB/4brGpq4SS isYKr1pYJiIdDpA/HKBV2mOII+PGWeDJQ7nBCw04HlT2YFgNF+I/TozmncDF0ycCTMS/+TZqwAC8a hYZegZqwnM8JL4gP/ZvJCOgohvEMhqdXQxMe0MR9yRetlpxLKXYpyTIeqcY8Hg0tdilHMMEHWFXQL fTM4xbZA==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUML-002BBg-70; Thu, 07 Apr 2022 09:47:46 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMD-00021m-1Y; Thu, 07 Apr 2022 09:47:37 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Max Gurtovoy , Chaitanya Kulkarni Date: Thu, 7 Apr 2022 09:47:06 -0600 Message-Id: <20220407154717.7695-11-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, logang@deltatee.com, jhubbard@nvidia.com, rcampbell@nvidia.com, mgurtovoy@nvidia.com, kch@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 10/21] nvme-pci: convert to using dma_map_sgtable() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: wth9e1jsicejo5cm53o4sa735prn8i95 Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=lNdxkLiy; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf13.hostedemail.com: domain of 
gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Rspamd-Queue-Id: 8674620004 X-HE-Tag: 1649346491-748123 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The dma_map operations now support P2PDMA pages directly. So remove the calls to pci_p2pdma_[un]map_sg_attrs() and replace them with calls to dma_map_sgtable(). dma_map_sgtable() returns more complete error codes than dma_map_sg() and allows differentiating EREMOTEIO errors in case an unsupported P2PDMA transfer is requested. When this happens, return BLK_STS_TARGET so the request isn't retried. Signed-off-by: Logan Gunthorpe Reviewed-by: Max Gurtovoy Reviewed-by: Chaitanya Kulkarni --- drivers/nvme/host/pci.c | 69 +++++++++++++++++------------------------ 1 file changed, 29 insertions(+), 40 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index fec4c7191310..07412116d4d1 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -230,11 +230,10 @@ struct nvme_iod { bool use_sgl; int aborted; int npages; /* In the PRP list. 0 means small pool in use */ - int nents; /* Used in scatterlist */ dma_addr_t first_dma; unsigned int dma_len; /* length of single DMA segment mapping */ dma_addr_t meta_dma; - struct scatterlist *sg; + struct sg_table sgt; }; static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev) @@ -524,7 +523,7 @@ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx) static void **nvme_pci_iod_list(struct request *req) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - return (void **)(iod->sg + blk_rq_nr_phys_segments(req)); + return (void **)(iod->sgt.sgl + blk_rq_nr_phys_segments(req)); } static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req) @@ -576,17 +575,6 @@ static void nvme_free_sgls(struct nvme_dev *dev, struct request *req) } } -static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req) -{ - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - - if (is_pci_p2pdma_page(sg_page(iod->sg))) - pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, - rq_dma_dir(req)); - else - dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); -} - static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); @@ -597,9 +585,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) return; } - WARN_ON_ONCE(!iod->nents); + WARN_ON_ONCE(!iod->sgt.nents); + + dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); - nvme_unmap_sg(dev, req); if (iod->npages == 0) dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], iod->first_dma); @@ -607,7 +596,7 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) nvme_free_sgls(dev, req); else nvme_free_prps(dev, req); - mempool_free(iod->sg, dev->iod_mempool); + mempool_free(iod->sgt.sgl, dev->iod_mempool); } static void nvme_print_sgl(struct scatterlist *sgl, int nents) @@ -630,7 +619,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, struct nvme_iod *iod = blk_mq_rq_to_pdu(req); struct dma_pool *pool; int length = blk_rq_payload_bytes(req); - struct scatterlist *sg = iod->sg; + struct scatterlist *sg = iod->sgt.sgl; int dma_len = sg_dma_len(sg); u64 dma_addr = sg_dma_address(sg); int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); @@ -703,16 +692,16 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, dma_len = 
sg_dma_len(sg); } done: - cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg)); + cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sgt.sgl)); cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); return BLK_STS_OK; free_prps: nvme_free_prps(dev, req); return BLK_STS_RESOURCE; bad_sgl: - WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents), + WARN(DO_ONCE(nvme_print_sgl, iod->sgt.sgl, iod->sgt.nents), "Invalid SGL for payload:%d nents:%d\n", - blk_rq_payload_bytes(req), iod->nents); + blk_rq_payload_bytes(req), iod->sgt.nents); return BLK_STS_IOERR; } @@ -738,12 +727,13 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge, } static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, - struct request *req, struct nvme_rw_command *cmd, int entries) + struct request *req, struct nvme_rw_command *cmd) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); struct dma_pool *pool; struct nvme_sgl_desc *sg_list; - struct scatterlist *sg = iod->sg; + struct scatterlist *sg = iod->sgt.sgl; + unsigned int entries = iod->sgt.nents; dma_addr_t sgl_dma; int i = 0; @@ -841,7 +831,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); blk_status_t ret = BLK_STS_RESOURCE; - int nr_mapped; + int rc; if (blk_rq_nr_phys_segments(req) == 1) { struct bio_vec bv = req_bvec(req); @@ -859,26 +849,25 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, } iod->dma_len = 0; - iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC); - if (!iod->sg) + iod->sgt.sgl = mempool_alloc(dev->iod_mempool, GFP_ATOMIC); + if (!iod->sgt.sgl) return BLK_STS_RESOURCE; - sg_init_table(iod->sg, blk_rq_nr_phys_segments(req)); - iod->nents = blk_rq_map_sg(req->q, req, iod->sg); - if (!iod->nents) + sg_init_table(iod->sgt.sgl, blk_rq_nr_phys_segments(req)); + iod->sgt.orig_nents = blk_rq_map_sg(req->q, req, iod->sgt.sgl); + if (!iod->sgt.orig_nents) goto out_free_sg; - if (is_pci_p2pdma_page(sg_page(iod->sg))) - nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg, - iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN); - else - nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, - rq_dma_dir(req), DMA_ATTR_NO_WARN); - if (!nr_mapped) + rc = dma_map_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), + DMA_ATTR_NO_WARN); + if (rc) { + if (rc == -EREMOTEIO) + ret = BLK_STS_TARGET; goto out_free_sg; + } iod->use_sgl = nvme_pci_use_sgls(dev, req); if (iod->use_sgl) - ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped); + ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw); else ret = nvme_pci_setup_prps(dev, req, &cmnd->rw); if (ret != BLK_STS_OK) @@ -886,9 +875,9 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, return BLK_STS_OK; out_unmap_sg: - nvme_unmap_sg(dev, req); + dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); out_free_sg: - mempool_free(iod->sg, dev->iod_mempool); + mempool_free(iod->sgt.sgl, dev->iod_mempool); return ret; } @@ -912,7 +901,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) iod->aborted = 0; iod->npages = -1; - iod->nents = 0; + iod->sgt.nents = 0; ret = nvme_setup_cmd(req->q->queuedata, req); if (ret) From patchwork Thu Apr 7 15:47:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805434 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org 
(kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC43EC433F5 for ; Thu, 7 Apr 2022 15:57:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 74C648D0014; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6FE6B8D000C; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 54F678D0014; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0230.hostedemail.com [216.40.44.230]) by kanga.kvack.org (Postfix) with ESMTP id 418FF8D0012 for ; Thu, 7 Apr 2022 11:48:22 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 00953AD6D7 for ; Thu, 7 Apr 2022 15:48:11 +0000 (UTC) X-FDA: 79330514424.23.2BFD21D Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf31.hostedemail.com (Postfix) with ESMTP id 8BB1320002 for ; Thu, 7 Apr 2022 15:48:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=7Sd+DHEjEbLvCTO+1GLA/CAMTcKj0omlqCpjlGTFuVQ=; b=CriRiAEXJkw+LFJYjKuYt/+1lT mt+jXOXBcluFIZ+8gDxxPki9Jq4kR1e6VG8sMlSxcRg7vJJoQn3fRAaYslClrcV5gRY9JgS4MrRaD guSwSvY+8KGenNrx3t34xM7m2FriiBPxIPNxBFr7zkgZKRV7gHFzCc/pQhlBAYpukFmaU2HJO43Br F/BcI0AO5tDbZ4Qv16KPAn8FyyU9W4rxI+bE6e7TQwBYeTXW4iPa4ygs1NxUZa4jL9C34tiSLQvV0 DE56+P9LdSMFtMh93c2a1UhCH7q5u4Wu0yTlS6vgnM+bkYTLQ06k/LgTXtARxbvL8bEEAQWNGiBVl WqGOrJhw==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMK-002BBh-Pv; Thu, 07 Apr 2022 09:47:45 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMD-00021q-9M; Thu, 07 Apr 2022 09:47:37 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Jason Gunthorpe , Max Gurtovoy Date: Thu, 7 Apr 2022 09:47:07 -0600 Message-Id: <20220407154717.7695-12-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, 
ckulkarnilinux@gmail.com, logang@deltatee.com, jhubbard@nvidia.com, rcampbell@nvidia.com, jgg@nvidia.com, mgurtovoy@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 11/21] RDMA/core: introduce ib_dma_pci_p2p_dma_supported() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Stat-Signature: zgke8kd6wpb17s7k8p48fwr19iwymrfh Authentication-Results: imf31.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=CriRiAEX; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf31.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 8BB1320002 X-HE-Tag: 1649346491-423413 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Introduce the helper function ib_dma_pci_p2p_dma_supported() to check if a given ib_device can be used in P2PDMA transfers. This ensures the ib_device is not using virt_dma and also that the underlying dma_device supports P2PDMA. Use the new helper in nvme-rdma to replace the existing check for ib_uses_virt_dma(). Adding the dma_pci_p2pdma_supported() check allows switching away from pci_p2pdma_[un]map_sg(). Signed-off-by: Logan Gunthorpe Reviewed-by: Jason Gunthorpe Reviewed-by: Max Gurtovoy --- drivers/nvme/target/rdma.c | 2 +- include/rdma/ib_verbs.h | 11 +++++++++++ 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c index 2fab0b219b25..12258f87ccc8 100644 --- a/drivers/nvme/target/rdma.c +++ b/drivers/nvme/target/rdma.c @@ -415,7 +415,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev, if (ib_dma_mapping_error(ndev->device, r->send_sge.addr)) goto out_free_rsp; - if (!ib_uses_virt_dma(ndev->device)) + if (ib_dma_pci_p2p_dma_supported(ndev->device)) r->req.p2p_client = &ndev->device->dev; r->send_sge.length = sizeof(*r->req.cqe); r->send_sge.lkey = ndev->pd->local_dma_lkey; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 69d883f7fb41..79609ab73014 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4003,6 +4003,17 @@ static inline bool ib_uses_virt_dma(struct ib_device *dev) return IS_ENABLED(CONFIG_INFINIBAND_VIRT_DMA) && !dev->dma_device; } +/* + * Check if a IB device's underlying DMA mapping supports P2PDMA transfers. 
+ */ +static inline bool ib_dma_pci_p2p_dma_supported(struct ib_device *dev) +{ + if (ib_uses_virt_dma(dev)) + return false; + + return dma_pci_p2pdma_supported(dev->dma_device); +} + /** * ib_dma_mapping_error - check a DMA addr for error * @dev: The device for which the dma_addr was created From patchwork Thu Apr 7 15:47:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805416 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0E23C433F5 for ; Thu, 7 Apr 2022 15:52:12 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 885038D0009; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 839628D0005; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6FDD48D0009; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 60DDF8D0005 for ; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 3961A2788C for ; Thu, 7 Apr 2022 15:47:54 +0000 (UTC) X-FDA: 79330513668.12.0E49F19 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf05.hostedemail.com (Postfix) with ESMTP id D30A2100006 for ; Thu, 7 Apr 2022 15:47:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=E3OuyFTlPOMWods4T8GcP7ZVTzRsCfvad0BDfewE94Y=; b=PO4tQ9KG8ZPt5iijnkVghsm8z3 NxvigwSW5N+Kt1drgzPKv7rOlpmuAMl/HocmztdrR6R2XogKlQV9A2qD7918CFX+EomB/ME1qQNIg yOaNwJS1jdJDCoQSofCgCMDJlF5IYXhIR07h+r49U5iuNo4hewTdn9fl17GExjw8ZQNfbYb7DuK7i vQ8sVjkiHdRfhjNtaqYw87AmgxR4TGTwkvTtrwvhjyRtPOVyHeJId3mYEcuqqZvdVich6mB1iX4Vh 7ieYJtaySLHmrkotjdPpnRrEBpUEJ96OOhad+w7HhbaiJ7rg9fcPde71m/sr1YUXKs8im8p4Oxg9a q5Co9XmA==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMK-002BBf-ON; Thu, 07 Apr 2022 09:47:46 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMD-00021u-Fq; Thu, 07 Apr 2022 09:47:37 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Jason Gunthorpe Date: Thu, 7 Apr 2022 09:47:08 -0600 Message-Id: <20220407154717.7695-13-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 
X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, logang@deltatee.com, jhubbard@nvidia.com, rcampbell@nvidia.com, jgg@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 12/21] RDMA/rw: drop pci_p2pdma_[un]map_sg() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Stat-Signature: h7azw656arm4fs7fpmaiqitbiqm3mw8x Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=PO4tQ9KG; spf=pass (imf05.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: D30A2100006 X-HE-Tag: 1649346473-635902 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg() is no longer necessary and may be dropped. This means the rdma_rw_[un]map_sg() helpers are no longer necessary. Remove it all. Signed-off-by: Logan Gunthorpe Reviewed-by: Jason Gunthorpe --- drivers/infiniband/core/rw.c | 45 ++++++++---------------------------- 1 file changed, 9 insertions(+), 36 deletions(-) diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c index 4d98f931a13d..8367974b7998 100644 --- a/drivers/infiniband/core/rw.c +++ b/drivers/infiniband/core/rw.c @@ -274,33 +274,6 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp, return 1; } -static void rdma_rw_unmap_sg(struct ib_device *dev, struct scatterlist *sg, - u32 sg_cnt, enum dma_data_direction dir) -{ - if (is_pci_p2pdma_page(sg_page(sg))) - pci_p2pdma_unmap_sg(dev->dma_device, sg, sg_cnt, dir); - else - ib_dma_unmap_sg(dev, sg, sg_cnt, dir); -} - -static int rdma_rw_map_sgtable(struct ib_device *dev, struct sg_table *sgt, - enum dma_data_direction dir) -{ - int nents; - - if (is_pci_p2pdma_page(sg_page(sgt->sgl))) { - if (WARN_ON_ONCE(ib_uses_virt_dma(dev))) - return 0; - nents = pci_p2pdma_map_sg(dev->dma_device, sgt->sgl, - sgt->orig_nents, dir); - if (!nents) - return -EIO; - sgt->nents = nents; - return 0; - } - return ib_dma_map_sgtable_attrs(dev, sgt, dir, 0); -} - /** * rdma_rw_ctx_init - initialize a RDMA READ/WRITE context * @ctx: context to initialize @@ -327,7 +300,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num, }; int ret; - ret = rdma_rw_map_sgtable(dev, &sgt, dir); + ret = ib_dma_map_sgtable_attrs(dev, &sgt, dir, 0); if (ret) return ret; sg_cnt = sgt.nents; @@ -366,7 +339,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num, return ret; out_unmap_sg: - rdma_rw_unmap_sg(dev, sgt.sgl, sgt.orig_nents, dir); + ib_dma_unmap_sgtable_attrs(dev, &sgt, dir, 0); return ret; } 
EXPORT_SYMBOL(rdma_rw_ctx_init); @@ -414,12 +387,12 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, return -EINVAL; } - ret = rdma_rw_map_sgtable(dev, &sgt, dir); + ret = ib_dma_map_sgtable_attrs(dev, &sgt, dir, 0); if (ret) return ret; if (prot_sg_cnt) { - ret = rdma_rw_map_sgtable(dev, &prot_sgt, dir); + ret = ib_dma_map_sgtable_attrs(dev, &prot_sgt, dir, 0); if (ret) goto out_unmap_sg; } @@ -486,9 +459,9 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, kfree(ctx->reg); out_unmap_prot_sg: if (prot_sgt.nents) - rdma_rw_unmap_sg(dev, prot_sgt.sgl, prot_sgt.orig_nents, dir); + ib_dma_unmap_sgtable_attrs(dev, &prot_sgt, dir, 0); out_unmap_sg: - rdma_rw_unmap_sg(dev, sgt.sgl, sgt.orig_nents, dir); + ib_dma_unmap_sgtable_attrs(dev, &sgt, dir, 0); return ret; } EXPORT_SYMBOL(rdma_rw_ctx_signature_init); @@ -621,7 +594,7 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, break; } - rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir); + ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); } EXPORT_SYMBOL(rdma_rw_ctx_destroy); @@ -649,8 +622,8 @@ void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp, kfree(ctx->reg); if (prot_sg_cnt) - rdma_rw_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir); - rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir); + ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir); + ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); } EXPORT_SYMBOL(rdma_rw_ctx_destroy_signature); From patchwork Thu Apr 7 15:47:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805431 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B5BEC433EF for ; Thu, 7 Apr 2022 15:55:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 098658D000F; Thu, 7 Apr 2022 11:48:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 045168D000C; Thu, 7 Apr 2022 11:48:16 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E4E7B8D000F; Thu, 7 Apr 2022 11:48:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id D5DF88D000C for ; Thu, 7 Apr 2022 11:48:16 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id A37FD26101 for ; Thu, 7 Apr 2022 15:48:06 +0000 (UTC) X-FDA: 79330514172.14.E09928B Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf04.hostedemail.com (Postfix) with ESMTP id BD0704000D for ; Thu, 7 Apr 2022 15:48:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=jfoTa0crJNOLdR3TnvsqUrQAe21RqnbgMdhBAc58XYc=; b=aCju8nG8aYTulnXJcwNnd/P4lu Y0KUkbzeE8nRQOCseRML4wyYT1JqcfKL/nG9M6f/yB2U5ZkoosIPHrFbexywiJMSvqLrxRbGfyvHh qLy3KWBelgFdpL4gFza53WEJ5+s03l19bGy9a0zhW/EdQWHWrJ4jXkX6rW8UDhaLxEbAinirDylrh dgqKfFHpnBXf7EYGLNl/eSDTjcly9NkuZRdzsyimWBp7HGRkgzhRGVTcDfoePcc1y4LHNp/Wy6rvC xCjvJ2rM/qqAx6bKuWwKAyYnlGPc5LCsJrOAC+Surq9d4yOrYQuRVeTu6r6lnuRVdZ42vL1GrYnKg /J5C8PtQ==; Received: from 
cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMK-002BBi-OJ; Thu, 07 Apr 2022 09:47:45 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMD-00021y-NR; Thu, 07 Apr 2022 09:47:37 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Bjorn Helgaas , Jason Gunthorpe , Max Gurtovoy Date: Thu, 7 Apr 2022 09:47:09 -0600 Message-Id: <20220407154717.7695-14-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, logang@deltatee.com, bhelgaas@google.com, jhubbard@nvidia.com, rcampbell@nvidia.com, jgg@nvidia.com, mgurtovoy@nvidia.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 13/21] PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Stat-Signature: mxk9j5asc5zgnooyxj4a5wfee3i1io38 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: BD0704000D Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=aCju8nG8; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf04.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Rspam-User: X-HE-Tag: 1649346485-994450 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This interface is superseded by support in dma_map_sg() which now supports heterogeneous scatterlists. There are no longer any users, so remove it. 
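For drivers that used to special-case P2PDMA scatterlists, the conversions earlier in this series boil down to deleting the branch; a hedged before/after sketch with generic names (not lifted from any one driver):

	/* before: callers had to pick the mapping routine themselves */
	if (is_pci_p2pdma_page(sg_page(sgl)))
		nents = pci_p2pdma_map_sg(dev, sgl, nents, dir);
	else
		nents = dma_map_sg(dev, sgl, nents, dir);

	/* after: dma_map_sg() recognizes P2PDMA pages on its own */
	nents = dma_map_sg(dev, sgl, nents, dir);

With no callers left, the exported helpers and their static inline stubs can be deleted outright.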
Signed-off-by: Logan Gunthorpe Acked-by: Bjorn Helgaas Reviewed-by: Jason Gunthorpe Reviewed-by: Max Gurtovoy --- drivers/pci/p2pdma.c | 66 -------------------------------------- include/linux/pci-p2pdma.h | 27 ---------------- 2 files changed, 93 deletions(-) diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 9032c2ed2cdf..4d3cab9da748 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -880,72 +880,6 @@ enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap, return type; } -static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap, - struct device *dev, struct scatterlist *sg, int nents) -{ - struct scatterlist *s; - int i; - - for_each_sg(sg, s, nents, i) { - s->dma_address = sg_phys(s) + p2p_pgmap->bus_offset; - sg_dma_len(s) = s->length; - } - - return nents; -} - -/** - * pci_p2pdma_map_sg_attrs - map a PCI peer-to-peer scatterlist for DMA - * @dev: device doing the DMA request - * @sg: scatter list to map - * @nents: elements in the scatterlist - * @dir: DMA direction - * @attrs: DMA attributes passed to dma_map_sg() (if called) - * - * Scatterlists mapped with this function should be unmapped using - * pci_p2pdma_unmap_sg_attrs(). - * - * Returns the number of SG entries mapped or 0 on error. - */ -int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir, unsigned long attrs) -{ - struct pci_p2pdma_pagemap *p2p_pgmap = - to_p2p_pgmap(sg_page(sg)->pgmap); - - switch (pci_p2pdma_map_type(sg_page(sg)->pgmap, dev)) { - case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: - return dma_map_sg_attrs(dev, sg, nents, dir, attrs); - case PCI_P2PDMA_MAP_BUS_ADDR: - return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents); - default: - /* Mapping is not Supported */ - return 0; - } -} -EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg_attrs); - -/** - * pci_p2pdma_unmap_sg_attrs - unmap a PCI peer-to-peer scatterlist that was - * mapped with pci_p2pdma_map_sg() - * @dev: device doing the DMA request - * @sg: scatter list to map - * @nents: number of elements returned by pci_p2pdma_map_sg() - * @dir: DMA direction - * @attrs: DMA attributes passed to dma_unmap_sg() (if called) - */ -void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir, unsigned long attrs) -{ - enum pci_p2pdma_map_type map_type; - - map_type = pci_p2pdma_map_type(sg_page(sg)->pgmap, dev); - - if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) - dma_unmap_sg_attrs(dev, sg, nents, dir, attrs); -} -EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs); - /** * pci_p2pdma_map_segment - map an sg segment determining the mapping type * @state: State structure that should be declared outside of the for_each_sg() diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h index 8318a97c9c61..2c07aa6b7665 100644 --- a/include/linux/pci-p2pdma.h +++ b/include/linux/pci-p2pdma.h @@ -30,10 +30,6 @@ struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, unsigned int *nents, u32 length); void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl); void pci_p2pmem_publish(struct pci_dev *pdev, bool publish); -int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir, unsigned long attrs); -void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir, unsigned long attrs); int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, bool *use_p2pdma); ssize_t 
pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, @@ -83,17 +79,6 @@ static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev, static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) { } -static inline int pci_p2pdma_map_sg_attrs(struct device *dev, - struct scatterlist *sg, int nents, enum dma_data_direction dir, - unsigned long attrs) -{ - return 0; -} -static inline void pci_p2pdma_unmap_sg_attrs(struct device *dev, - struct scatterlist *sg, int nents, enum dma_data_direction dir, - unsigned long attrs) -{ -} static inline int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, bool *use_p2pdma) { @@ -119,16 +104,4 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client) return pci_p2pmem_find_many(&client, 1); } -static inline int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction dir) -{ - return pci_p2pdma_map_sg_attrs(dev, sg, nents, dir, 0); -} - -static inline void pci_p2pdma_unmap_sg(struct device *dev, - struct scatterlist *sg, int nents, enum dma_data_direction dir) -{ - pci_p2pdma_unmap_sg_attrs(dev, sg, nents, dir, 0); -} - #endif /* _LINUX_PCI_P2P_H */ From patchwork Thu Apr 7 15:47:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805415 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08A13C433F5 for ; Thu, 7 Apr 2022 15:51:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 886958D0008; Thu, 7 Apr 2022 11:48:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 834E28D0005; Thu, 7 Apr 2022 11:48:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6B0278D0008; Thu, 7 Apr 2022 11:48:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 5BE248D0005 for ; Thu, 7 Apr 2022 11:48:02 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 3BD8082911 for ; Thu, 7 Apr 2022 15:47:52 +0000 (UTC) X-FDA: 79330513584.19.19F3CD2 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf12.hostedemail.com (Postfix) with ESMTP id CC3EB40005 for ; Thu, 7 Apr 2022 15:47:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=TrHjSjqnpivShFG29VQJnk19Gkp1lAFeMU5qV38TDWY=; b=a9iG1JYzxrzh9kLb7JxN4eciEW 6wbdIlyUdiCtB3wboQS7+GYVifWLnPXvqtOdrgfNGPKLEFhoWGCVGy9DtF3GpbhMKk5/Nf3DPHc4v pgylaj9143l7sLThF3RdTJbb8Llpv5WxV2ffxwsV9ClyJfXFEUTdP124ykttALbtiQuehwx26lf4k nskMYB4mKvKzpXAhH+cOYdh4H5MhKX72v6Ybqn20DmAHJQyvOnNWonO+D1B0BW86dh8mM/onDpL1+ Z3rrZH6Q9QwGseY8CqfbC45IZki1oiMY3jBhHMBkzH//fX+lxNHLHM2V18BfFrT1j4vn1U63tNxcp 2F31pKfQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMK-002BBe-Nm; Thu, 07 Apr 2022 09:47:45 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 
1ncUMD-000222-VC; Thu, 07 Apr 2022 09:47:38 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:10 -0600 Message-Id: <20220407154717.7695-15-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 14/21] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: CC3EB40005 X-Rspam-User: Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=a9iG1JYz; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf12.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Stat-Signature: omjygf7g7iu3p6xpp8acurxstbumcwgc X-HE-Tag: 1649346471-675224 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: GUP callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to allow obtaining P2PDMA pages. If GUP is called without the flag and a P2PDMA page is found, it will return an error (-EREMOTEIO). FOLL_PCI_P2PDMA cannot be combined with FOLL_LONGTERM; that combination is rejected with -EOPNOTSUPP.
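A caller that is prepared to handle P2PDMA pages opts in by adding the flag to its gup flags; a minimal sketch (addr and the page count are placeholders for whatever state the caller already has):

	struct page *pages[16];
	int n;

	/* FOLL_PCI_P2PDMA must not be combined with FOLL_LONGTERM */
	n = get_user_pages_fast(addr, 16, FOLL_WRITE | FOLL_PCI_P2PDMA, pages);
	if (n < 0)
		return n;

Callers that do not pass the flag keep the old behaviour and get an error back when the range contains P2PDMA pages.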
Signed-off-by: Logan Gunthorpe --- include/linux/mm.h | 1 + mm/gup.c | 22 +++++++++++++++++++++- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index e34edb775334..14ef41af8b77 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2936,6 +2936,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */ #define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */ #define FOLL_FAST_ONLY 0x80000 /* gup_fast: prevent fall-back to slow gup */ +#define FOLL_PCI_P2PDMA 0x100000 /* allow returning PCI P2PDMA pages */ /* * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each diff --git a/mm/gup.c b/mm/gup.c index f598a037eb04..0af6f802ca38 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -490,6 +490,12 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, page = pte_page(pte); else goto no_page; + + if (unlikely(!(flags & FOLL_PCI_P2PDMA) && + is_pci_p2pdma_page(page))) { + page = ERR_PTR(-EREMOTEIO); + goto out; + } } else if (unlikely(!page)) { if (flags & FOLL_DUMP) { /* Avoid special (like zero) pages in core dumps */ @@ -919,6 +925,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma)) return -EOPNOTSUPP; + if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_PCI_P2PDMA)) + return -EOPNOTSUPP; + if (vma_is_secretmem(vma)) return -EFAULT; @@ -2184,6 +2193,10 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, VM_BUG_ON(!pfn_valid(pte_pfn(pte))); page = pte_page(pte); + if (unlikely(pte_devmap(pte) && !(flags & FOLL_PCI_P2PDMA) && + is_pci_p2pdma_page(page))) + goto pte_unmap; + folio = try_grab_folio(page, 1, flags); if (!folio) goto pte_unmap; @@ -2258,6 +2271,12 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr, undo_dev_pagemap(nr, nr_start, flags, pages); break; } + + if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) { + undo_dev_pagemap(nr, nr_start, flags, pages); + break; + } + SetPageReferenced(page); pages[*nr] = page; if (unlikely(!try_grab_page(page, flags))) { @@ -2729,7 +2748,8 @@ static int internal_get_user_pages_fast(unsigned long start, if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM | FOLL_FORCE | FOLL_PIN | FOLL_GET | - FOLL_FAST_ONLY | FOLL_NOFAULT))) + FOLL_FAST_ONLY | FOLL_NOFAULT | + FOLL_PCI_P2PDMA))) return -EINVAL; if (gup_flags & FOLL_PIN) From patchwork Thu Apr 7 15:47:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805428 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30885C433EF for ; Thu, 7 Apr 2022 15:53:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3A4198D000A; Thu, 7 Apr 2022 11:48:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 32B828D000C; Thu, 7 Apr 2022 11:48:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1CD4B8D000A; Thu, 7 Apr 2022 11:48:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0178.hostedemail.com [216.40.44.178]) by kanga.kvack.org (Postfix) with ESMTP id 0BB3B8D000C for 
; Thu, 7 Apr 2022 11:48:07 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C4933AD6CB for ; Thu, 7 Apr 2022 15:47:56 +0000 (UTC) X-FDA: 79330513752.29.B9739AD Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf04.hostedemail.com (Postfix) with ESMTP id 5A6CE4000B for ; Thu, 7 Apr 2022 15:47:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=NVweqEcjCecZ0bg5dlWjMJWWbRBpQR845+naXv01S1E=; b=h2VvbNMMeOyfd9uAusM38USxqP BFBKHX85h+K9Iouhc5J025ZGSoGwKIXCO7ojH7XY7YyHtqY+FxOrMiXktfFPC7gZLVbBq6QA4Qrqv PwwUyHVKqeCbkzTZVPu2r1p/mePJAyTIMmfWusxrJqTms3K+wv/Mo3p/YBnQtsZcnxrkfsf90KZUZ Xg1LT6yc0J5GbUL+iqWxGKdJq7/8uvyiNuLx7fcBLUIz5E8rLwHtHv/EBxJImTmU6jMWSTAQAZ2Sk jcbEXGAMH4iMByypCuG9AOAn5Wdl8PTU5kKxepYadTEARhyeUEkfv3RDRU6GmNTMuzQ8V4aGuRy3/ YqtIYkZQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMK-002BBg-28; Thu, 07 Apr 2022 09:47:44 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUME-000226-6k; Thu, 07 Apr 2022 09:47:38 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:11 -0600 Message-Id: <20220407154717.7695-16-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 15/21] iov_iter: introduce iov_iter_get_pages_[alloc_]flags() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspam-User: Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=h2VvbNMM; spf=pass (imf04.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 5A6CE4000B 
X-Stat-Signature: ja6ihdfsskpdshhciti6a189m9r869ac X-HE-Tag: 1649346476-233249 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add iov_iter_get_pages_flags() and iov_iter_get_pages_alloc_flags() which take a flags argument that is passed to get_user_pages_fast(). This is so that FOLL_PCI_P2PDMA can be passed when appropriate. Signed-off-by: Logan Gunthorpe --- include/linux/uio.h | 6 ++++++ lib/iov_iter.c | 25 +++++++++++++++++++------ 2 files changed, 25 insertions(+), 6 deletions(-) diff --git a/include/linux/uio.h b/include/linux/uio.h index 739285fe5a2f..ddf9e4cf4a59 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -232,8 +232,14 @@ void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray, loff_t start, size_t count); +ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages, + size_t maxsize, unsigned maxpages, size_t *start, + unsigned int gup_flags); ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, size_t maxsize, unsigned maxpages, size_t *start); +ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i, + struct page ***pages, size_t maxsize, size_t *start, + unsigned int gup_flags); ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages, size_t maxsize, size_t *start); int iov_iter_npages(const struct iov_iter *i, int maxpages); diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 6dd5330f7a99..9bf6e3af5120 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -1515,9 +1515,9 @@ static struct page *first_bvec_segment(const struct iov_iter *i, return page; } -ssize_t iov_iter_get_pages(struct iov_iter *i, +ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages, size_t maxsize, unsigned maxpages, - size_t *start) + size_t *start, unsigned int gup_flags) { size_t len; int n, res; @@ -1528,7 +1528,6 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, return 0; if (likely(iter_is_iovec(i))) { - unsigned int gup_flags = 0; unsigned long addr; if (iov_iter_rw(i) != WRITE) @@ -1558,6 +1557,13 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); return -EFAULT; } +EXPORT_SYMBOL_GPL(iov_iter_get_pages_flags); + +ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, + size_t maxsize, unsigned maxpages, size_t *start) +{ + return iov_iter_get_pages_flags(i, pages, maxsize, maxpages, start, 0); +} EXPORT_SYMBOL(iov_iter_get_pages); static struct page **get_pages_array(size_t n) @@ -1640,9 +1646,9 @@ static ssize_t iter_xarray_get_pages_alloc(struct iov_iter *i, return actual; } -ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, +ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i, struct page ***pages, size_t maxsize, - size_t *start) + size_t *start, unsigned int gup_flags) { struct page **p; size_t len; @@ -1654,7 +1660,6 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, return 0; if (likely(iter_is_iovec(i))) { - unsigned int gup_flags = 0; unsigned long addr; if (iov_iter_rw(i) != WRITE) @@ -1667,6 +1672,7 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, p = get_pages_array(n); if (!p) return -ENOMEM; + res = get_user_pages_fast(addr, n, gup_flags, p); if (unlikely(res <= 0)) { kvfree(p); @@ -1694,6 +1700,13 @@ 
ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, return iter_xarray_get_pages_alloc(i, pages, maxsize, start); return -EFAULT; } +EXPORT_SYMBOL_GPL(iov_iter_get_pages_alloc_flags); + +ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages, + size_t maxsize, size_t *start) +{ + return iov_iter_get_pages_alloc_flags(i, pages, maxsize, start, 0); +} EXPORT_SYMBOL(iov_iter_get_pages_alloc); size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, From patchwork Thu Apr 7 15:47:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805414 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D6D8C433EF for ; Thu, 7 Apr 2022 15:51:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CCAFB8D0007; Thu, 7 Apr 2022 11:48:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C7AF48D0005; Thu, 7 Apr 2022 11:48:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B1D398D0007; Thu, 7 Apr 2022 11:48:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id A117C8D0005 for ; Thu, 7 Apr 2022 11:48:00 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 66955AAA15 for ; Thu, 7 Apr 2022 15:47:50 +0000 (UTC) X-FDA: 79330513500.26.26714E7 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf28.hostedemail.com (Postfix) with ESMTP id E47EBC0007 for ; Thu, 7 Apr 2022 15:47:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=NhYwwPYUOPNVkjQIdyegnu/TVIl/nr3VeJZ7F4Q1rQY=; b=kW115vO/lD5rTsxYR5wDs4inqe b62XLPVR1SKPBKhjIMGvH8/MnKPRwidsE+zcAlS8tHsYa6gYUO/lWQKAiq0fskHTB4DfCzP9KwEt2 awuANPVzBGDxwiLlGKvFZcVBO0iijF1vKPmrj95MhayJpJlrS46NDTFfC+t7KI8DNk4ry1m5/8d1J +7oK34SQ8cIiJnBx3+JJHIyr3dMLRP4mSXQh6eF23EX5U4e3yvWW2GUbw6AGAk7S7lbdREAiI4Faf /F4ASkipN0tLk9wxIMmSs5rh/01GZKqdFrmMo+fMt2J66f3X0zQZYZqeS8UlYt9kYOkl7xUEiidL6 7kOVCESQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMJ-002BBi-Mr; Thu, 07 Apr 2022 09:47:44 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUME-00022A-Fc; Thu, 07 Apr 2022 09:47:38 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:12 -0600 Message-Id: 
<20220407154717.7695-17-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 16/21] block: add check when merging zone device pages X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: E47EBC0007 X-Stat-Signature: kqjg6nebojrghuophsjs9oiehstx1pu1 X-Rspam-User: Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b="kW115vO/"; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf28.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-HE-Tag: 1649346469-371787 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Consecutive zone device pages should not be merged into the same sgl or bvec segment with other types of pages or if they belong to different pgmaps. Otherwise getting the pgmap of a given segment is not possible without scanning the entire segment. This helper returns true either if both pages are not zone device pages or both pages are zone device pages with the same pgmap. Add a helper to determine if zone device pages are mergeable and use this helper in page_is_mergeable(). Signed-off-by: Logan Gunthorpe --- block/bio.c | 2 ++ include/linux/mm.h | 23 +++++++++++++++++++++++ 2 files changed, 25 insertions(+) diff --git a/block/bio.c b/block/bio.c index cdd7b2915c53..3406c0450db3 100644 --- a/block/bio.c +++ b/block/bio.c @@ -834,6 +834,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, return false; if (xen_domain() && !xen_biovec_phys_mergeable(bv, page)) return false; + if (!zone_device_pages_have_same_pgmap(bv->bv_page, page)) + return false; *same_page = ((vec_end_addr & PAGE_MASK) == page_addr); if (*same_page) diff --git a/include/linux/mm.h b/include/linux/mm.h index 14ef41af8b77..fb2264a17e4a 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1108,6 +1108,24 @@ static inline bool is_zone_device_page(const struct page *page) { return page_zonenum(page) == ZONE_DEVICE; } + +/* + * Consecutive zone device pages should not be merged into the same sgl + * or bvec segment with other types of pages or if they belong to different + * pgmaps. Otherwise getting the pgmap of a given segment is not possible + * without scanning the entire segment. This helper returns true either if + * both pages are not zone device pages or both pages are zone device pages + * with the same pgmap. 
+ */ +static inline bool zone_device_pages_have_same_pgmap(const struct page *a, + const struct page *b) +{ + if (is_zone_device_page(a) != is_zone_device_page(b)) + return false; + if (!is_zone_device_page(a)) + return true; + return a->pgmap == b->pgmap; +} extern void memmap_init_zone_device(struct zone *, unsigned long, unsigned long, struct dev_pagemap *); #else @@ -1115,6 +1133,11 @@ static inline bool is_zone_device_page(const struct page *page) { return false; } +static inline bool zone_device_pages_have_same_pgmap(const struct page *a, + const struct page *b) +{ + return true; +} #endif static inline bool folio_is_zone_device(const struct folio *folio) From patchwork Thu Apr 7 15:47:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76562C433F5 for ; Thu, 7 Apr 2022 15:52:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D68418D0005; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D16B78D000A; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BDE648D0005; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id AA4138D000A for ; Thu, 7 Apr 2022 11:48:04 -0400 (EDT) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 76CD563989 for ; Thu, 7 Apr 2022 15:47:54 +0000 (UTC) X-FDA: 79330513668.17.8E74D4E Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf21.hostedemail.com (Postfix) with ESMTP id 12EDE1C0002 for ; Thu, 7 Apr 2022 15:47:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=UO35KE8YVx2v+yNTFMhmKCQE8ptUbM1lG6ODy8MFD4U=; b=LMw5Rh20DcM5d3Dq/RAQudlUUm pdDtlqxuBvAu0xmse3lD5xY5hmgzp8R947S7R2jsS0GWxslaAcBPvwmJjj7hqRG50GT1x7/0ep8AF bsfusTGJF30H/i8vsUxpfkIpaROaJYKTJFFlEuke+PmjYJ4Vm33iK/ayNz1QIX0B05zhY4icWzaC0 eejbcI76PxreVywpanJaCl/qgBtoDKGMGUXtcX5kOf+RgYNPeC3EA8tP3eHqX1OFxTFAXonV4SxZg oSA3nC2LAVV0jPvP6aCdEbD28nAoffDmVerckIp87s0HVl2Z51lAMgypbtpQlxC269BhhosRUaS84 AEnUWm1Q==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMJ-002BBf-MY; Thu, 07 Apr 2022 09:47:44 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUME-00022E-NV; Thu, 07 Apr 2022 09:47:38 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin 
Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:13 -0600 Message-Id: <20220407154717.7695-18-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 17/21] lib/scatterlist: add check when merging zone device pages X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 12EDE1C0002 X-Stat-Signature: tzsr6jxksbi1fzgfrjkn65jnr6tcscxr X-Rspam-User: Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=LMw5Rh20; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf21.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-HE-Tag: 1649346473-141957 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Consecutive zone device pages should not be merged into the same sgl or bvec segment with other types of pages or if they belong to different pgmaps. Otherwise getting the pgmap of a given segment is not possible without scanning the entire segment. The helper added in the previous patch returns true if either both pages are not zone device pages or both pages are zone device pages with the same pgmap. Factor out the check for page mergeability into a pages_are_mergeable() helper and add a check using zone_device_pages_have_same_pgmap().
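To make the rule concrete, below is a minimal standalone model of the check (ordinary userspace C, not kernel code; struct fake_page, the pfn values and the pgmap identities are invented for the example and only mirror what pages_are_mergeable() and zone_device_pages_have_same_pgmap() look at):

#include <stdbool.h>
#include <stdio.h>

struct fake_page {			/* stand-in for struct page */
	unsigned long pfn;
	const void *pgmap;		/* NULL means "not a zone device page" */
};

static bool same_pgmap(const struct fake_page *a, const struct fake_page *b)
{
	if ((a->pgmap != NULL) != (b->pgmap != NULL))
		return false;		/* one is zone device, the other is not */
	if (!a->pgmap)
		return true;		/* neither is a zone device page */
	return a->pgmap == b->pgmap;	/* both are: must share a pgmap */
}

static bool mergeable(const struct fake_page *next, const struct fake_page *prev)
{
	return next->pfn == prev->pfn + 1 && same_pgmap(next, prev);
}

int main(void)
{
	const char pgmap_a[1] = { 0 }, pgmap_b[1] = { 0 };
	struct fake_page ram1  = { .pfn = 100, .pgmap = NULL };
	struct fake_page ram2  = { .pfn = 101, .pgmap = NULL };
	struct fake_page p2p1  = { .pfn = 102, .pgmap = pgmap_a };
	struct fake_page p2p2  = { .pfn = 103, .pgmap = pgmap_a };
	struct fake_page other = { .pfn = 104, .pgmap = pgmap_b };

	printf("ram2 after ram1:  %d\n", mergeable(&ram2, &ram1));   /* 1: contiguous normal pages */
	printf("p2p1 after ram2:  %d\n", mergeable(&p2p1, &ram2));   /* 0: zone device next to normal */
	printf("p2p2 after p2p1:  %d\n", mergeable(&p2p2, &p2p1));   /* 1: contiguous, same pgmap */
	printf("other after p2p2: %d\n", mergeable(&other, &p2p2));  /* 0: different pgmaps */
	return 0;
}

Two neighbouring pages may share a segment only when their pfns are contiguous and they agree on both zone-device status and pgmap; anything else starts a new segment.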
Signed-off-by: Logan Gunthorpe --- lib/scatterlist.c | 25 +++++++++++++++---------- 1 file changed, 15 insertions(+), 10 deletions(-) diff --git a/lib/scatterlist.c b/lib/scatterlist.c index d5e82e4a57ad..af53a0984f76 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -410,6 +410,15 @@ static struct scatterlist *get_next_sg(struct sg_append_table *table, return new_sg; } +static bool pages_are_mergeable(struct page *a, struct page *b) +{ + if (page_to_pfn(a) != page_to_pfn(b) + 1) + return false; + if (!zone_device_pages_have_same_pgmap(a, b)) + return false; + return true; +} + /** * sg_alloc_append_table_from_pages - Allocate and initialize an append sg * table from an array of pages @@ -447,6 +456,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, unsigned int chunks, cur_page, seg_len, i, prv_len = 0; unsigned int added_nents = 0; struct scatterlist *s = sgt_append->prv; + struct page *last_pg; /* * The algorithm below requires max_segment to be aligned to PAGE_SIZE @@ -460,21 +470,17 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, return -EOPNOTSUPP; if (sgt_append->prv) { - unsigned long paddr = - (page_to_pfn(sg_page(sgt_append->prv)) * PAGE_SIZE + - sgt_append->prv->offset + sgt_append->prv->length) / - PAGE_SIZE; - if (WARN_ON(offset)) return -EINVAL; /* Merge contiguous pages into the last SG */ prv_len = sgt_append->prv->length; - while (n_pages && page_to_pfn(pages[0]) == paddr) { + last_pg = sg_page(sgt_append->prv); + while (n_pages && pages_are_mergeable(last_pg, pages[0])) { if (sgt_append->prv->length + PAGE_SIZE > max_segment) break; sgt_append->prv->length += PAGE_SIZE; - paddr++; + last_pg = pages[0]; pages++; n_pages--; } @@ -488,7 +494,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, for (i = 1; i < n_pages; i++) { seg_len += PAGE_SIZE; if (seg_len >= max_segment || - page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1) { + !pages_are_mergeable(pages[i], pages[i - 1])) { chunks++; seg_len = 0; } @@ -504,8 +510,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, for (j = cur_page + 1; j < n_pages; j++) { seg_len += PAGE_SIZE; if (seg_len >= max_segment || - page_to_pfn(pages[j]) != - page_to_pfn(pages[j - 1]) + 1) + !pages_are_mergeable(pages[j], pages[j - 1])) break; } From patchwork Thu Apr 7 15:47:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805427 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 102BDC433F5 for ; Thu, 7 Apr 2022 15:53:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D67FC8D000B; Thu, 7 Apr 2022 11:48:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D16C18D000A; Thu, 7 Apr 2022 11:48:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BB8CC8D000B; Thu, 7 Apr 2022 11:48:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0253.hostedemail.com [216.40.44.253]) by kanga.kvack.org (Postfix) with ESMTP id AB99D8D000A for ; Thu, 7 Apr 2022 11:48:06 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 668A418418895 
for ; Thu, 7 Apr 2022 15:47:56 +0000 (UTC) X-FDA: 79330513752.31.A26FAB6 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf08.hostedemail.com (Postfix) with ESMTP id 017C3160008 for ; Thu, 7 Apr 2022 15:47:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=Nx50PJtvGhphGZXf/1XGmZRg6RGOlaGtqP7f50ukagw=; b=jzm7m5DYH4sfzjYBB8mLijU5Mu 8yjfUysXi6ZAqs3yL1ob6tiPOBWuLjowWMQM5MNBbA8fiUTS0HKoMiy3G636mVtT3/1mUfXdEXNv1 KparX8YnOZmwmCSiKji0FXpI0tFNCrypJrr9n86GpGFizaVlBHqWocqQrQPaV8Fz0a2TQKS2odOKt Xlu683AaYUFZ+Cn+nsa8JQCXH7eMVRlcGrJ0/7yTrfVIeVSIxByPcpnPuEDZLoFmQk+JJv+Wkhyoq UpAgsJlvYd73eZsD0WtwYOGwrB8dHpNMf8EAMJPmF0hQRhOt1pB+YssSv3tylIT8VYXPVrE5oO0/W Pu5YgCfQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMJ-002BBh-Mu; Thu, 07 Apr 2022 09:47:44 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUME-00022I-Vr; Thu, 07 Apr 2022 09:47:39 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:14 -0600 Message-Id: <20220407154717.7695-19-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 18/21] block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Stat-Signature: hbs91qwyg69qpkjrikebesxo1dsn3zxj Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=jzm7m5DY; spf=pass (imf08.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 017C3160008 X-HE-Tag: 1649346475-274972 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: 
bulk X-Loop: owner-majordomo@kvack.org List-ID: When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed from userspace and enables the O_DIRECT path in iomap based filesystems and direct to block devices. Signed-off-by: Logan Gunthorpe --- block/bio.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/block/bio.c b/block/bio.c index 3406c0450db3..271a720a6dc1 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1149,6 +1149,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt; struct page **pages = (struct page **)bv; bool same_page = false; + unsigned int flags = 0; ssize_t size, left; unsigned len, i; size_t offset; @@ -1161,7 +1162,12 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2); pages += entries_left * (PAGE_PTRS_PER_BVEC - 1); - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); + if (bio->bi_bdev && bio->bi_bdev->bd_disk && + blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue)) + flags |= FOLL_PCI_P2PDMA; + + size = iov_iter_get_pages_flags(iter, pages, LONG_MAX, nr_pages, + &offset, flags); if (unlikely(size <= 0)) return size ? size : -EFAULT; From patchwork Thu Apr 7 15:47:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805408 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0B139C433EF for ; Thu, 7 Apr 2022 15:48:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 52A516B0072; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4D91B6B0073; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 379F36B0074; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 26BBB6B0072 for ; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id D819C26084 for ; Thu, 7 Apr 2022 15:47:47 +0000 (UTC) X-FDA: 79330513374.10.1AC1F86 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf18.hostedemail.com (Postfix) with ESMTP id 46D2C1C0005 for ; Thu, 7 Apr 2022 15:47:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=ynMPFyjctQYmZSaKetGEvfKIotM82q1ISZ1qanr0c8w=; b=VhIgXvEcyj1A/PLDTYzw5XYQWL ARiEaGxvCCJA41jvsk7kg1wMTfovuD3ntVBJrsbf7t9ycUGNGm7MDPZbBT7F9zkLVF4jJ+tCoBUgZ iJM3DNSwzM+51c1dhlXZwuoDAriAeMBz6LcYhb3ZVPfLyLT1Lzvn1U2XjqTc0chjvQ6U11sVpwxJb GFs1/94eKn4QIkXnneT2M5jTgxO641YGfynE71dcLnYU8kg1M25Vqf5EFKpHCbkrOV/LvqJekr91g nFUL1EHBIw637mpoAVVaFgpg7YDOjcKgEnegFXScxw5o2DA9uYrqdAYljF42urPSdfa40ecBAS/TU NL3bwiJA==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMJ-002BBg-7F; Thu, 
07 Apr 2022 09:47:43 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMF-00022M-6K; Thu, 07 Apr 2022 09:47:39 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:15 -0600 Message-Id: <20220407154717.7695-20-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 19/21] block: set FOLL_PCI_P2PDMA in bio_map_user_iov() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspam-User: X-Stat-Signature: dgaak9ieszx6fh8kq7b5icykxw5qq8gd Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=VhIgXvEc; spf=pass (imf18.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com; dmarc=pass (policy=none) header.from=deltatee.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 46D2C1C0005 X-HE-Tag: 1649346467-598371 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed from userspace and enables the NVMe passthru requests to use P2PDMA pages. 
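The calling convention here is the same as in the previous patch: derive gup_flags from a capability check on the devices that will do the DMA, then hand it to the new iov_iter helper. A condensed sketch of that pattern (not part of the patch; example_map_user_buffer() and its boolean argument are invented for illustration, the real code checks blk_queue_pci_p2pdma() on the request queue):

#include <linux/limits.h>
#include <linux/mm.h>
#include <linux/uio.h>

/*
 * Opt-in pattern: only request P2PDMA pages when every device that will
 * touch the resulting pages is known to handle PCI bus addresses.
 */
static ssize_t example_map_user_buffer(struct iov_iter *iter,
				       struct page ***pages, size_t *offs,
				       bool queue_supports_p2pdma)
{
	unsigned int gup_flags = 0;

	if (queue_supports_p2pdma)
		gup_flags |= FOLL_PCI_P2PDMA;

	/* LONG_MAX: map as much of the iterator as possible */
	return iov_iter_get_pages_alloc_flags(iter, pages, LONG_MAX, offs,
					      gup_flags);
}

Callers that never set FOLL_PCI_P2PDMA keep the old behaviour: GUP continues to reject P2PDMA pages with -EREMOTEIO.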
Signed-off-by: Logan Gunthorpe --- block/blk-map.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/block/blk-map.c b/block/blk-map.c index c7f71d83eff1..85baf922a0e8 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -234,6 +234,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, gfp_t gfp_mask) { unsigned int max_sectors = queue_max_hw_sectors(rq->q); + unsigned int flags = 0; struct bio *bio; int ret; int j; @@ -246,13 +247,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, return -ENOMEM; bio->bi_opf |= req_op(rq); + if (blk_queue_pci_p2pdma(rq->q)) + flags |= FOLL_PCI_P2PDMA; + while (iov_iter_count(iter)) { struct page **pages; ssize_t bytes; size_t offs, added = 0; int npages; - bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs); + bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX, + &offs, flags); if (unlikely(bytes <= 0)) { ret = bytes ? bytes : -EFAULT; goto out_unmap; From patchwork Thu Apr 7 15:47:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805429 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18EA1C433EF for ; Thu, 7 Apr 2022 15:54:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 41A1C8D000D; Thu, 7 Apr 2022 11:48:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3C8BC8D000C; Thu, 7 Apr 2022 11:48:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 26B328D000D; Thu, 7 Apr 2022 11:48:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0224.hostedemail.com [216.40.44.224]) by kanga.kvack.org (Postfix) with ESMTP id 134198D000C for ; Thu, 7 Apr 2022 11:48:08 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id C1834AD853 for ; Thu, 7 Apr 2022 15:47:57 +0000 (UTC) X-FDA: 79330513794.30.8BCEDB6 Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf07.hostedemail.com (Postfix) with ESMTP id 280AE40005 for ; Thu, 7 Apr 2022 15:47:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=04waCzc1/d2UZmMXuw2PXk+J6f5XQzdg3pAe0UrfN/Q=; b=oPIlPQycw1S8szXF8c4/CHGjLW WDDeDe9Uey3MFlX3GtCyEOhz+eXrl1620/AAeT3ggWKwmgLqA7KjFZ9Kf1C7YauNtvwL/liMpVZgt +VifA7A6vNtOzvs+EYnliJtR0aG7sp4TGmAXxesxGb09iAHzdIadpsf8Kdw69APIW8qNqk0sa477b kzjFLziSX5kGOFBk7ig4p1AYKkEJkPZHeD1+mxLzTtrcw70zdeiDhUVbmH2+7/7vW8fn7W7ofKDsa ajRu7/EBT65I0zeYp0FJFB2IPG/jfcmG2sBHDcI1sdz8cBj/RTt0yXRfvZh8drzcE5f9n95EVAaZ0 AaDPDijg==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMI-002BBe-Nw; Thu, 07 Apr 2022 09:47:44 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMF-00022Q-Ce; Thu, 07 Apr 2022 09:47:39 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, 
linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe , Bjorn Helgaas Date: Thu, 7 Apr 2022 09:47:16 -0600 Message-Id: <20220407154717.7695-21-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com, bhelgaas@google.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 20/21] PCI/P2PDMA: Introduce pci_mmap_p2pmem() X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 280AE40005 X-Stat-Signature: m7sefet6ezuhi67d9hskkpbtactrz4op Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=oPIlPQyc; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf07.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Rspam-User: X-HE-Tag: 1649346476-229308 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Introduce pci_mmap_p2pmem(), a helper to allocate and mmap a hunk of p2pmem into userspace. Pages are allocated from the genalloc in bulk with their reference count set to one. They are returned to the genalloc when the page is put through p2pdma_page_free() (the reference count is once again set to 1 in free_zone_device_page()). The VMA does not take a reference to the pages when they are inserted with vmf_insert_mixed() (which is necessary for zone device pages), so the backing P2P memory is stored in a structure in vm_private_data. A pseudo mount is used to allocate an inode for each PCI device. The inode's address_space is used by the file doing the mmap so that all VMAs are collected and can be unmapped if the PCI device is unbound. After unmapping, the VMAs are iterated through and their pages are put so the device can continue to be unbound. An active flag is used to signal to VMAs not to allocate any further P2P memory once the removal process starts. The flag is synchronized against concurrent access with an RCU lock. The VMAs and inode will survive after the unbind of the device, but no pages will be present in the VMA and a subsequent access will result in a SIGBUS error.
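As a sketch of how a driver is expected to wire these helpers up (not part of the patch; the nvme-pci patch at the end of the series does the equivalent through nvme_ctrl_ops, and the misc device as well as the example_pdev pointer below are invented placeholders):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Placeholder: a real driver would look its pci_dev up from the char device. */
static struct pci_dev *example_pdev;

static int example_p2pmem_open(struct inode *inode, struct file *file)
{
	/* Point f_mapping at the P2PDMA inode so unbind can tear the VMAs down. */
	pci_p2pdma_file_open(example_pdev, file);
	return 0;
}

static int example_p2pmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* Hands out pages from the BAR registered with pci_p2pdma_add_resource(). */
	return pci_mmap_p2pmem(example_pdev, vma);
}

static const struct file_operations example_p2pmem_fops = {
	.owner	= THIS_MODULE,
	.open	= example_p2pmem_open,
	.mmap	= example_p2pmem_mmap,
};

static struct miscdevice example_p2pmem_dev = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "example_p2pmem",		/* /dev/example_p2pmem */
	.fops	= &example_p2pmem_fops,
};
module_misc_device(example_p2pmem_dev);
MODULE_LICENSE("GPL");

Userspace then simply opens the node and mmaps it with MAP_SHARED at offset zero; the fault handler allocates from the genalloc on first access, and any access after the device has been unbound results in SIGBUS as described above.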
Signed-off-by: Logan Gunthorpe Acked-by: Bjorn Helgaas --- drivers/pci/p2pdma.c | 340 ++++++++++++++++++++++++++++++++++++- include/linux/pci-p2pdma.h | 11 ++ include/uapi/linux/magic.h | 1 + 3 files changed, 350 insertions(+), 2 deletions(-) diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 4d3cab9da748..cce4c7b6dd75 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -17,14 +17,19 @@ #include #include #include +#include +#include #include #include #include +#include struct pci_p2pdma { struct gen_pool *pool; bool p2pmem_published; struct xarray map_types; + struct inode *inode; + bool active; }; struct pci_p2pdma_pagemap { @@ -33,6 +38,17 @@ struct pci_p2pdma_pagemap { u64 bus_offset; }; +struct pci_p2pdma_map { + struct kref ref; + struct rcu_head rcu; + struct pci_dev *pdev; + struct inode *inode; + size_t len; + + spinlock_t kaddr_lock; + void *kaddr; +}; + static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap) { return container_of(pgmap, struct pci_p2pdma_pagemap, pgmap); @@ -101,6 +117,38 @@ static const struct attribute_group p2pmem_group = { .name = "p2pmem", }; +/* + * P2PDMA internal mount + * Fake an internal VFS mount-point in order to allocate struct address_space + * mappings to remove VMAs on unbind events. + */ +static int pci_p2pdma_fs_cnt; +static struct vfsmount *pci_p2pdma_fs_mnt; + +static int pci_p2pdma_fs_init_fs_context(struct fs_context *fc) +{ + return init_pseudo(fc, P2PDMA_MAGIC) ? 0 : -ENOMEM; +} + +static struct file_system_type pci_p2pdma_fs_type = { + .name = "p2dma", + .owner = THIS_MODULE, + .init_fs_context = pci_p2pdma_fs_init_fs_context, + .kill_sb = kill_anon_super, +}; + +static void p2pdma_page_free(struct page *page) +{ + struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap); + + gen_pool_free(pgmap->provider->p2pdma->pool, + (uintptr_t)page_to_virt(page), PAGE_SIZE); +} + +static const struct dev_pagemap_ops p2pdma_pgmap_ops = { + .page_free = p2pdma_page_free, +}; + static void pci_p2pdma_release(void *data) { struct pci_dev *pdev = data; @@ -117,6 +165,9 @@ static void pci_p2pdma_release(void *data) gen_pool_destroy(p2pdma->pool); sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); xa_destroy(&p2pdma->map_types); + + iput(p2pdma->inode); + simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt); } static int pci_p2pdma_setup(struct pci_dev *pdev) @@ -134,17 +185,32 @@ static int pci_p2pdma_setup(struct pci_dev *pdev) if (!p2p->pool) goto out; - error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); + error = simple_pin_fs(&pci_p2pdma_fs_type, &pci_p2pdma_fs_mnt, + &pci_p2pdma_fs_cnt); if (error) goto out_pool_destroy; + p2p->inode = alloc_anon_inode(pci_p2pdma_fs_mnt->mnt_sb); + if (IS_ERR(p2p->inode)) { + error = -ENOMEM; + goto out_unpin_fs; + } + + error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); + if (error) + goto out_put_inode; + error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group); if (error) - goto out_pool_destroy; + goto out_put_inode; rcu_assign_pointer(pdev->p2pdma, p2p); return 0; +out_put_inode: + iput(p2p->inode); +out_unpin_fs: + simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt); out_pool_destroy: gen_pool_destroy(p2p->pool); out: @@ -152,6 +218,54 @@ static int pci_p2pdma_setup(struct pci_dev *pdev) return error; } +static void pci_p2pdma_map_free_pages(struct pci_p2pdma_map *pmap) +{ + int i; + + if (!pmap->kaddr) + return; + + for (i = 0; i < pmap->len; i += PAGE_SIZE) + put_page(virt_to_page(pmap->kaddr + i)); + + 
pmap->kaddr = NULL; +} + +static void pci_p2pdma_free_mappings(struct address_space *mapping) +{ + struct vm_area_struct *vma; + + i_mmap_lock_write(mapping); + if (RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)) + goto out; + + vma_interval_tree_foreach(vma, &mapping->i_mmap, 0, -1) + pci_p2pdma_map_free_pages(vma->vm_private_data); + +out: + i_mmap_unlock_write(mapping); +} + +static void pci_p2pdma_unmap_mappings(void *data) +{ + struct pci_dev *pdev = data; + struct pci_p2pdma *p2pdma = rcu_dereference_protected(pdev->p2pdma, 1); + + /* Ensure no new pages can be allocated in mappings */ + p2pdma->active = false; + synchronize_rcu(); + + unmap_mapping_range(p2pdma->inode->i_mapping, 0, 0, 1); + + /* + * On some architectures, TLB flushes are done with call_rcu() + * so to ensure GUP fast is done with the pages, call synchronize_rcu() + * before freeing them. + */ + synchronize_rcu(); + pci_p2pdma_free_mappings(p2pdma->inode->i_mapping); +} + /** * pci_p2pdma_add_resource - add memory for use as p2p memory * @pdev: the device to add the memory to @@ -198,6 +312,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, pgmap->range.end = pgmap->range.start + size - 1; pgmap->nr_range = 1; pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; + pgmap->ops = &p2pdma_pgmap_ops; p2p_pgmap->provider = pdev; p2p_pgmap->bus_offset = pci_bus_address(pdev, bar) - @@ -209,6 +324,11 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, goto pgmap_free; } + error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_unmap_mappings, + pdev); + if (error) + goto pages_free; + p2pdma = rcu_dereference_protected(pdev->p2pdma, 1); error = gen_pool_add_owner(p2pdma->pool, (unsigned long)addr, pci_bus_address(pdev, bar) + offset, @@ -217,6 +337,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, if (error) goto pages_free; + p2pdma->active = true; pci_info(pdev, "added peer-to-peer DMA memory %#llx-%#llx\n", pgmap->range.start, pgmap->range.end); @@ -1018,3 +1139,218 @@ ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, return sprintf(page, "%s\n", pci_name(p2p_dev)); } EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show); + +static struct pci_p2pdma_map *pci_p2pdma_map_alloc(struct pci_dev *pdev, + size_t len) +{ + struct pci_p2pdma_map *pmap; + + pmap = kzalloc(sizeof(*pmap), GFP_KERNEL); + if (!pmap) + return NULL; + + kref_init(&pmap->ref); + pmap->pdev = pci_dev_get(pdev); + pmap->len = len; + + return pmap; +} + +static void pci_p2pdma_map_free(struct rcu_head *rcu) +{ + struct pci_p2pdma_map *pmap = + container_of(rcu, struct pci_p2pdma_map, rcu); + + pci_p2pdma_map_free_pages(pmap); + kfree(pmap); +} + +static void pci_p2pdma_map_release(struct kref *ref) +{ + struct pci_p2pdma_map *pmap = + container_of(ref, struct pci_p2pdma_map, ref); + + iput(pmap->inode); + simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt); + pci_dev_put(pmap->pdev); + + if (pmap->kaddr) { + /* + * Make sure to wait for the TLB flush (which some + * architectures do using call_rcu()) before returning the + * pages to the genalloc. This ensures the pages are not reused + * before GUP-fast is finished with them. So the mapping is + * freed using call_rcu() seeing adding synchronize_rcu() to + * the munmap path can cause long delays on large systems + * during process cleanup. + */ + call_rcu(&pmap->rcu, pci_p2pdma_map_free); + return; + } + + /* + * If there are no pages, just free the object immediately. 
There + * are no more references to it so there is nothing that can race + * with adding the pages. + */ + pci_p2pdma_map_free(&pmap->rcu); +} + +static void pci_p2pdma_vma_open(struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap = vma->vm_private_data; + + kref_get(&pmap->ref); +} + +static void pci_p2pdma_vma_close(struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap = vma->vm_private_data; + + kref_put(&pmap->ref, pci_p2pdma_map_release); +} + +static bool pci_p2pdma_vma_alloc(struct pci_p2pdma_map *pmap) +{ + struct pci_p2pdma *p2pdma; + bool ret = true; + void *kaddr; + + spin_lock(&pmap->kaddr_lock); + if (pmap->kaddr) + goto out_spin_unlock; + + rcu_read_lock(); + ret = false; + p2pdma = rcu_dereference(pmap->pdev->p2pdma); + if (!p2pdma) + goto out_rcu_unlock; + + if (!p2pdma->active) + goto out_rcu_unlock; + + kaddr = (void *)gen_pool_alloc(p2pdma->pool, pmap->len); + if (!kaddr) { + pci_err(pmap->pdev, + "No free P2PDMA memory for userspace mmap\n"); + goto out_rcu_unlock; + } + + WRITE_ONCE(pmap->kaddr, kaddr); + ret = true; + +out_rcu_unlock: + rcu_read_unlock(); +out_spin_unlock: + spin_unlock(&pmap->kaddr_lock); + return ret; +} + +static vm_fault_t pci_p2pdma_vma_fault(struct vm_fault *vmf) +{ + struct pci_p2pdma_map *pmap = vmf->vma->vm_private_data; + void *vaddr; + pfn_t pfn; + + if (!READ_ONCE(pmap->kaddr)) { + if (!pci_p2pdma_vma_alloc(pmap)) + return VM_FAULT_SIGBUS; + } + + vaddr = pmap->kaddr + vmf->address - vmf->vma->vm_start; + pfn = phys_to_pfn_t(virt_to_phys(vaddr), PFN_DEV | PFN_MAP); + + return vmf_insert_mixed(vmf->vma, vmf->address, pfn); +} +static const struct vm_operations_struct pci_p2pdma_vmops = { + .open = pci_p2pdma_vma_open, + .close = pci_p2pdma_vma_close, + .fault = pci_p2pdma_vma_fault, +}; + +/** + * pci_p2pdma_file_open - setup file mapping to store P2PMEM VMAs + * @pdev: the device to allocate memory from + * @vma: the userspace vma to map the memory to + * + * Set f_mapping of the file to the p2pdma inode so that mappings + * are can be torn down on device unbind. + * + * Returns 0 on success, or a negative error code on failure + */ +void pci_p2pdma_file_open(struct pci_dev *pdev, struct file *file) +{ + struct pci_p2pdma *p2pdma; + + rcu_read_lock(); + p2pdma = rcu_dereference(pdev->p2pdma); + if (p2pdma) + file->f_mapping = p2pdma->inode->i_mapping; + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(pci_p2pdma_file_open); + +/** + * pci_mmap_p2pmem - setup an mmap region to be backed with P2PDMA memory + * that was registered with pci_p2pdma_add_resource() + * @pdev: the device to allocate memory from + * @vma: the userspace vma to map the memory to + * + * The file must call pci_p2pdma_mmap_file_open() in its open() operation. 
+ * + * Returns 0 on success, or a negative error code on failure + */ +int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap; + struct pci_p2pdma *p2pdma; + int ret; + + /* prevent private mappings from being established */ + if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) { + pci_info_ratelimited(pdev, + "%s: fail, attempted private mapping\n", + current->comm); + return -EINVAL; + } + + if (vma->vm_pgoff) { + pci_info_ratelimited(pdev, + "%s: fail, attempted mapping with non-zero offset\n", + current->comm); + return -EINVAL; + } + + pmap = pci_p2pdma_map_alloc(pdev, vma->vm_end - vma->vm_start); + if (!pmap) + return -ENOMEM; + + spin_lock_init(&pmap->kaddr_lock); + rcu_read_lock(); + p2pdma = rcu_dereference(pdev->p2pdma); + if (!p2pdma) { + ret = -ENODEV; + goto out; + } + + ret = simple_pin_fs(&pci_p2pdma_fs_type, &pci_p2pdma_fs_mnt, + &pci_p2pdma_fs_cnt); + if (ret) + goto out; + + ihold(p2pdma->inode); + pmap->inode = p2pdma->inode; + rcu_read_unlock(); + + vma->vm_flags |= VM_MIXEDMAP; + vma->vm_private_data = pmap; + vma->vm_ops = &pci_p2pdma_vmops; + + return 0; + +out: + rcu_read_unlock(); + kfree(pmap); + return ret; +} +EXPORT_SYMBOL_GPL(pci_mmap_p2pmem); diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h index 2c07aa6b7665..040d79126463 100644 --- a/include/linux/pci-p2pdma.h +++ b/include/linux/pci-p2pdma.h @@ -34,6 +34,8 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, bool *use_p2pdma); ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, bool use_p2pdma); +void pci_p2pdma_file_open(struct pci_dev *pdev, struct file *file); +int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma); #else /* CONFIG_PCI_P2PDMA */ static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, u64 offset) @@ -90,6 +92,15 @@ static inline ssize_t pci_p2pdma_enable_show(char *page, { return sprintf(page, "none\n"); } +static inline void pci_p2pdma_file_open(struct pci_dev *pdev, + struct file *file) +{ +} +static inline int pci_mmap_p2pmem(struct pci_dev *pdev, + struct vm_area_struct *vma) +{ + return -EOPNOTSUPP; +} #endif /* CONFIG_PCI_P2PDMA */ diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index f724129c0425..59ba2e60dc03 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -95,6 +95,7 @@ #define BPF_FS_MAGIC 0xcafe4a11 #define AAFS_MAGIC 0x5a3c69f0 #define ZONEFS_MAGIC 0x5a4f4653 +#define P2PDMA_MAGIC 0x70327064 /* Since UDF 2.01 is ISO 13346 based... 
*/ #define UDF_SUPER_MAGIC 0x15013346 From patchwork Thu Apr 7 15:47:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 12805410 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F87EC433EF for ; Thu, 7 Apr 2022 15:49:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E01288D0003; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BEA5E6B0075; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9947D8D0002; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 795E98D0001 for ; Thu, 7 Apr 2022 11:47:58 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 5276A6654E for ; Thu, 7 Apr 2022 15:47:48 +0000 (UTC) X-FDA: 79330513416.14.9E1C0EA Received: from ale.deltatee.com (ale.deltatee.com [204.191.154.188]) by imf05.hostedemail.com (Postfix) with ESMTP id CFCC6100002 for ; Thu, 7 Apr 2022 15:47:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:MIME-Version:References:In-Reply-To: Message-Id:Date:Cc:To:From:content-disposition; bh=rMmDd7FK89WBPMzhI41Yh5OWeVuvzuwCzfQzO0Mz/AU=; b=iG+EidunEVskeYVIo6imRxDh47 8oq55Yf7ytuhttopUD3lvf6ThM95I2SsX20Tz5UXUkIHMvEUwkiyLww1Y7lHLgPLamVgzzrDt9fYq fpg8LyY+HXCPuc+m5seNIMP6THmXjEpPrwBk+domuzL5vo/nJyqtYtFUTbuonua5YFS70T17/tsyO Pgfwb6Z8/wqY9TNBk0i3thENlVQAP5CwlZXcgv5Mc/wTHtjxESXLnXBUgGBRbumCU/gpb5+oRv96h HcTc43gYK3lYYuIjoveMt+N0WHXfbSbT3dVPExe0nM5REk4BP4aZAir/OaylNhLOLqVjDfa7vIHRS cLi8EOWQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1ncUMI-002BBf-HJ; Thu, 07 Apr 2022 09:47:43 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.94.2) (envelope-from ) id 1ncUMF-00022U-JJ; Thu, 07 Apr 2022 09:47:39 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Jakowski Andrzej , Minturn Dave B , Jason Ekstrand , Dave Hansen , Xiong Jianxin , Bjorn Helgaas , Ira Weiny , Robin Murphy , Martin Oliveira , Chaitanya Kulkarni , Ralph Campbell , Logan Gunthorpe Date: Thu, 7 Apr 2022 09:47:17 -0600 Message-Id: <20220407154717.7695-22-logang@deltatee.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220407154717.7695-1-logang@deltatee.com> References: <20220407154717.7695-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, 
christian.koenig@amd.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, jason@jlekstrand.net, dave.hansen@linux.intel.com, helgaas@kernel.org, dan.j.williams@intel.com, andrzej.jakowski@intel.com, dave.b.minturn@intel.com, jianxin.xiong@intel.com, ira.weiny@intel.com, robin.murphy@arm.com, martin.oliveira@eideticom.com, ckulkarnilinux@gmail.com, jhubbard@nvidia.com, rcampbell@nvidia.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [PATCH v6 21/21] nvme-pci: allow mmaping the CMB in userspace X-SA-Exim-Version: 4.2.1 (built Sat, 13 Feb 2021 17:57:42 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: CFCC6100002 X-Stat-Signature: n3xqsjocqb3dpm4hjtyfe88qdtrges5h Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=deltatee.com header.s=20200525 header.b=iG+Eidun; dmarc=pass (policy=none) header.from=deltatee.com; spf=pass (imf05.hostedemail.com: domain of gunthorp@deltatee.com designates 204.191.154.188 as permitted sender) smtp.mailfrom=gunthorp@deltatee.com X-Rspam-User: X-HE-Tag: 1649346466-471379 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Allow userspace to obtain CMB memory by mmaping the controller's char device. The mmap call allocates and returns a hunk of CMB memory (the offset is ignored), so userspace does not have control over the address within the CMB. A VMA allocated in this way will only be usable by drivers that set FOLL_PCI_P2PDMA when calling GUP, and inter-device support will be checked the first time the pages are mapped for DMA. Currently this is only supported by O_DIRECT to a PCI NVMe device or through the NVMe passthrough IOCTL.
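To illustrate the end result from userspace, a rough example (the device paths, the 2 MB size and the simple pread/pwrite usage are illustrative only; it assumes a kernel with this series applied and an NVMe controller that exposes a CMB registered for P2PDMA):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;		/* how much CMB to request */

	/* Controller char device: its mmap hands out CMB (P2PDMA) pages. */
	int cfd = open("/dev/nvme0", O_RDWR);
	if (cfd < 0) { perror("open /dev/nvme0"); return 1; }

	/* Must be MAP_SHARED with a zero offset, per pci_mmap_p2pmem(). */
	void *cmb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cfd, 0);
	if (cmb == MAP_FAILED) { perror("mmap CMB"); return 1; }

	/* O_DIRECT I/O on the namespace, using the CMB as the data buffer. */
	int dfd = open("/dev/nvme0n1", O_RDWR | O_DIRECT);
	if (dfd < 0) { perror("open /dev/nvme0n1"); return 1; }

	memset(cmb, 0xaa, len);			/* CPU fills the buffer */

	if (pwrite(dfd, cmb, len, 0) != (ssize_t)len)	/* device DMAs out of the CMB */
		perror("pwrite");
	if (pread(dfd, cmb, len, 0) != (ssize_t)len)	/* device DMAs into the CMB */
		perror("pread");

	munmap(cmb, len);
	close(dfd);
	close(cfd);
	return 0;
}

If the devices involved cannot do P2PDMA between each other, the first attempt to DMA-map the pages fails and the I/O is errored, per the inter-device check mentioned above.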
Signed-off-by: Logan Gunthorpe --- drivers/nvme/host/core.c | 15 +++++++++++++++ drivers/nvme/host/nvme.h | 2 ++ drivers/nvme/host/pci.c | 17 +++++++++++++++++ 3 files changed, 34 insertions(+) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index bbc276dda49f..1fd3372c2c18 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3114,6 +3114,10 @@ static int nvme_dev_open(struct inode *inode, struct file *file) } file->private_data = ctrl; + + if (ctrl->ops->cdev_file_open) + ctrl->ops->cdev_file_open(ctrl, file); + return 0; } @@ -3127,12 +3131,23 @@ static int nvme_dev_release(struct inode *inode, struct file *file) return 0; } +static int nvme_dev_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct nvme_ctrl *ctrl = file->private_data; + + if (!ctrl->ops->mmap_cmb) + return -ENODEV; + + return ctrl->ops->mmap_cmb(ctrl, vma); +} + static const struct file_operations nvme_dev_fops = { .owner = THIS_MODULE, .open = nvme_dev_open, .release = nvme_dev_release, .unlocked_ioctl = nvme_dev_ioctl, .compat_ioctl = compat_ptr_ioctl, + .mmap = nvme_dev_mmap, }; static ssize_t nvme_sysfs_reset(struct device *dev, diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 7d97bfb2a9e2..24fbcd274c64 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -497,6 +497,8 @@ struct nvme_ctrl_ops { void (*delete_ctrl)(struct nvme_ctrl *ctrl); int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl); + void (*cdev_file_open)(struct nvme_ctrl *ctrl, struct file *file); + int (*mmap_cmb)(struct nvme_ctrl *ctrl, struct vm_area_struct *vma); }; /* diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 07412116d4d1..5946244e0295 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -2965,6 +2965,21 @@ static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl) return dma_pci_p2pdma_supported(dev->dev); } +static void nvme_pci_cdev_file_open(struct nvme_ctrl *ctrl, struct file *file) +{ + struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev); + + pci_p2pdma_file_open(pdev, file); +} + +static int nvme_pci_mmap_cmb(struct nvme_ctrl *ctrl, + struct vm_area_struct *vma) +{ + struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev); + + return pci_mmap_p2pmem(pdev, vma); +} + static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { .name = "pcie", .module = THIS_MODULE, @@ -2976,6 +2991,8 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { .submit_async_event = nvme_pci_submit_async_event, .get_address = nvme_pci_get_address, .supports_pci_p2pdma = nvme_pci_supports_pci_p2pdma, + .cdev_file_open = nvme_pci_cdev_file_open, + .mmap_cmb = nvme_pci_mmap_cmb, }; static int nvme_dev_map(struct nvme_dev *dev)