From patchwork Fri Jan 28 00:26:11 2022
X-Patchwork-Submitter: Logan Gunthorpe
X-Patchwork-Id: 12727645
From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
    Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
    Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
    Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny, Robin Murphy,
    Martin Oliveira, Chaitanya Kulkarni, Ralph Campbell, Logan Gunthorpe
Date: Thu, 27 Jan 2022 17:26:11 -0700
Message-Id: <20220128002614.6136-22-logang@deltatee.com>
In-Reply-To: <20220128002614.6136-1-logang@deltatee.com>
References: <20220128002614.6136-1-logang@deltatee.com>
Subject: [PATCH v5 21/24] block: set FOLL_PCI_P2PDMA in bio_map_user_iov()
When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
passed from userspace and enables NVMe passthru requests to use
P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/blk-map.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 4526adde0156..7508448e290c 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -234,6 +234,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		gfp_t gfp_mask)
 {
 	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
+	unsigned int flags = 0;
 	struct bio *bio;
 	int ret;
 	int j;
@@ -246,13 +247,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		return -ENOMEM;
 	bio->bi_opf |= req_op(rq);
 
+	if (blk_queue_pci_p2pdma(rq->q))
+		flags |= FOLL_PCI_P2PDMA;
+
 	while (iov_iter_count(iter)) {
 		struct page **pages;
 		ssize_t bytes;
 		size_t offs, added = 0;
 		int npages;
 
-		bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs);
+		bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX,
+						       &offs, flags);
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
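
For readers less familiar with the FOLL flag plumbing, below is a minimal,
userspace-compilable sketch of the pattern this patch applies: compute a
flags word once from a queue capability, then hand it to the page-pinning
helper on every loop iteration. Every identifier in the sketch
(queue_supports_p2pdma(), MY_FOLL_PCI_P2PDMA, pin_pages_flags()) is a
hypothetical stand-in for blk_queue_pci_p2pdma(), FOLL_PCI_P2PDMA and
iov_iter_get_pages_alloc_flags(); it is an illustration of the shape of the
change, not kernel code.

/*
 * Minimal sketch of the flag-gating pattern used in the patch above.
 * Every identifier is a hypothetical stand-in; the real code uses
 * blk_queue_pci_p2pdma(), FOLL_PCI_P2PDMA and
 * iov_iter_get_pages_alloc_flags().
 */
#include <stdbool.h>
#include <stdio.h>

#define MY_FOLL_PCI_P2PDMA (1u << 0)	/* stand-in for FOLL_PCI_P2PDMA */

/* Stand-in for blk_queue_pci_p2pdma(rq->q): does this queue allow P2PDMA? */
static bool queue_supports_p2pdma(void)
{
	return true;
}

/* Stand-in for iov_iter_get_pages_alloc_flags(): just report the flags. */
static int pin_pages_flags(unsigned int flags)
{
	printf("pinning user pages with flags 0x%x\n", flags);
	return 0;
}

int main(void)
{
	unsigned int flags = 0;

	/*
	 * Same shape as the patch: opt in to P2PDMA pages only when the
	 * queue advertises support, then pass the flags to every pinning
	 * call in the mapping loop.
	 */
	if (queue_supports_p2pdma())
		flags |= MY_FOLL_PCI_P2PDMA;

	for (int i = 0; i < 3; i++) {	/* stands in for the iov_iter loop */
		if (pin_pages_flags(flags) < 0)
			return 1;
	}

	return 0;
}

Computing the flags outside the iteration loop mirrors the patch: the
P2PDMA policy is a property of the request queue, so it only needs to be
evaluated once per bio rather than once per iovec segment.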