From patchwork Sun Nov 13 16:35:29 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041585
From: Christoph Hellwig
To: Dennis Dalessandro, Mauro Carvalho Chehab, Alexandra Winter,
    Wenjia Zhang, Marek Szyprowski, Jaroslav Kysela, Takashi Iwai,
    Russell King, Robin Murphy
Cc: linux-arm-kernel@lists.infradead.org, linux-rdma@vger.kernel.org,
    iommu@lists.linux.dev, linux-media@vger.kernel.org,
    netdev@vger.kernel.org, linux-s390@vger.kernel.org,
    alsa-devel@alsa-project.org
Subject: [PATCH 1/7] media: videobuf-dma-contig: use dma_mmap_coherent
Date: Sun, 13 Nov 2022 17:35:29 +0100
Message-Id: <20221113163535.884299-2-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>
X-Mailing-List: netdev@vger.kernel.org

dma_alloc_coherent does not return a physical address, but a DMA
address, which might be remapped or have an offset.  Passing the DMA
address to vm_iomap_memory is thus broken.

Use the proper dma_mmap_coherent helper instead, and stop passing
__GFP_COMP to dma_alloc_coherent, as the memory management inside the
DMA allocator is hidden from the callers and does not require it.

With this the gfp_t argument to __videobuf_dc_alloc can be removed and
hard coded to GFP_KERNEL.
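For reference, and purely as an illustration (the structure and function
names below are hypothetical, not videobuf code): the allocate-then-mmap
pattern the patch switches to looks roughly like this.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Illustrative only; "foo_buffer" is not a real videobuf type. */
struct foo_buffer {
	struct device	*dev;
	size_t		size;
	void		*vaddr;	/* kernel address from dma_alloc_coherent */
	dma_addr_t	dma;	/* DMA address, not a physical address */
};

static int foo_buffer_alloc(struct foo_buffer *buf, size_t size)
{
	buf->size = PAGE_ALIGN(size);
	/* plain GFP_KERNEL: no __GFP_COMP, the DMA layer manages the pages */
	buf->vaddr = dma_alloc_coherent(buf->dev, buf->size, &buf->dma,
					GFP_KERNEL);
	return buf->vaddr ? 0 : -ENOMEM;
}

static int foo_buffer_mmap(struct foo_buffer *buf, struct vm_area_struct *vma)
{
	/*
	 * dma_mmap_coherent knows how the buffer was really allocated
	 * (remapped, offset, ...), while vm_iomap_memory expects a
	 * physical address and so must not be handed a dma_addr_t.
	 */
	return dma_mmap_coherent(buf->dev, vma, buf->vaddr, buf->dma,
				 buf->size);
}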
Fixes: a8f3c203e19b ("[media] videobuf-dma-contig: add cache support")
Signed-off-by: Christoph Hellwig
Acked-by: Hans Verkuil
---
 drivers/media/v4l2-core/videobuf-dma-contig.c | 22 +++++++------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/media/v4l2-core/videobuf-dma-contig.c b/drivers/media/v4l2-core/videobuf-dma-contig.c
index 52312ce2ba056..f2c4393595574 100644
--- a/drivers/media/v4l2-core/videobuf-dma-contig.c
+++ b/drivers/media/v4l2-core/videobuf-dma-contig.c
@@ -36,12 +36,11 @@ struct videobuf_dma_contig_memory {
 
 static int __videobuf_dc_alloc(struct device *dev,
 			       struct videobuf_dma_contig_memory *mem,
-			       unsigned long size, gfp_t flags)
+			       unsigned long size)
 {
 	mem->size = size;
-	mem->vaddr = dma_alloc_coherent(dev, mem->size,
-					&mem->dma_handle, flags);
-
+	mem->vaddr = dma_alloc_coherent(dev, mem->size, &mem->dma_handle,
+					GFP_KERNEL);
 	if (!mem->vaddr) {
 		dev_err(dev, "memory alloc size %ld failed\n", mem->size);
 		return -ENOMEM;
@@ -258,8 +257,7 @@ static int __videobuf_iolock(struct videobuf_queue *q,
 			return videobuf_dma_contig_user_get(mem, vb);
 
 		/* allocate memory for the read() method */
-		if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size),
-					GFP_KERNEL))
+		if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size)))
 			return -ENOMEM;
 		break;
 	case V4L2_MEMORY_OVERLAY:
@@ -295,22 +293,18 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
 	BUG_ON(!mem);
 	MAGIC_CHECK(mem->magic, MAGIC_DC_MEM);
 
-	if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize),
-				GFP_KERNEL | __GFP_COMP))
+	if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize)))
 		goto error;
 
-	/* Try to remap memory */
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-
 	/* the "vm_pgoff" is just used in v4l2 to find the
 	 * corresponding buffer data structure which is allocated
 	 * earlier and it does not mean the offset from the physical
 	 * buffer start address as usual. So set it to 0 to pass
-	 * the sanity check in vm_iomap_memory().
+	 * the sanity check in dma_mmap_coherent().
 	 */
 	vma->vm_pgoff = 0;
-
-	retval = vm_iomap_memory(vma, mem->dma_handle, mem->size);
+	retval = dma_mmap_coherent(q->dev, vma, mem->vaddr, mem->dma_handle,
+				   mem->size);
 	if (retval) {
 		dev_err(q->dev, "mmap: remap failed with error %d. ",
 			retval);

From patchwork Sun Nov 13 16:35:30 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041586
From: Christoph Hellwig
Subject: [PATCH 2/7] RDMA/hfi1: don't pass bogus GFP_ flags to dma_alloc_coherent
Date: Sun, 13 Nov 2022 17:35:30 +0100
Message-Id: <20221113163535.884299-3-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

dma_alloc_coherent is an opaque allocator that only uses the GFP_ flags
for allocation context control.  Don't pass GFP_USER which doesn't make
sense for a kernel DMA allocation or __GFP_COMP which makes no sense
for an allocation that can't in any way be converted to a page pointer.
Signed-off-by: Christoph Hellwig
Acked-by: Jason Gunthorpe
Acked-by: Dean Luick
Tested-by: Dean Luick
---
 drivers/infiniband/hw/hfi1/init.c | 21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index 436372b314312..24c0f0d257fc9 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -1761,17 +1761,11 @@ int hfi1_create_rcvhdrq(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
 	unsigned amt;
 
 	if (!rcd->rcvhdrq) {
-		gfp_t gfp_flags;
-
 		amt = rcvhdrq_size(rcd);
 
-		if (rcd->ctxt < dd->first_dyn_alloc_ctxt || rcd->is_vnic)
-			gfp_flags = GFP_KERNEL;
-		else
-			gfp_flags = GFP_USER;
 		rcd->rcvhdrq = dma_alloc_coherent(&dd->pcidev->dev, amt,
 						  &rcd->rcvhdrq_dma,
-						  gfp_flags | __GFP_COMP);
+						  GFP_KERNEL);
 
 		if (!rcd->rcvhdrq) {
 			dd_dev_err(dd,
@@ -1785,7 +1779,7 @@ int hfi1_create_rcvhdrq(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
 			rcd->rcvhdrtail_kvaddr = dma_alloc_coherent(&dd->pcidev->dev,
 								    PAGE_SIZE,
 								    &rcd->rcvhdrqtailaddr_dma,
-								    gfp_flags);
+								    GFP_KERNEL);
 			if (!rcd->rcvhdrtail_kvaddr)
 				goto bail_free;
 		}
@@ -1821,19 +1815,10 @@ int hfi1_setup_eagerbufs(struct hfi1_ctxtdata *rcd)
 {
 	struct hfi1_devdata *dd = rcd->dd;
 	u32 max_entries, egrtop, alloced_bytes = 0;
-	gfp_t gfp_flags;
 	u16 order, idx = 0;
 	int ret = 0;
 	u16 round_mtu = roundup_pow_of_two(hfi1_max_mtu);
 
-	/*
-	 * GFP_USER, but without GFP_FS, so buffer cache can be
-	 * coalesced (we hope); otherwise, even at order 4,
-	 * heavy filesystem activity makes these fail, and we can
-	 * use compound pages.
-	 */
-	gfp_flags = __GFP_RECLAIM | __GFP_IO | __GFP_COMP;
-
 	/*
	 * The minimum size of the eager buffers is a groups of MTU-sized
	 * buffers.
@@ -1864,7 +1849,7 @@ int hfi1_setup_eagerbufs(struct hfi1_ctxtdata *rcd)
 			dma_alloc_coherent(&dd->pcidev->dev,
 					   rcd->egrbufs.rcvtid_size,
 					   &rcd->egrbufs.buffers[idx].dma,
-					   gfp_flags);
+					   GFP_KERNEL);
 		if (rcd->egrbufs.buffers[idx].addr) {
 			rcd->egrbufs.buffers[idx].len =
 				rcd->egrbufs.rcvtid_size;

From patchwork Sun Nov 13 16:35:31 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041587
From: Christoph Hellwig
Subject: [PATCH 3/7] RDMA/qib: don't pass bogus GFP_ flags to dma_alloc_coherent
Date: Sun, 13 Nov 2022 17:35:31 +0100
Message-Id: <20221113163535.884299-4-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

dma_alloc_coherent is an opaque allocator that only uses the GFP_ flags
for allocation context control.  Don't pass GFP_USER which doesn't make
sense for a kernel DMA allocation or __GFP_COMP which makes no sense
for an allocation that can't in any way be converted to a page pointer.

Signed-off-by: Christoph Hellwig
Acked-by: Jason Gunthorpe
---
 drivers/infiniband/hw/qib/qib_iba6120.c |  2 +-
 drivers/infiniband/hw/qib/qib_init.c    | 21 ++++-----------------
 2 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_iba6120.c b/drivers/infiniband/hw/qib/qib_iba6120.c
index aea571943768b..07386117f21ad 100644
--- a/drivers/infiniband/hw/qib/qib_iba6120.c
+++ b/drivers/infiniband/hw/qib/qib_iba6120.c
@@ -2075,7 +2075,7 @@ static void alloc_dummy_hdrq(struct qib_devdata *dd)
 	dd->cspec->dummy_hdrq = dma_alloc_coherent(&dd->pcidev->dev,
 					dd->rcd[0]->rcvhdrq_size,
 					&dd->cspec->dummy_hdrq_phys,
-					GFP_ATOMIC | __GFP_COMP);
+					GFP_ATOMIC);
 	if (!dd->cspec->dummy_hdrq) {
 		qib_devinfo(dd->pcidev, "Couldn't allocate dummy hdrq\n");
 		/* fallback to just 0'ing */
diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c
index 45211008449fb..33667becd52b0 100644
--- a/drivers/infiniband/hw/qib/qib_init.c
+++ b/drivers/infiniband/hw/qib/qib_init.c
@@ -1546,18 +1546,14 @@ int qib_create_rcvhdrq(struct qib_devdata *dd, struct qib_ctxtdata *rcd)
 
 	if (!rcd->rcvhdrq) {
 		dma_addr_t phys_hdrqtail;
-		gfp_t gfp_flags;
 
 		amt = ALIGN(dd->rcvhdrcnt * dd->rcvhdrentsize *
 			    sizeof(u32), PAGE_SIZE);
-		gfp_flags = (rcd->ctxt >= dd->first_user_ctxt) ?
-			GFP_USER : GFP_KERNEL;
 
 		old_node_id = dev_to_node(&dd->pcidev->dev);
 		set_dev_node(&dd->pcidev->dev, rcd->node_id);
-		rcd->rcvhdrq = dma_alloc_coherent(
-			&dd->pcidev->dev, amt, &rcd->rcvhdrq_phys,
-			gfp_flags | __GFP_COMP);
+		rcd->rcvhdrq = dma_alloc_coherent(&dd->pcidev->dev, amt,
+				&rcd->rcvhdrq_phys, GFP_KERNEL);
 		set_dev_node(&dd->pcidev->dev, old_node_id);
 
 		if (!rcd->rcvhdrq) {
@@ -1577,7 +1573,7 @@ int qib_create_rcvhdrq(struct qib_devdata *dd, struct qib_ctxtdata *rcd)
 			set_dev_node(&dd->pcidev->dev, rcd->node_id);
 			rcd->rcvhdrtail_kvaddr = dma_alloc_coherent(
 				&dd->pcidev->dev, PAGE_SIZE, &phys_hdrqtail,
-				gfp_flags);
+				GFP_KERNEL);
 			set_dev_node(&dd->pcidev->dev, old_node_id);
 			if (!rcd->rcvhdrtail_kvaddr)
 				goto bail_free;
@@ -1621,17 +1617,8 @@ int qib_setup_eagerbufs(struct qib_ctxtdata *rcd)
 	struct qib_devdata *dd = rcd->dd;
 	unsigned e, egrcnt, egrperchunk, chunk, egrsize, egroff;
 	size_t size;
-	gfp_t gfp_flags;
 	int old_node_id;
 
-	/*
-	 * GFP_USER, but without GFP_FS, so buffer cache can be
-	 * coalesced (we hope); otherwise, even at order 4,
-	 * heavy filesystem activity makes these fail, and we can
-	 * use compound pages.
-	 */
-	gfp_flags = __GFP_RECLAIM | __GFP_IO | __GFP_COMP;
-
 	egrcnt = rcd->rcvegrcnt;
 	egroff = rcd->rcvegr_tid_base;
 	egrsize = dd->rcvegrbufsize;
@@ -1663,7 +1650,7 @@ int qib_setup_eagerbufs(struct qib_ctxtdata *rcd)
 		rcd->rcvegrbuf[e] =
 			dma_alloc_coherent(&dd->pcidev->dev, size,
 					   &rcd->rcvegrbuf_phys[e],
-					   gfp_flags);
+					   GFP_KERNEL);
 		set_dev_node(&dd->pcidev->dev, old_node_id);
 		if (!rcd->rcvegrbuf[e])
 			goto bail_rcvegrbuf_phys;
From patchwork Sun Nov 13 16:35:32 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041588
X-Patchwork-Delegate: kuba@kernel.org
From: Christoph Hellwig
Subject: [PATCH 4/7] cnic: don't pass bogus GFP_ flags to dma_alloc_coherent
Date: Sun, 13 Nov 2022 17:35:32 +0100
Message-Id: <20221113163535.884299-5-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

dma_alloc_coherent is an opaque allocator that only uses the GFP_ flags
for allocation context control.  Don't pass __GFP_COMP which makes no
sense for an allocation that can't in any way be converted to a page
pointer.

Signed-off-by: Christoph Hellwig
---
 drivers/net/ethernet/broadcom/cnic.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index 2198e35d9e181..ad74b488a80ab 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -1027,16 +1027,14 @@ static int __cnic_alloc_uio_rings(struct cnic_uio_dev *udev, int pages)
 	udev->l2_ring_size = pages * CNIC_PAGE_SIZE;
 	udev->l2_ring = dma_alloc_coherent(&udev->pdev->dev, udev->l2_ring_size,
-					   &udev->l2_ring_map,
-					   GFP_KERNEL | __GFP_COMP);
+					   &udev->l2_ring_map, GFP_KERNEL);
 	if (!udev->l2_ring)
 		return -ENOMEM;
 
 	udev->l2_buf_size = (cp->l2_rx_ring_size + 1) * cp->l2_single_buf_size;
 	udev->l2_buf_size = CNIC_PAGE_ALIGN(udev->l2_buf_size);
 	udev->l2_buf = dma_alloc_coherent(&udev->pdev->dev, udev->l2_buf_size,
-					  &udev->l2_buf_map,
-					  GFP_KERNEL | __GFP_COMP);
+					  &udev->l2_buf_map, GFP_KERNEL);
 	if (!udev->l2_buf) {
 		__cnic_free_uio_rings(udev);
 		return -ENOMEM;

From patchwork Sun Nov 13 16:35:33 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041589
From: Christoph Hellwig
Subject: [PATCH 5/7] s390/ism: don't pass bogus GFP_ flags to dma_alloc_coherent
Date: Sun, 13 Nov 2022 17:35:33 +0100
Message-Id: <20221113163535.884299-6-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

dma_alloc_coherent is an opaque allocator that only uses the GFP_ flags
for allocation context control.  Don't pass __GFP_COMP which makes no
sense for an allocation that can't in any way be converted to a page
pointer.

Signed-off-by: Christoph Hellwig
Acked-by: Wenjia Zhang
---
 drivers/s390/net/ism_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
index d34bb6ec1490f..dfd401d9e3623 100644
--- a/drivers/s390/net/ism_drv.c
+++ b/drivers/s390/net/ism_drv.c
@@ -243,7 +243,8 @@ static int ism_alloc_dmb(struct ism_dev *ism, struct smcd_dmb *dmb)
 
 	dmb->cpu_addr = dma_alloc_coherent(&ism->pdev->dev, dmb->dmb_len,
 					   &dmb->dma_addr,
-					   GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC | __GFP_COMP | __GFP_NORETRY);
+					   GFP_KERNEL | __GFP_NOWARN |
+					   __GFP_NOMEMALLOC | __GFP_NORETRY);
 	if (!dmb->cpu_addr)
 		clear_bit(dmb->sba_idx, ism->sba_bitmap);
From patchwork Sun Nov 13 16:35:34 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041590
From: Christoph Hellwig
Subject: [PATCH 6/7] ALSA: memalloc: don't pass bogus GFP_ flags to dma_alloc_*
Date: Sun, 13 Nov 2022 17:35:34 +0100
Message-Id: <20221113163535.884299-7-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

dma_alloc_coherent/dma_alloc_wc is an opaque allocator that only uses
the GFP_ flags for allocation context control.  Don't pass __GFP_COMP,
which makes no sense for an allocation that can't in any way be
converted to a page pointer.

Note that for dma_alloc_noncoherent and dma_alloc_noncontiguous in
combination with the DMA mmap helpers __GFP_COMP looks sketchy as well,
so I would suggest dropping it there as well after a careful audit.
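For context, a rough sketch of the dma_alloc_noncontiguous path that the
previous paragraph refers to (the wrapper name and error handling are
illustrative, not the actual memalloc.c code):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Illustrative sketch only.  dma_alloc_noncontiguous returns an sg_table
 * describing the backing pages, and dma_vmap_noncontiguous provides a
 * contiguous kernel mapping of them.  This is the path where the patch
 * keeps passing __GFP_COMP explicitly, pending the audit mentioned above.
 */
static void *foo_alloc_noncontig(struct device *dev, size_t size,
				 enum dma_data_direction dir,
				 struct sg_table **ret_sgt)
{
	struct sg_table *sgt;
	void *vaddr;

	sgt = dma_alloc_noncontiguous(dev, size, dir,
				      GFP_KERNEL | __GFP_COMP, 0);
	if (!sgt)
		return NULL;

	vaddr = dma_vmap_noncontiguous(dev, size, sgt);
	if (!vaddr) {
		dma_free_noncontiguous(dev, size, sgt, dir);
		return NULL;
	}

	*ret_sgt = sgt;
	return vaddr;
}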
Signed-off-by: Christoph Hellwig
---
 sound/core/memalloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
index 03cffe7713667..fe03cf796e8bb 100644
--- a/sound/core/memalloc.c
+++ b/sound/core/memalloc.c
@@ -20,7 +20,6 @@
 
 #define DEFAULT_GFP \
 	(GFP_KERNEL | \
-	 __GFP_COMP |    /* compound page lets parts be mapped */ \
 	 __GFP_RETRY_MAYFAIL | /* don't trigger OOM-killer */ \
 	 __GFP_NOWARN)   /* no stack trace print - this call is non-critical */
 
@@ -542,7 +541,7 @@ static void *snd_dma_noncontig_alloc(struct snd_dma_buffer *dmab, size_t size)
 	void *p;
 
 	sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir,
-				      DEFAULT_GFP, 0);
+				      DEFAULT_GFP | __GFP_COMP, 0);
 	if (!sgt) {
 #ifdef CONFIG_SND_DMA_SGBUF
 		if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG)
@@ -810,7 +809,7 @@ static void *snd_dma_noncoherent_alloc(struct snd_dma_buffer *dmab, size_t size)
 	void *p;
 
 	p = dma_alloc_noncoherent(dmab->dev.dev, size, &dmab->addr,
-				  dmab->dev.dir, DEFAULT_GFP);
+				  dmab->dev.dir, DEFAULT_GFP | __GFP_COMP);
 	if (p)
 		dmab->dev.need_sync = dma_need_sync(dmab->dev.dev, dmab->addr);
 	return p;

From patchwork Sun Nov 13 16:35:35 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13041603
From: Christoph Hellwig
Subject: [PATCH 7/7] dma-mapping: reject __GFP_COMP in dma_alloc_attrs
Date: Sun, 13 Nov 2022 17:35:35 +0100
Message-Id: <20221113163535.884299-8-hch@lst.de>
In-Reply-To: <20221113163535.884299-1-hch@lst.de>
References: <20221113163535.884299-1-hch@lst.de>

DMA allocations can never be turned back into a page pointer, so
requesting compound pages doesn't make sense and it can't even be
supported at all by various backends.

Reject __GFP_COMP with a warning in dma_alloc_attrs, and stop clearing
the flag in the arm dma ops and dma-iommu.

Signed-off-by: Christoph Hellwig
Acked-by: Marek Szyprowski
---
 arch/arm/mm/dma-mapping.c | 17 -----------------
 drivers/iommu/dma-iommu.c |  3 ---
 kernel/dma/mapping.c      |  8 ++++++++
 3 files changed, 8 insertions(+), 20 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index d7909091cf977..c135f6e37a00c 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -564,14 +564,6 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 	if (mask < 0xffffffffULL)
 		gfp |= GFP_DMA;
 
-	/*
-	 * Following is a work-around (a.k.a. hack) to prevent pages
-	 * with __GFP_COMP being passed to split_page() which cannot
-	 * handle them. The real problem is that this flag probably
-	 * should be 0 on ARM as it is not supported on this
-	 * platform; see CONFIG_HUGETLBFS.
-	 */
-	gfp &= ~(__GFP_COMP);
 	args.gfp = gfp;
 
 	*handle = DMA_MAPPING_ERROR;
@@ -1093,15 +1085,6 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 		return __iommu_alloc_simple(dev, size, gfp, handle,
 					    coherent_flag, attrs);
 
-	/*
-	 * Following is a work-around (a.k.a. hack) to prevent pages
-	 * with __GFP_COMP being passed to split_page() which cannot
-	 * handle them. The real problem is that this flag probably
-	 * should be 0 on ARM as it is not supported on this
-	 * platform; see CONFIG_HUGETLBFS.
-	 */
-	gfp &= ~(__GFP_COMP);
-
 	pages = __iommu_alloc_buffer(dev, size, gfp, attrs, coherent_flag);
 	if (!pages)
 		return NULL;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9297b741f5e80..f798c44e09033 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -744,9 +744,6 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	/* IOMMU can map any pages, so himem can also be used here */
 	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
 
-	/* It makes no sense to muck about with huge pages */
-	gfp &= ~__GFP_COMP;
-
 	while (count) {
 		struct page *page = NULL;
 		unsigned int order_size;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 33437d6206445..c026a5a5e0466 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -498,6 +498,14 @@ void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
 
 	WARN_ON_ONCE(!dev->coherent_dma_mask);
 
+	/*
+	 * DMA allocations can never be turned back into a page pointer, so
+	 * requesting compound pages doesn't make sense (and can't even be
+	 * supported at all by various backends).
+	 */
+	if (WARN_ON_ONCE(flag & __GFP_COMP))
+		return NULL;
+
 	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
 		return cpu_addr;
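Purely as an illustration of the caller-visible effect of the new check
(hypothetical function, not part of the patch):

#include <linux/dma-mapping.h>

static int foo_alloc_example(struct device *dev)
{
	dma_addr_t dma;
	void *buf;

	/*
	 * Rejected with this patch: dma_alloc_attrs (which
	 * dma_alloc_coherent wraps) emits a one-time warning and returns
	 * NULL instead of leaving each backend to strip or mishandle
	 * __GFP_COMP on its own.
	 */
	buf = dma_alloc_coherent(dev, PAGE_SIZE, &dma,
				 GFP_KERNEL | __GFP_COMP);
	WARN_ON(buf);	/* no longer expected to succeed */

	/* The supported form: only allocation-context flags. */
	buf = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	dma_free_coherent(dev, PAGE_SIZE, buf, dma);
	return 0;
}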