From patchwork Fri Jun 14 13:47:25 2019
From: Christoph Hellwig <hch@lst.de>
X-Patchwork-Id: 10995449
To: Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
    Daniel Vetter, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
    Ian Abbott, H Hartley Sweeten
Date: Fri, 14 Jun 2019 15:47:25 +0200
Message-Id: <20190614134726.3827-16-hch@lst.de>
In-Reply-To: <20190614134726.3827-1-hch@lst.de>
References: <20190614134726.3827-1-hch@lst.de>
Subject: [Intel-gfx] [PATCH 15/16] dma-mapping: clear __GFP_COMP in dma_alloc_attrs
Cc: devel@driverdev.osuosl.org, linux-s390@vger.kernel.org,
    Intel Linux Wireless, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    linux-wireless@vger.kernel.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
    iommu@lists.linux-foundation.org, "moderated list:ARM PORT",
    linux-media@vger.kernel.org

Lift the code to clear __GFP_COMP from arm into the common DMA
allocator path. For one this fixes the various other paths that call
alloc_pages_exact or split_page in case a bogus driver passes the flag,
and it also prepares for doing exact allocation in the generic
dma-direct allocator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arm/mm/dma-mapping.c | 17 -----------------
 kernel/dma/mapping.c      |  9 +++++++++
 2 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0a75058c11f3..86135feb2c05 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -759,14 +759,6 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 	if (mask < 0xffffffffULL)
 		gfp |= GFP_DMA;
 
-	/*
-	 * Following is a work-around (a.k.a. hack) to prevent pages
-	 * with __GFP_COMP being passed to split_page() which cannot
-	 * handle them. The real problem is that this flag probably
-	 * should be 0 on ARM as it is not supported on this
-	 * platform; see CONFIG_HUGETLBFS.
-	 */
-	gfp &= ~(__GFP_COMP);
 	args.gfp = gfp;
 
 	*handle = DMA_MAPPING_ERROR;
@@ -1527,15 +1519,6 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
 		return __iommu_alloc_simple(dev, size, gfp, handle,
 					    coherent_flag, attrs);
 
-	/*
-	 * Following is a work-around (a.k.a. hack) to prevent pages
-	 * with __GFP_COMP being passed to split_page() which cannot
-	 * handle them. The real problem is that this flag probably
-	 * should be 0 on ARM as it is not supported on this
-	 * platform; see CONFIG_HUGETLBFS.
-	 */
-	gfp &= ~(__GFP_COMP);
-
 	pages = __iommu_alloc_buffer(dev, size, gfp, attrs, coherent_flag);
 	if (!pages)
 		return NULL;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index f7afdadb6770..4b618e1abbc1 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -252,6 +252,15 @@ void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	/* let the implementation decide on the zone to allocate from: */
 	flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
 
+	/*
+	 * __GFP_COMP interacts badly with splitting up a larger order
+	 * allocation. But as our allocations might not even come from the
+	 * page allocator, the callers can't rely on the fact that they
+	 * even get pages, never mind which kind.
+	 */
+	if (WARN_ON_ONCE(flag & __GFP_COMP))
+		flag &= ~__GFP_COMP;
+
 	if (dma_is_direct(ops))
 		cpu_addr = dma_direct_alloc(dev, size, dma_handle, flag, attrs);
 	else if (ops->alloc)
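
For illustration, an editor's sketch (not part of the patch) of the
buggy call pattern the new WARN_ON_ONCE() catches. The driver and its
foo_alloc_ring() helper are hypothetical; only dma_alloc_attrs() and
the gfp flags are real kernel interfaces:

	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>

	/*
	 * Hypothetical driver helper. Passing __GFP_COMP to the DMA API
	 * was always wrong: the buffer may not come from the page
	 * allocator at all, and arch code that calls split_page() cannot
	 * handle compound pages. With this patch dma_alloc_attrs() warns
	 * once and strips the flag before allocating, instead of handing
	 * a compound page to code that cannot deal with it.
	 */
	static void *foo_alloc_ring(struct device *dev, size_t size,
				    dma_addr_t *dma)
	{
		return dma_alloc_attrs(dev, size, dma,
				       GFP_KERNEL | __GFP_COMP, /* bogus */
				       0);
	}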