From patchwork Mon Mar 28 06:32:12 2016
X-Patchwork-Submitter: Yong Wu (吴勇)
X-Patchwork-Id: 8678361
From: Yong Wu
To: Joerg Roedel, Catalin Marinas, Will Deacon
Cc: srv_heupstream@mediatek.com, Arnd Bergmann, Douglas Anderson,
 linux-kernel@vger.kernel.org, Tomasz Figa, iommu@lists.linux-foundation.org,
 Daniel Kurtz, Yong Wu, Matthias Brugger, linux-mediatek@lists.infradead.org,
 Marek Szyprowski, Robin Murphy, linux-arm-kernel@lists.infradead.org,
 Lucas Stach
Subject: [PATCH v2 2/2] arm64/dma-mapping: Add DMA_ATTR_ALLOC_SINGLE_PAGES support
Date: Mon, 28 Mar 2016 14:32:12 +0800
Message-ID: <1459146732-15620-2-git-send-email-yong.wu@mediatek.com>
In-Reply-To: <1459146732-15620-1-git-send-email-yong.wu@mediatek.com>
References: <1459146732-15620-1-git-send-email-yong.wu@mediatek.com>

Sometimes it is not worthwhile for the IOMMU to allocate big chunks.
This patch enables DMA_ATTR_ALLOC_SINGLE_PAGES, which lets a caller ask
the IOMMU DMA layer to avoid allocating big chunks when it allocates a
buffer.

For more information about this attribute, please see Doug's commit
df05c6f6e0bb ("ARM: 8506/1: common: DMA-mapping: add
DMA_ATTR_ALLOC_SINGLE_PAGES attribute").
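As an illustration only, a caller might request this behaviour roughly
as follows. This is a minimal sketch against the struct dma_attrs API
of this era; the function name, device pointer, and buffer size are
placeholders, not part of this patch:

#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/*
 * Hypothetical caller: ask the DMA layer to back the buffer with
 * minimum-order pages rather than trying large chunks first.
 */
static void *my_alloc_buffer(struct device *dev, dma_addr_t *dma_handle)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, &attrs);
        return dma_alloc_attrs(dev, SZ_4M, dma_handle, GFP_KERNEL, &attrs);
}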
Cc: Robin Murphy
Suggested-by: Douglas Anderson
Reviewed-by: Douglas Anderson
Signed-off-by: Yong Wu
---
Our video driver may use this soon.

 arch/arm64/mm/dma-mapping.c |  4 ++--
 drivers/iommu/dma-iommu.c   | 12 +++++++++---
 include/linux/dma-iommu.h   |  4 ++--
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index a6e757c..5b104fe 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
                struct page **pages;
                pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
 
-               pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
-                                       flush_page);
+               pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, attrs,
+                                       handle, flush_page);
                if (!pages)
                        return NULL;
 
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 75ce71e..c77ef66 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -191,6 +191,7 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 }
 
 static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
+                                            struct dma_attrs *attrs,
                                             unsigned long pgsize_bitmap)
 {
        struct page **pages;
@@ -205,6 +206,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
        if (!pages)
                return NULL;
 
+       /* Go straight to min_order if the caller needs SINGLE_PAGES */
+       if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
+               order = min_order;
+
        /* IOMMU can map any pages, so himem can also be used here */
        gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
 
@@ -271,6 +276,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * @size: Size of buffer in bytes
  * @gfp: Allocation flags
  * @prot: IOMMU mapping flags
+ * @attrs: DMA attributes flags
  * @handle: Out argument for allocated DMA handle
  * @flush_page: Arch callback which must ensure PAGE_SIZE bytes from the
  *             given VA/PA are visible to the given non-coherent device.
@@ -281,8 +287,8 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * Return: Array of struct page pointers describing the buffer,
  *        or NULL on failure.
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-               gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+               int prot, struct dma_attrs *attrs, dma_addr_t *handle,
                void (*flush_page)(struct device *, const void *, phys_addr_t))
 {
        struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
@@ -295,7 +301,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size,
 
        *handle = DMA_ERROR_CODE;
 
-       pages = __iommu_dma_alloc_pages(count, gfp,
+       pages = __iommu_dma_alloc_pages(count, gfp, attrs,
                                        domain->ops->pgsize_bitmap);
        if (!pages)
                return NULL;
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index fc48103..08d9603 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -38,8 +38,8 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent);
  * These implement the bulk of the relevant DMA mapping callbacks, but require
  * the arch code to take care of attributes and cache maintenance
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-               gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+               int prot, struct dma_attrs *attrs, dma_addr_t *handle,
                void (*flush_page)(struct device *, const void *, phys_addr_t));
 void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
                dma_addr_t *handle);
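
For reference, here is a simplified, standalone sketch of the
allocation-order logic this patch changes. It is not the kernel code
itself, only the shape of the fallback loop in
__iommu_dma_alloc_pages() with the new short-circuit; alloc_chunk() is
a hypothetical stand-in for the page allocator:

#include <stdbool.h>
#include <stddef.h>

/* Stand-in for alloc_pages(); returns NULL on failure. */
extern void *alloc_chunk(unsigned int order);

static void *alloc_one_chunk(unsigned int max_order, unsigned int min_order,
                             bool single_pages)
{
        /*
         * The new behaviour: with DMA_ATTR_ALLOC_SINGLE_PAGES set,
         * start (and stay) at min_order instead of attempting large
         * contiguous chunks that may trigger compaction or reclaim.
         */
        unsigned int order = single_pages ? min_order : max_order;

        /* Otherwise try big chunks first, falling back one order at a time. */
        for (;;) {
                void *chunk = alloc_chunk(order);

                if (chunk || order == min_order)
                        return chunk;
                order--;
        }
}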