From patchwork Wed Mar 2 18:54:26 2016
X-Patchwork-Submitter: Yong Wu (吴勇)
X-Patchwork-Id: 8487531
From: Yong Wu
To: Joerg Roedel, Catalin Marinas, Will Deacon
Cc: srv_heupstream@mediatek.com, Arnd Bergmann, Douglas Anderson,
 linux-kernel@vger.kernel.org, Tomasz Figa, iommu@lists.linux-foundation.org,
 Daniel Kurtz, Yong Wu, Matthias Brugger, linux-mediatek@lists.infradead.org,
 Robin Murphy, linux-arm-kernel@lists.infradead.org, Lucas Stach
Subject: [PATCH] arm64/dma-mapping: Add DMA_ATTR_ALLOC_SINGLE_PAGES support
Date: Thu, 3 Mar 2016 02:54:26 +0800
Message-ID: <1456944866-15990-1-git-send-email-yong.wu@mediatek.com>

Sometimes it is not worth the effort for the IOMMU path to allocate big
chunks. Add support for DMA_ATTR_ALLOC_SINGLE_PAGES, which lets a caller
ask the IOMMU DMA layer to skip high-order allocations and build the
buffer from single pages instead. For more information about this
attribute, see Doug's commit [1].

[1]: https://lkml.org/lkml/2016/1/11/720

Cc: Robin Murphy
Suggested-by: Douglas Anderson
Signed-off-by: Yong Wu
Reviewed-by: Douglas Anderson
---
Our video drivers may soon use this.
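
For reference, a minimal caller-side sketch of how a driver could opt in
(not part of this patch, and assuming Doug's series adding
DMA_ATTR_ALLOC_SINGLE_PAGES [1] is applied; "dev" and "size" below are
placeholders, not names from this patch):

	DEFINE_DMA_ATTRS(attrs);
	dma_addr_t dma_handle;
	void *vaddr;

	/* Ask the DMA layer to build the buffer from order-0 pages. */
	dma_set_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, &attrs);

	vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);
	if (!vaddr)
		return -ENOMEM;

This only changes how the physical pages backing the mapping are
allocated; the buffer is still contiguous in IOVA space.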
 arch/arm64/mm/dma-mapping.c | 4 ++--
 drivers/iommu/dma-iommu.c   | 14 ++++++++++----
 include/linux/dma-iommu.h   | 4 ++--
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 331c4ca..3225e3ca 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 		struct page **pages;
 		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
 
-		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
-					flush_page);
+		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, attrs,
+					handle, flush_page);
 		if (!pages)
 			return NULL;
 
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 72d6182..3569cb6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -190,7 +190,8 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 	kvfree(pages);
 }
 
-static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
+					     struct dma_attrs *attrs)
 {
 	struct page **pages;
 	unsigned int i = 0, array_size = count * sizeof(*pages);
@@ -203,6 +204,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
 	if (!pages)
 		return NULL;
 
+	/* Go straight to 4K chunks if caller says it's OK. */
+	if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
+		order = 0;
+
 	/* IOMMU can map any pages, so himem can also be used here */
 	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
 
@@ -268,6 +273,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * @size: Size of buffer in bytes
  * @gfp: Allocation flags
  * @prot: IOMMU mapping flags
+ * @attrs: DMA attributes flags
  * @handle: Out argument for allocated DMA handle
  * @flush_page: Arch callback which must ensure PAGE_SIZE bytes from the
  *		given VA/PA are visible to the given non-coherent device.
@@ -278,8 +284,8 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * Return: Array of struct page pointers describing the buffer,
  *	   or NULL on failure.
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-		gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+		int prot, struct dma_attrs *attrs, dma_addr_t *handle,
 		void (*flush_page)(struct device *, const void *, phys_addr_t))
 {
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
@@ -292,7 +298,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size,
 
 	*handle = DMA_ERROR_CODE;
 
-	pages = __iommu_dma_alloc_pages(count, gfp);
+	pages = __iommu_dma_alloc_pages(count, gfp, attrs);
 	if (!pages)
 		return NULL;
 
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index fc48103..08d9603 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -38,8 +38,8 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent);
  * These implement the bulk of the relevant DMA mapping callbacks, but require
  * the arch code to take care of attributes and cache maintenance
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-		gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+		int prot, struct dma_attrs *attrs, dma_addr_t *handle,
 		void (*flush_page)(struct device *, const void *, phys_addr_t));
 void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
 		dma_addr_t *handle);