From patchwork Tue Nov 20 21:56:25 2012
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 1776211
From: Gregory CLEMENT
To: Jason Cooper, Andrew Lunn, Gregory Clement, Marek Szyprowski
Cc: Lior Amsalem, Ike Pan, Nadav Haklai, Ian Molton, David Marlin,
 Yehuda Yitschak, Jani Monoses, Russell King, Tawfik Bayouk, Dan Frazier,
 Eran Ben-Avi, Leif Lindholm, Sebastian Hesselbarth, Arnd Bergmann,
 Jon Masters, Rob Herring, Ben Dooks, linux-arm-kernel@lists.infradead.org,
 Thomas Petazzoni, Chris Van Hoof, Nicolas Pitre,
 linux-kernel@vger.kernel.org, Maen Suleiman, Shadi Ammouri, Olof Johansson
Subject: [PATCH V3 1/3] arm: dma mapping: Export dma ops functions
Date: Tue, 20 Nov 2012 22:56:25 +0100
Message-Id: <1353448587-2937-2-git-send-email-gregory.clement@free-electrons.com>
In-Reply-To: <1353448587-2937-1-git-send-email-gregory.clement@free-electrons.com>
References: <1353448587-2937-1-git-send-email-gregory.clement@free-electrons.com>

Export the DMA mapping operation functions. Until now, only whole
dma_map_ops structures or a few individual operations were exposed.
This patch exports all the coherent DMA operations, so that they can
be reused when an architecture or a driver needs to create its own
struct dma_map_ops.

Signed-off-by: Gregory CLEMENT
---
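Note for reviewers (below the fold, not part of the commit): a short
sketch of the intended use of these exports. The names mvebu_dma_ops
and mvebu_dma_sync_single_for_cpu are hypothetical; the sketch assumes
only the arm_dma_* functions exported by this patch, plus set_dma_ops()
and struct dma_map_ops as they exist in this kernel. A platform can now
assemble its own operations table, overriding one hook and reusing the
stock ARM implementations for everything else:

	/* Hypothetical sketch, not part of this patch. */
	#include <linux/dma-mapping.h>

	/* Override a single sync hook, then delegate to the generic
	 * ARM implementation exported by this patch.
	 */
	static void mvebu_dma_sync_single_for_cpu(struct device *dev,
			dma_addr_t handle, size_t size,
			enum dma_data_direction dir)
	{
		/* Platform-specific coherency maintenance could go here. */
		arm_dma_sync_single_for_cpu(dev, handle, size, dir);
	}

	/* Reuse the exported ARM operations for everything else. */
	static struct dma_map_ops mvebu_dma_ops = {
		.alloc			= arm_dma_alloc,
		.free			= arm_dma_free,
		.map_page		= arm_dma_map_page,
		.unmap_page		= arm_dma_unmap_page,
		.sync_single_for_cpu	= mvebu_dma_sync_single_for_cpu,
		.sync_single_for_device	= arm_dma_sync_single_for_device,
		.set_dma_mask		= arm_dma_set_mask,
	};

A driver or platform setup code would then install the table with
set_dma_ops(dev, &mvebu_dma_ops) before issuing any mapping calls.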
 arch/arm/include/asm/dma-mapping.h |   48 ++++++++++++++++++++++++++++++++++++
 arch/arm/mm/dma-mapping.c          |   25 ++++---------------
 2 files changed, 53 insertions(+), 20 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 2300484..b12d7c0 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -112,6 +112,54 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size,
 extern int dma_supported(struct device *dev, u64 mask);
 
 /**
+ * arm_dma_map_page - map a portion of a page for streaming DMA
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @page: page that buffer resides in
+ * @offset: offset into page for start of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed. The CPU
+ * can regain ownership by calling dma_unmap_page().
+ */
+extern dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
+				   unsigned long offset, size_t size,
+				   enum dma_data_direction dir,
+				   struct dma_attrs *attrs);
+
+/**
+ * arm_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @handle: DMA address of buffer
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
+ *
+ * Unmap a page streaming mode DMA translation. The handle and size
+ * must match what was provided in the previous dma_map_page() call.
+ * All other usages are undefined.
+ *
+ * After this call, reads by the CPU to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+extern void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
+			       size_t size, enum dma_data_direction dir,
+			       struct dma_attrs *attrs);
+
+extern void arm_dma_sync_single_for_cpu(struct device *dev,
+					dma_addr_t handle, size_t size,
+					enum dma_data_direction dir);
+
+extern void arm_dma_sync_single_for_device(struct device *dev,
+					   dma_addr_t handle, size_t size,
+					   enum dma_data_direction dir);
+
+extern int arm_dma_set_mask(struct device *dev, u64 dma_mask);
+
+
+/**
  * arm_dma_alloc - allocate consistent memory for DMA
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
  * @size: required memory size
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 58bc3e4..dbb67ce 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -56,20 +56,13 @@ static void __dma_page_dev_to_cpu(struct page *, unsigned long, size_t,
 		enum dma_data_direction);
 
 /**
- * arm_dma_map_page - map a portion of a page for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
  * Ensure that any data held in the cache is appropriately discarded
  * or written back.
  *
  * The device owns this memory once this call has completed. The CPU
  * can regain ownership by calling dma_unmap_page().
  */
-static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
+dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     struct dma_attrs *attrs)
 {
@@ -86,12 +79,6 @@ static dma_addr_t arm_coherent_dma_map_page(struct device *dev, struct page *pag
 }
 
 /**
- * arm_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_page)
- * @dir: DMA transfer direction (same as passed to dma_map_page)
- *
  * Unmap a page streaming mode DMA translation. The handle and size
  * must match what was provided in the previous dma_map_page() call.
  * All other usages are undefined.
@@ -99,7 +86,7 @@ static dma_addr_t arm_coherent_dma_map_page(struct device *dev, struct page *pag
  * After this call, reads by the CPU to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
+void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
 {
@@ -108,7 +95,7 @@ static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
 			handle & ~PAGE_MASK, size, dir);
 }
 
-static void arm_dma_sync_single_for_cpu(struct device *dev,
+void arm_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	unsigned int offset = handle & (PAGE_SIZE - 1);
@@ -116,7 +103,7 @@ static void arm_dma_sync_single_for_cpu(struct device *dev,
 	__dma_page_dev_to_cpu(page, offset, size, dir);
 }
 
-static void arm_dma_sync_single_for_device(struct device *dev,
+void arm_dma_sync_single_for_device(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	unsigned int offset = handle & (PAGE_SIZE - 1);
@@ -124,8 +111,6 @@ static void arm_dma_sync_single_for_device(struct device *dev,
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
-static int arm_dma_set_mask(struct device *dev, u64 dma_mask);
-
 struct dma_map_ops arm_dma_ops = {
 	.alloc			= arm_dma_alloc,
 	.free			= arm_dma_free,
@@ -971,7 +956,7 @@ int dma_supported(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_supported);
 
-static int arm_dma_set_mask(struct device *dev, u64 dma_mask)
+int arm_dma_set_mask(struct device *dev, u64 dma_mask)
 {
 	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
 		return -EIO;