Message ID | 1308556213-24970-4-git-send-email-m.szyprowski@samsung.com (mailing list archive)
---|---
State | New, archived |
Hi. Great job.

On Mon, Jun 20, 2011 at 4:50 PM, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> +static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
> +{
> +	dev->archdata.dma_ops = ops;
> +}
> +

Who calls set_dma_ops()? In the machine initialization part? What if a
device driver does not want to use the arch's dma_map_ops that the machine
init procedure has set?

Even though many archs define their dma_map_ops in the archdata of the
device structure, I think it is not a good idea for the device structure to
contain a pointer to a dma_map_ops that may not be common to all devices on
a board. I also think that it is better to attach and detach dma_map_ops
dynamically.

Moreover, a mapping is not permanent on our Exynos platform because a
System MMU may be turned off at runtime. The DMA API must come with the
IOMMU API to initialize the IOMMU at runtime.

Regards,
Cho KyongHo.
Hello,

On Monday, June 20, 2011 4:33 PM KyongHo Cho wrote:

> On Mon, Jun 20, 2011 at 4:50 PM, Marek Szyprowski
> <m.szyprowski@samsung.com> wrote:
> > +static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
> > +{
> > +	dev->archdata.dma_ops = ops;
> > +}
> > +
>
> Who calls set_dma_ops()?
> In the machine initialization part?

Yes, some board, machine or device bus initialization code is supposed to
call this function. Just 'git grep set_dma_ops' and you will see. In my
patch series, one of the clients of the set_dma_ops() function is the
dmabounce framework (it is called in the dmabounce_register_dev() function).

> What if a device driver does not want to use the arch's dma_map_ops that
> the machine init procedure has set?

Could you elaborate on this case? The whole point of the dma-mapping
framework is to hide the implementation of the DMA mapping operations from
the driver. The driver should never fiddle with dma map ops directly.

> Even though many archs define their dma_map_ops in the archdata of the
> device structure, I think it is not a good idea for the device structure
> to contain a pointer to a dma_map_ops that may not be common to all
> devices on a board.

It is up to the board/bus startup code to set dma ops correctly.

> I also think that it is better to attach and detach dma_map_ops
> dynamically.

What's the point of such operations? Why do you want to change dma mapping
methods at runtime?

> Moreover, a mapping is not permanent on our Exynos platform because a
> System MMU may be turned off at runtime.

This is theoretically possible. The System MMU (Samsung IOMMU controller)
driver can change dma_map_ops back to NULL on remove, moving the client
device back to the generic ARM dma-mapping implementation.

> The DMA API must come with the IOMMU API to initialize the IOMMU at
> runtime.

I don't understand what the problem is here.

Best regards
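To make that attach/detach flow concrete, here is a minimal sketch of what
a System MMU style driver could do with set_dma_ops(); the
exynos_iommu_dma_ops table and the two hooks are illustrative names, not
code from this series:

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Illustrative IOMMU-backed ops table; its .map_page etc. callbacks would
 * map through the System MMU instead of using the linear mapping. */
static struct dma_map_ops exynos_iommu_dma_ops;

static void sysmmu_attach_client(struct device *client)
{
	/* Route all further DMA API calls for this device through the IOMMU. */
	set_dma_ops(client, &exynos_iommu_dma_ops);
}

static void sysmmu_detach_client(struct device *client)
{
	/* NULL makes get_dma_ops() fall back to the default linear ops. */
	set_dma_ops(client, NULL);
}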
On Tue, Jun 21, 2011 at 01:47:03PM +0200, Marek Szyprowski wrote:

> > I also think that it is better to attach and detach dma_map_ops
> > dynamically.
>
> What's the point of such operations? Why do you want to change dma
> mapping methods at runtime?

That is dangerous. You have to make sure that there are no mappings granted
to the device driver before changing the dma_ops of a device at runtime.
Otherwise existing mappings for a device may disappear and confuse the
driver and the device.

	Joerg
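One way to state that constraint in code (purely hypothetical: the kernel
keeps no such per-device mapping counter, so the map/unmap paths would have
to maintain one) is to refuse the swap while mappings are live:

#include <linux/atomic.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical helper: swap the ops only when no streaming mappings are
 * outstanding, so nothing created through the old ops can dangle. */
static int swap_dma_ops_if_idle(struct device *dev, atomic_t *live_mappings,
				struct dma_map_ops *new_ops)
{
	if (atomic_read(live_mappings) > 0)
		return -EBUSY;	/* mappings made by the old ops still exist */

	set_dma_ops(dev, new_ops);
	return 0;
}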
On Monday 20 June 2011, Marek Szyprowski wrote:
> This patch modifies dma-mapping implementation on ARM architecture to
> use common dma_map_ops structure and asm-generic/dma-mapping-common.h
> helpers.
>
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>

This is a good idea in general, but I have a few concerns about details:

First of all, should we only allow using dma_map_ops on ARM, or do we also
want to support a case where these are all inlined as before? I suppose for
the majority of the cases, the overhead of the indirect function call is
near-zero, compared to the overhead of the cache management operation, so
it would only make a difference for coherent systems without an IOMMU. Do
we care about micro-optimizing those?

> diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
> index 799669d..f4e4968 100644
> --- a/arch/arm/include/asm/dma-mapping.h
> +++ b/arch/arm/include/asm/dma-mapping.h
> @@ -10,6 +10,27 @@
>  #include <asm-generic/dma-coherent.h>
>  #include <asm/memory.h>
>
> +extern struct dma_map_ops dma_ops;
> +
> +static inline struct dma_map_ops *get_dma_ops(struct device *dev)
> +{
> +	if (dev->archdata.dma_ops)
> +		return dev->archdata.dma_ops;
> +	return &dma_ops;
> +}

I would not name the global structure just 'dma_ops', the identifier could
too easily conflict with a local variable in some driver. How about
arm_dma_ops or linear_dma_ops instead?

>  /*
>   * The scatter list versions of the above methods.
>   */
> -extern int dma_map_sg(struct device *, struct scatterlist *, int,
> -		enum dma_data_direction);
> -extern void dma_unmap_sg(struct device *, struct scatterlist *, int,
> +extern int arm_dma_map_sg(struct device *, struct scatterlist *, int,
> +		enum dma_data_direction, struct dma_attrs *attrs);
> +extern void arm_dma_unmap_sg(struct device *, struct scatterlist *, int,
> +		enum dma_data_direction, struct dma_attrs *attrs);
> +extern void arm_dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
> 		enum dma_data_direction);
> -extern void dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
> +extern void arm_dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
> 		enum dma_data_direction);
> -extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
> -		enum dma_data_direction);
> -

You should not need to make these symbols visible in the header file any
more unless they are used outside of the main file later.

	Arnd
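To see why the name is risky: any driver file that declares its own
'dma_ops' at file scope would now clash with the global declaration pulled
in through the headers, and a function-local variable would silently shadow
it, which is confusing enough. A hypothetical snippet:

#include <linux/dma-mapping.h>	/* brings in 'extern struct dma_map_ops dma_ops;' */

/* Fails to build: this static definition conflicts with the non-static
 * declaration of the same identifier from the header. */
static struct dma_map_ops dma_ops;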
Hello,

On Friday, June 24, 2011 5:37 PM Arnd Bergmann wrote:

> On Monday 20 June 2011, Marek Szyprowski wrote:
> > This patch modifies dma-mapping implementation on ARM architecture to
> > use common dma_map_ops structure and asm-generic/dma-mapping-common.h
> > helpers.
> >
> > Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> > Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>
> This is a good idea in general, but I have a few concerns about details:
>
> First of all, should we only allow using dma_map_ops on ARM, or do we
> also want to support a case where these are all inlined as before?

I really wonder if it is possible to have a clean implementation of a
dma_map_ops based and a linear inlined dma-mapping framework together.
Theoretically it should be possible, but it will end with a lot of #ifdef
hackery which is really hard to follow and understand for anyone but the
authors.

> I suppose for the majority of the cases, the overhead of the indirect
> function call is near-zero, compared to the overhead of the cache
> management operation, so it would only make a difference for coherent
> systems without an IOMMU. Do we care about micro-optimizing those?

Even in the coherent case, the overhead caused by the additional function
call should have a really negligible impact on driver performance.

> > diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
> > index 799669d..f4e4968 100644
> > --- a/arch/arm/include/asm/dma-mapping.h
> > +++ b/arch/arm/include/asm/dma-mapping.h
> > @@ -10,6 +10,27 @@
> >  #include <asm-generic/dma-coherent.h>
> >  #include <asm/memory.h>
> >
> > +extern struct dma_map_ops dma_ops;
> > +
> > +static inline struct dma_map_ops *get_dma_ops(struct device *dev)
> > +{
> > +	if (dev->archdata.dma_ops)
> > +		return dev->archdata.dma_ops;
> > +	return &dma_ops;
> > +}
>
> I would not name the global structure just 'dma_ops', the identifier could
> too easily conflict with a local variable in some driver. How about
> arm_dma_ops or linear_dma_ops instead?

I'm fine with both of them. Even arm_linear_dma_ops makes some sense.

> >  /*
> >   * The scatter list versions of the above methods.
> >   */
> > -extern int dma_map_sg(struct device *, struct scatterlist *, int,
> > -		enum dma_data_direction);
> > -extern void dma_unmap_sg(struct device *, struct scatterlist *, int,
> > +extern int arm_dma_map_sg(struct device *, struct scatterlist *, int,
> > +		enum dma_data_direction, struct dma_attrs *attrs);
> > +extern void arm_dma_unmap_sg(struct device *, struct scatterlist *, int,
> > +		enum dma_data_direction, struct dma_attrs *attrs);
> > +extern void arm_dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
> > 		enum dma_data_direction);
> > -extern void dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
> > +extern void arm_dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
> > 		enum dma_data_direction);
> > -extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
> > -		enum dma_data_direction);
> > -
>
> You should not need to make these symbols visible in the header file any
> more unless they are used outside of the main file later.

They are used by the dma bounce code once converted to the dma_map_ops
framework.

Best regards
On Monday 27 June 2011, Marek Szyprowski wrote:
> On Friday, June 24, 2011 5:37 PM Arnd Bergmann wrote:
> > On Monday 20 June 2011, Marek Szyprowski wrote:
> > > This patch modifies dma-mapping implementation on ARM architecture to
> > > use common dma_map_ops structure and asm-generic/dma-mapping-common.h
> > > helpers.
> > >
> > > Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> > > Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> >
> > This is a good idea in general, but I have a few concerns about details:
> >
> > First of all, should we only allow using dma_map_ops on ARM, or do we
> > also want to support a case where these are all inlined as before?
>
> I really wonder if it is possible to have a clean implementation of a
> dma_map_ops based and a linear inlined dma-mapping framework together.
> Theoretically it should be possible, but it will end with a lot of
> #ifdef hackery which is really hard to follow and understand for anyone
> but the authors.

Right. It's probably not worth it unless there is a significant overhead in
terms of code size or performance in the coherent linear case.

> > I suppose for the majority of the cases, the overhead of the indirect
> > function call is near-zero, compared to the overhead of the cache
> > management operation, so it would only make a difference for coherent
> > systems without an IOMMU. Do we care about micro-optimizing those?
>
> Even in the coherent case, the overhead caused by the additional function
> call should have a really negligible impact on driver performance.

What about object code size? I guess since ixp23xx is the only platform
that announces itself as coherent, we probably don't need to worry about it
too much either. Lennert? On everything else, we only replace a direct
function call with an indirect one.

> > > diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
> > > index 799669d..f4e4968 100644
> > > --- a/arch/arm/include/asm/dma-mapping.h
> > > +++ b/arch/arm/include/asm/dma-mapping.h
> > > @@ -10,6 +10,27 @@
> > >  #include <asm-generic/dma-coherent.h>
> > >  #include <asm/memory.h>
> > >
> > > +extern struct dma_map_ops dma_ops;
> > > +
> > > +static inline struct dma_map_ops *get_dma_ops(struct device *dev)
> > > +{
> > > +	if (dev->archdata.dma_ops)
> > > +		return dev->archdata.dma_ops;
> > > +	return &dma_ops;
> > > +}
> >
> > I would not name the global structure just 'dma_ops', the identifier
> > could too easily conflict with a local variable in some driver. How
> > about arm_dma_ops or linear_dma_ops instead?
>
> I'm fine with both of them. Even arm_linear_dma_ops makes some sense.

Ok, just pick one then if nobody has a strong opinion either way.

> > You should not need to make these symbols visible in the header file
> > any more unless they are used outside of the main file later.
>
> They are used by the dma bounce code once converted to the dma_map_ops
> framework.

Ok, I see.

	Arnd
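For reference, the dispatch being discussed costs one ops lookup plus one
indirect call per operation. The following is simplified from the
asm-generic/dma-mapping-common.h helper of this era (direction checks and
dma-debug hooks omitted for brevity):

static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
					      size_t size,
					      enum dma_data_direction dir,
					      struct dma_attrs *attrs)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* One indirect call; on ARM this resolves to arm_dma_map_page()
	 * unless per-device ops (e.g. dmabounce) have been installed. */
	return ops->map_page(dev, virt_to_page(ptr),
			     (unsigned long)ptr & ~PAGE_MASK, size, dir, attrs);
}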
On Mon, Jun 27, 2011 at 03:19:43PM +0200, Arnd Bergmann wrote:

> > > I suppose for the majority of the cases, the overhead of the indirect
> > > function call is near-zero, compared to the overhead of the cache
> > > management operation, so it would only make a difference for coherent
> > > systems without an IOMMU. Do we care about micro-optimizing those?

FWIW, when I was hacking on ARM access point routing performance some time
ago, turning the L1/L2 cache maintenance operations into inline functions
(inlined into the ethernet driver) gave me a significant and measurable
performance boost. Such things can remain product-specific hacks, though.

> > Even in the coherent case, the overhead caused by the additional
> > function call should have a really negligible impact on driver
> > performance.
>
> What about object code size? I guess since ixp23xx is the only platform
> that announces itself as coherent, we probably don't need to worry about
> it too much either. Lennert?

I don't think so. ixp23xx isn't a very popular platform anymore either,
having been discontinued some time ago.

thanks,
Lennert
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 9adc278..0b834c1 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -3,6 +3,7 @@ config ARM
 	default y
 	select HAVE_AOUT
 	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_ATTRS
 	select HAVE_IDE
 	select HAVE_MEMBLOCK
 	select RTC_LIB
diff --git a/arch/arm/include/asm/device.h b/arch/arm/include/asm/device.h
index 9f390ce..d3b35d8 100644
--- a/arch/arm/include/asm/device.h
+++ b/arch/arm/include/asm/device.h
@@ -7,6 +7,7 @@
 #define ASMARM_DEVICE_H
 
 struct dev_archdata {
+	struct dma_map_ops *dma_ops;
 #ifdef CONFIG_DMABOUNCE
 	struct dmabounce_device_info *dmabounce;
 #endif
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 799669d..f4e4968 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -10,6 +10,27 @@
 #include <asm-generic/dma-coherent.h>
 #include <asm/memory.h>
 
+extern struct dma_map_ops dma_ops;
+
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	if (dev->archdata.dma_ops)
+		return dev->archdata.dma_ops;
+	return &dma_ops;
+}
+
+static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
+{
+	dev->archdata.dma_ops = ops;
+}
+
+#include <asm-generic/dma-mapping-common.h>
+
+static inline int dma_set_mask(struct device *dev, u64 mask)
+{
+	return get_dma_ops(dev)->set_dma_mask(dev, mask);
+}
+
 #ifdef __arch_page_to_dma
 #error Please update to __arch_pfn_to_dma
 #endif
@@ -131,24 +152,6 @@ static inline int dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-static inline int dma_set_mask(struct device *dev, u64 dma_mask)
-{
-#ifdef CONFIG_DMABOUNCE
-	if (dev->archdata.dmabounce) {
-		if (dma_mask >= ISA_DMA_THRESHOLD)
-			return 0;
-		else
-			return -EIO;
-	}
-#endif
-	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
-		return -EIO;
-
-	*dev->dma_mask = dma_mask;
-
-	return 0;
-}
-
 /*
  * DMA errors are defined by all-bits-set in the DMA address.
  */
@@ -336,167 +339,17 @@ static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
 }
 #endif /* CONFIG_DMABOUNCE */
 
-
-/**
- * dma_map_page - map a portion of a page for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed.  The CPU
- * can regain ownership by calling dma_unmap_page().
- */
-static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size, enum dma_data_direction dir)
-{
-	dma_addr_t addr;
-
-	BUG_ON(!valid_dma_direction(dir));
-
-	addr = __dma_map_page(dev, page, offset, size, dir);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, false);
-
-	return addr;
-}
-
-/**
- * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_page)
- * @dir: DMA transfer direction (same as passed to dma_map_page)
- *
- * Unmap a page streaming mode DMA translation.  The handle and size
- * must match what was provided in the previous dma_map_page() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-
-static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
-{
-	debug_dma_unmap_page(dev, handle, size, dir, false);
-	__dma_unmap_page(dev, handle, size, dir);
-}
-
-/**
- * dma_map_single - map a single buffer for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @cpu_addr: CPU direct mapped address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed.  The CPU
- * can regain ownership by calling dma_unmap_single() or
- * dma_sync_single_for_cpu().
- */
-static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
-		size_t size, enum dma_data_direction dir)
-{
-	return dma_map_page(dev, virt_to_page(cpu_addr),
-			(unsigned long)cpu_addr & ~PAGE_MASK, size, dir);
-}
-
-/**
- * dma_unmap_single - unmap a single buffer previously mapped
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_single)
- * @dir: DMA transfer direction (same as passed to dma_map_single)
- *
- * Unmap a single streaming mode DMA translation.  The handle and size
- * must match what was provided in the previous dma_map_single() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
-{
-	dma_unmap_page(dev, handle, size, dir);
-}
-
-static inline void dma_sync_single_for_cpu(struct device *dev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-
-	debug_dma_sync_single_for_cpu(dev, handle, size, dir);
-
-	if (!dmabounce_sync_for_cpu(dev, handle, size, dir))
-		return;
-
-	__dma_single_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
-}
-
-static inline void dma_sync_single_for_device(struct device *dev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-
-	debug_dma_sync_single_for_device(dev, handle, size, dir);
-
-	if (!dmabounce_sync_for_device(dev, handle, size, dir))
-		return;
-
-	__dma_single_cpu_to_dev(dma_to_virt(dev, handle), size, dir);
-}
-
-/**
- * dma_sync_single_range_for_cpu
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @offset: offset of region to start sync
- * @size: size of region to sync
- * @dir: DMA transfer direction (same as passed to dma_map_single)
- *
- * Make physical memory consistent for a single streaming mode DMA
- * translation after a transfer.
- *
- * If you perform a dma_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the PCI dma
- * mapping, you must call this function before doing so.  At the
- * next point you give the PCI dma address back to the card, you
- * must first the perform a dma_sync_for_device, and then the
- * device again owns the buffer.
- */
-static inline void dma_sync_single_range_for_cpu(struct device *dev,
-		dma_addr_t handle, unsigned long offset, size_t size,
-		enum dma_data_direction dir)
-{
-	dma_sync_single_for_cpu(dev, handle + offset, size, dir);
-}
-
-static inline void dma_sync_single_range_for_device(struct device *dev,
-		dma_addr_t handle, unsigned long offset, size_t size,
-		enum dma_data_direction dir)
-{
-	dma_sync_single_for_device(dev, handle + offset, size, dir);
-}
-
 /*
  * The scatter list versions of the above methods.
  */
-extern int dma_map_sg(struct device *, struct scatterlist *, int,
-		enum dma_data_direction);
-extern void dma_unmap_sg(struct device *, struct scatterlist *, int,
+extern int arm_dma_map_sg(struct device *, struct scatterlist *, int,
+		enum dma_data_direction, struct dma_attrs *attrs);
+extern void arm_dma_unmap_sg(struct device *, struct scatterlist *, int,
+		enum dma_data_direction, struct dma_attrs *attrs);
+extern void arm_dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
 		enum dma_data_direction);
-extern void dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
+extern void arm_dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
 		enum dma_data_direction);
-extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
-		enum dma_data_direction);
-
 #endif /* __KERNEL__ */
 
 #endif
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c11f234..5264552 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -25,6 +25,98 @@
 #include <asm/tlbflush.h>
 #include <asm/sizes.h>
 
+/**
+ * dma_map_page - map a portion of a page for streaming DMA
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @page: page that buffer resides in
+ * @offset: offset into page for start of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed.  The CPU
+ * can regain ownership by calling dma_unmap_page().
+ */
+static inline dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
+	     unsigned long offset, size_t size, enum dma_data_direction dir,
+	     struct dma_attrs *attrs)
+{
+	return __dma_map_page(dev, page, offset, size, dir);
+}
+
+/**
+ * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @handle: DMA address of buffer
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
+ *
+ * Unmap a page streaming mode DMA translation.  The handle and size
+ * must match what was provided in the previous dma_map_page() call.
+ * All other usages are undefined.
+ *
+ * After this call, reads by the CPU to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+
+static inline void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
+		size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
+{
+	__dma_unmap_page(dev, handle, size, dir);
+}
+
+static inline void arm_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	if (!dmabounce_sync_for_cpu(dev, handle, size, dir))
+		return;
+
+	__dma_single_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
+}
+
+static inline void arm_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	if (!dmabounce_sync_for_device(dev, handle, size, dir))
+		return;
+
+	__dma_single_cpu_to_dev(dma_to_virt(dev, handle), size, dir);
+}
+
+static int arm_dma_set_mask(struct device *dev, u64 dma_mask)
+{
+#ifdef CONFIG_DMABOUNCE
+	if (dev->archdata.dmabounce) {
+		if (dma_mask >= ISA_DMA_THRESHOLD)
+			return 0;
+		else
+			return -EIO;
+	}
+#endif
+	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
+		return -EIO;
+
+	*dev->dma_mask = dma_mask;
+
+	return 0;
+}
+
+struct dma_map_ops dma_ops = {
+	.map_page		= arm_dma_map_page,
+	.unmap_page		= arm_dma_unmap_page,
+	.map_sg			= arm_dma_map_sg,
+	.unmap_sg		= arm_dma_unmap_sg,
+	.sync_single_for_cpu	= arm_dma_sync_single_for_cpu,
+	.sync_single_for_device	= arm_dma_sync_single_for_device,
+	.sync_sg_for_cpu	= arm_dma_sync_sg_for_cpu,
+	.sync_sg_for_device	= arm_dma_sync_sg_for_device,
+	.set_dma_mask		= arm_dma_set_mask,
+};
+EXPORT_SYMBOL(dma_ops);
+
 static u64 get_coherent_dma_mask(struct device *dev)
 {
 	u64 mask = ISA_DMA_THRESHOLD;
@@ -558,21 +650,18 @@ EXPORT_SYMBOL(___dma_page_dev_to_cpu);
  * Device ownership issues as mentioned for dma_map_single are the same
  * here.
  */
-int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction dir)
+int arm_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+		enum dma_data_direction dir, struct dma_attrs *attrs)
 {
 	struct scatterlist *s;
 	int i, j;
 
-	BUG_ON(!valid_dma_direction(dir));
-
 	for_each_sg(sg, s, nents, i) {
 		s->dma_address = __dma_map_page(dev, sg_page(s), s->offset,
 						s->length, dir);
 		if (dma_mapping_error(dev, s->dma_address))
 			goto bad_mapping;
 	}
-	debug_dma_map_sg(dev, sg, nents, nents, dir);
 	return nents;
 
  bad_mapping:
@@ -580,7 +669,6 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 	__dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
 	return 0;
 }
-EXPORT_SYMBOL(dma_map_sg);
 
 /**
  * dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
@@ -592,18 +680,15 @@ EXPORT_SYMBOL(dma_map_sg);
  * Unmap a set of streaming mode DMA translations.  Again, CPU access
  * rules concerning calls here are the same as for dma_unmap_single().
  */
-void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction dir)
+void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
+		enum dma_data_direction dir, struct dma_attrs *attrs)
 {
 	struct scatterlist *s;
 	int i;
 
-	debug_dma_unmap_sg(dev, sg, nents, dir);
-
 	for_each_sg(sg, s, nents, i)
 		__dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
 }
-EXPORT_SYMBOL(dma_unmap_sg);
 
 /**
  * dma_sync_sg_for_cpu
@@ -612,7 +697,7 @@ EXPORT_SYMBOL(dma_unmap_sg);
  * @nents: number of buffers to map (returned from dma_map_sg)
  * @dir: DMA transfer direction (same as was passed to dma_map_sg)
  */
-void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 			int nents, enum dma_data_direction dir)
 {
 	struct scatterlist *s;
@@ -626,10 +711,7 @@ void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 		__dma_page_dev_to_cpu(sg_page(s), s->offset,
 				      s->length, dir);
 	}
-
-	debug_dma_sync_sg_for_cpu(dev, sg, nents, dir);
 }
-EXPORT_SYMBOL(dma_sync_sg_for_cpu);
 
 /**
  * dma_sync_sg_for_device
@@ -638,7 +720,7 @@ EXPORT_SYMBOL(dma_sync_sg_for_cpu);
  * @nents: number of buffers to map (returned from dma_map_sg)
  * @dir: DMA transfer direction (same as was passed to dma_map_sg)
  */
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 			int nents, enum dma_data_direction dir)
 {
 	struct scatterlist *s;
@@ -652,10 +734,7 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 		__dma_page_cpu_to_dev(sg_page(s), s->offset,
 				      s->length, dir);
 	}
-
-	debug_dma_sync_sg_for_device(dev, sg, nents, dir);
 }
-EXPORT_SYMBOL(dma_sync_sg_for_device);
 
 #define PREALLOC_DMA_DEBUG_ENTRIES	4096
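As noted in the discussion above, the arm_dma_* symbols stay visible in the
header because the dmabounce code uses them once it is converted later in
this series. A sketch of how such a framework would hook in (the
bounce-aware callbacks and the simplified registration function below are
illustrative names, not the actual dmabounce conversion):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hypothetical bounce-aware map/unmap callbacks. */
extern dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,
				     unsigned long offset, size_t size,
				     enum dma_data_direction dir,
				     struct dma_attrs *attrs);
extern void dmabounce_unmap_page(struct device *dev, dma_addr_t handle,
				 size_t size, enum dma_data_direction dir,
				 struct dma_attrs *attrs);

static struct dma_map_ops dmabounce_ops = {
	.map_page	= dmabounce_map_page,	/* bounce buffer aware */
	.unmap_page	= dmabounce_unmap_page,
	.map_sg		= arm_dma_map_sg,	/* reuse the ARM helpers */
	.unmap_sg	= arm_dma_unmap_sg,
};

int dmabounce_register_dev_sketch(struct device *dev)
{
	/* All DMA API calls for this device now go through dmabounce_ops. */
	set_dma_ops(dev, &dmabounce_ops);
	return 0;
}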