Message ID | 20190305183202.16216-1-nicoleotsuka@gmail.com (mailing list archive)
---|---
State | RFC
Series | [v2,RFC/RFT] dma-contiguous: Get normal pages for single-page allocations
On Tue, Mar 05, 2019 at 10:32:02AM -0800, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so it's
> not so necessary to always allocate one single page from CMA area.
> Since the CMA area has a limited predefined size of space, it may
> run out of space in heavy use cases, where there might be quite a
> lot CMA pages being allocated for single pages.
>
> However, there is also a concern that a device might care where a
> page comes from -- it might expect the page from CMA area and act
> differently if the page doesn't.
>
> This patch tries to get normal pages for single-page allocations
> unless the device has its own CMA area. This would save resources
> from the CMA area for more CMA allocations. And it'd also reduce
> CMA fragmentations resulted from trivial allocations.

This is not sufficient. Some architectures/platforms declare limits on
the CMA range so that DMA is possible with all expected devices. For
example, on arm64 we keep the CMA in the lower 4GB of the address
range, though with this patch you only covered the iommu ops
allocation.

Do you have any numbers to back this up? You don't seem to address
dma_direct_alloc() either but, as I said above, it's not trivial since
some platforms expect certain physical range for DMA allocations.
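To make the constraint concrete: a normal page can land anywhere in physical memory, so a caller swapping a CMA allocation for alloc_pages() would need a reachability check. The sketch below borrows the dma_coherent_ok() test from __dma_direct_alloc_pages() (quoted in a hunk later in this thread); the surrounding retry logic is illustrative only, not code from the patch.

```c
/*
 * Illustration only: why a bare alloc_pages() is not a drop-in
 * replacement for a CMA allocation on platforms that deliberately
 * restrict CMA placement.
 */
struct page *page = alloc_pages(gfp, 0);

if (page && !dma_coherent_ok(dev, page_to_phys(page), PAGE_SIZE)) {
	/* The page sits above the device's coherent DMA mask, e.g.
	 * above 4GB on arm64 where the default CMA area is kept low. */
	__free_pages(page, 0);
	page = NULL;	/* caller would have to retry, e.g. with GFP_DMA32 */
}
```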
Hi Catalin,

Thank you for the review. And I realized that the free() path is
missing too.

On Tue, Mar 19, 2019 at 02:43:01PM +0000, Catalin Marinas wrote:
> On Tue, Mar 05, 2019 at 10:32:02AM -0800, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to always allocate one single page from CMA area.
> > Since the CMA area has a limited predefined size of space, it may
> > run out of space in heavy use cases, where there might be quite a
> > lot CMA pages being allocated for single pages.
> >
> > However, there is also a concern that a device might care where a
> > page comes from -- it might expect the page from CMA area and act
> > differently if the page doesn't.
> >
> > This patch tries to get normal pages for single-page allocations
> > unless the device has its own CMA area. This would save resources
> > from the CMA area for more CMA allocations. And it'd also reduce
> > CMA fragmentations resulted from trivial allocations.
>
> This is not sufficient. Some architectures/platforms declare limits on
> the CMA range so that DMA is possible with all expected devices. For
> example, on arm64 we keep the CMA in the lower 4GB of the address
> range, though with this patch you only covered the iommu ops
> allocation.

I will follow the way of v1 by adding alloc_page()/free_page()
function to those callers who don't have fallback allocations.
In this way, archs may use different callbacks to alloc pages.

> Do you have any numbers to back this up? You don't seem to address
> dma_direct_alloc() either but, as I said above, it's not trivial since
> some platforms expect certain physical range for DMA allocations.

What's the dma_direct_alloc() here about? Mind elaborating?
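The v1-style plan is to make the decision at each call site rather than inside dma_alloc_from_contiguous(). A minimal sketch of that approach, assuming a caller-chosen condition and the existing dev_get_cma_area() helper (the actual v1 code is not quoted in this thread):

```c
/* Hypothetical caller-side fallback in the spirit of v1: take a
 * single page from the normal allocator unless the device has its
 * own CMA area, and mirror that decision on the free path. */
if (count == 1 && !dev_get_cma_area(dev))
	page = alloc_page(gfp);
else
	page = dma_alloc_from_contiguous(dev, count, order,
					 gfp & __GFP_NOWARN);

/* On release, dma_release_from_contiguous() returns false for pages
 * that did not come from a CMA area, so the caller hands those back
 * to the page allocator. */
if (!dma_release_from_contiguous(dev, page, count))
	__free_pages(page, get_order(size));
```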
Hi Nicolin,

On Thu, Mar 21, 2019 at 04:32:49PM -0700, Nicolin Chen wrote:
> On Tue, Mar 19, 2019 at 02:43:01PM +0000, Catalin Marinas wrote:
> > On Tue, Mar 05, 2019 at 10:32:02AM -0800, Nicolin Chen wrote:
> > > The addresses within a single page are always contiguous, so it's
> > > not so necessary to always allocate one single page from CMA area.
> > > Since the CMA area has a limited predefined size of space, it may
> > > run out of space in heavy use cases, where there might be quite a
> > > lot CMA pages being allocated for single pages.
> > >
> > > However, there is also a concern that a device might care where a
> > > page comes from -- it might expect the page from CMA area and act
> > > differently if the page doesn't.
> > >
> > > This patch tries to get normal pages for single-page allocations
> > > unless the device has its own CMA area. This would save resources
> > > from the CMA area for more CMA allocations. And it'd also reduce
> > > CMA fragmentations resulted from trivial allocations.
> >
> > This is not sufficient. Some architectures/platforms declare limits on
> > the CMA range so that DMA is possible with all expected devices. For
> > example, on arm64 we keep the CMA in the lower 4GB of the address
> > range, though with this patch you only covered the iommu ops
> > allocation.
>
> I will follow the way of v1 by adding alloc_page()/free_page()
> function to those callers who don't have fallback allocations.
> In this way, archs may use different callbacks to alloc pages.
>
> > Do you have any numbers to back this up? You don't seem to address
> > dma_direct_alloc() either but, as I said above, it's not trivial since
> > some platforms expect certain physical range for DMA allocations.
>
> What's the dma_direct_alloc() here about? Mind elaborating?

I just did a grep for dma_alloc_from_contiguous() in the 5.1-rc1 kernel
and came up with __dma_direct_alloc_pages(). Should your patch cover
this as well?
Hi Catalin,

On Fri, Mar 22, 2019 at 10:57:13AM +0000, Catalin Marinas wrote:
> > > Do you have any numbers to back this up? You don't seem to address
> > > dma_direct_alloc() either but, as I said above, it's not trivial since
> > > some platforms expect certain physical range for DMA allocations.
> >
> > What's the dma_direct_alloc() here about? Mind elaborating?
>
> I just did a grep for dma_alloc_from_contiguous() in the 5.1-rc1 kernel
> and came up with __dma_direct_alloc_pages(). Should your patch cover
> this as well?

I don't get the meaning of "cover this" here. What missing part
do you refer to?
On Fri, Mar 22, 2019 at 01:09:26PM -0700, Nicolin Chen wrote:
> On Fri, Mar 22, 2019 at 10:57:13AM +0000, Catalin Marinas wrote:
> > > > Do you have any numbers to back this up? You don't seem to address
> > > > dma_direct_alloc() either but, as I said above, it's not trivial since
> > > > some platforms expect certain physical range for DMA allocations.
> > >
> > > What's the dma_direct_alloc() here about? Mind elaborating?
> >
> > I just did a grep for dma_alloc_from_contiguous() in the 5.1-rc1 kernel
> > and came up with __dma_direct_alloc_pages(). Should your patch cover
> > this as well?
>
> I don't get the meaning of "cover this" here. What missing part
> do you refer to?

What I meant was: do you need this hunk as well in your patch?

```diff
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index fcdb23e8d2fc..8955ba6f52fc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -111,8 +111,7 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 again:
 	/* CMA can be used only in the context which permits sleeping */
 	if (gfpflags_allow_blocking(gfp)) {
-		page = dma_alloc_from_contiguous(dev, count, page_order,
-				gfp & __GFP_NOWARN);
+		page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
 		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 			dma_release_from_contiguous(dev, page, count);
 			page = NULL;
```
On Mon, Mar 25, 2019 at 12:14:37PM +0000, Catalin Marinas wrote:
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index fcdb23e8d2fc..8955ba6f52fc 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -111,8 +111,7 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>  again:
>  	/* CMA can be used only in the context which permits sleeping */
>  	if (gfpflags_allow_blocking(gfp)) {
> -		page = dma_alloc_from_contiguous(dev, count, page_order,
> -				gfp & __GFP_NOWARN);
> +		page = dma_alloc_from_contiguous(dev, count, page_order, gfp);

Oh... yeah... it should have this part. I messed up with something
else so I couldn't see it. Thanks for the reply.

I will go for the previous change by returning NULL in
dma_alloc_from_contiguous() so the no_warn parameter would not be
touched.
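A hedged sketch of what that "return NULL" variant could look like, assuming the existing bool no_warn signature is kept. Note it only helps callers that already fall back to the normal page allocator on a NULL return, which, as noted earlier in the thread, not all of them do:

```c
/* Hypothetical variant: keep the bool no_warn signature and return
 * NULL for plain single-page requests, so each caller's existing
 * alloc_pages() fallback takes over and no_warn is left untouched. */
struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
				       unsigned int align, bool no_warn)
{
	if (align > CONFIG_CMA_ALIGNMENT)
		align = CONFIG_CMA_ALIGNMENT;

	/* One page and no device-specific CMA area: let the caller's
	 * fallback path allocate a normal page instead. */
	if (count <= 1 && !(dev && dev->cma_area))
		return NULL;

	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
}
```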
```diff
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 8a90f298af96..c39fc2d97712 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -588,7 +588,7 @@ static void *__alloc_from_contiguous(struct device *dev, size_t size,
 	struct page *page;
 	void *ptr = NULL;
 
-	page = dma_alloc_from_contiguous(dev, count, order, gfp & __GFP_NOWARN);
+	page = dma_alloc_from_contiguous(dev, count, order, gfp);
 	if (!page)
 		return NULL;
 
@@ -1293,8 +1293,7 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	unsigned long order = get_order(size);
 	struct page *page;
 
-	page = dma_alloc_from_contiguous(dev, count, order,
-					 gfp & __GFP_NOWARN);
+	page = dma_alloc_from_contiguous(dev, count, order, gfp);
 	if (!page)
 		goto error;
 
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 78c0a72f822c..660adedaab5d 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -159,7 +159,7 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 		struct page *page;
 
 		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-						 get_order(size), gfp & __GFP_NOWARN);
+						 get_order(size), gfp);
 		if (!page)
 			return NULL;
 
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 9171bff76fc4..e15b893caadb 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -157,7 +157,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 	if (gfpflags_allow_blocking(flag))
 		page = dma_alloc_from_contiguous(dev, count, get_order(size),
-						 flag & __GFP_NOWARN);
+						 flag);
 
 	if (!page)
 		page = alloc_pages(flag | __GFP_ZERO, get_order(size));
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 6b0760dafb3e..c54923a9e31f 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2691,7 +2691,7 @@ static void *alloc_coherent(struct device *dev, size_t size,
 			return NULL;
 
 		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-						 get_order(size), flag & __GFP_NOWARN);
+						 get_order(size), flag);
 		if (!page)
 			return NULL;
 	}
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 87274b54febd..6b5cf7313db6 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3791,8 +3791,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
 	if (gfpflags_allow_blocking(flags)) {
 		unsigned int count = size >> PAGE_SHIFT;
 
-		page = dma_alloc_from_contiguous(dev, count, order,
-						 flags & __GFP_NOWARN);
+		page = dma_alloc_from_contiguous(dev, count, order, flags);
 		if (page && iommu_no_mapping(dev) &&
 		    page_to_phys(page) + size > dev->coherent_dma_mask) {
 			dma_release_from_contiguous(dev, page, count);
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index f247e8aa5e3d..10ea106b720f 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -112,7 +112,7 @@ static inline int dma_declare_contiguous(struct device *dev, phys_addr_t size,
 }
 
 struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
-				       unsigned int order, bool no_warn);
+				       unsigned int order, gfp_t gfp);
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count);
 
@@ -145,7 +145,7 @@ int dma_declare_contiguous(struct device *dev, phys_addr_t size,
 
 static inline
 struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
-				       unsigned int order, bool no_warn)
+				       unsigned int order, gfp_t gfp)
 {
 	return NULL;
 }
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index b2a87905846d..11b6d6ef4fc9 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -186,16 +186,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  *
  * This function allocates memory buffer for specified device. It uses
  * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
+ * global one.
+ *
+ * However, it allocates normal pages for one-page size of allocations
+ * instead of getting from CMA areas. As the addresses within a single
+ * page are always contiguous, so there is no need to waste CMA pages
+ * for that kind; it also helps reduce fragmentations in the CMA area.
+ *
+ * Requires architecture specific dev_get_cma_area() helper function.
  */
 struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
-				       unsigned int align, bool no_warn)
+				       unsigned int align, gfp_t gfp)
 {
+	struct cma *cma;
+
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	if (dev && dev->cma_area)
+		cma = dev->cma_area;
+	else if (count > 1)
+		cma = dma_contiguous_default_area;
+	else
+		return alloc_pages(gfp, align);
+
+	return cma_alloc(cma, count, align, gfp & __GFP_NOWARN);
 }
 
 /**
```
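One gap worth flagging: the diff above changes only the allocation side, while Nicolin noted earlier in the thread that the free() path is missing too. A hedged sketch of what the matching release path could look like — cma_release() returning false for pages outside the CMA area is existing behaviour, but the single-page branch is an assumption, not part of this patch:

```c
/* Hypothetical matching free path: single pages now come from
 * alloc_pages(), so when cma_release() reports that a page does not
 * belong to any CMA area, hand it back to the page allocator. */
bool dma_release_from_contiguous(struct device *dev, struct page *pages,
				 int count)
{
	if (cma_release(dev_get_cma_area(dev), pages, count))
		return true;

	if (count == 1) {
		__free_pages(pages, 0);
		return true;
	}

	return false;
}
```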
The addresses within a single page are always contiguous, so it's
not so necessary to always allocate one single page from the CMA
area. Since the CMA area has a limited predefined size, it may run
out of space in heavy use cases, where there might be quite a lot
of CMA pages being allocated for single pages.

However, there is also a concern that a device might care where a
page comes from -- it might expect the page from the CMA area and
act differently if the page doesn't.

This patch tries to get normal pages for single-page allocations
unless the device has its own CMA area. This would save resources
from the CMA area for more CMA allocations, and it would also reduce
CMA fragmentation resulting from trivial allocations.

Also, it updates the API and its callers so as to pass gfp flags.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
---
Changelog
v1->v2:
 * Removed one ';' in header file that caused build errors when
   CONFIG_DMA_CMA=N.

 arch/arm/mm/dma-mapping.c      |  5 ++---
 arch/arm64/mm/dma-mapping.c    |  2 +-
 arch/xtensa/kernel/pci-dma.c   |  2 +-
 drivers/iommu/amd_iommu.c      |  2 +-
 drivers/iommu/intel-iommu.c    |  3 +--
 include/linux/dma-contiguous.h |  4 ++--
 kernel/dma/contiguous.c        | 23 +++++++++++++++++++----
 7 files changed, 27 insertions(+), 14 deletions(-)