Message ID | 1456944866-15990-1-git-send-email-yong.wu@mediatek.com (mailing list archive) |
---|---|
State | New, archived |
Hi,

On Wed, Mar 2, 2016 at 10:54 AM, Yong Wu <yong.wu@mediatek.com> wrote:
> Sometimes it is not worth it for the IOMMU to allocate big chunks.
> Here we enable DMA_ATTR_ALLOC_SINGLE_PAGES, which helps avoid
> allocating big chunks while the IOMMU is allocating a buffer.
>
> For more information about this attribute, please see Doug's
> commit [1].
>
> [1]: https://lkml.org/lkml/2016/1/11/720
>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Suggested-by: Douglas Anderson <dianders@chromium.org>
> Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> ---
>
> Our video drivers may soon use this.
>
>  arch/arm64/mm/dma-mapping.c | 4 ++--
>  drivers/iommu/dma-iommu.c   | 14 ++++++++++----
>  include/linux/dma-iommu.h   | 4 ++--
>  3 files changed, 14 insertions(+), 8 deletions(-)

It should also be mentioned that this depends on commit df05c6f6e0bb
("ARM: 8506/1: common: DMA-mapping: add DMA_ATTR_ALLOC_SINGLE_PAGES
attribute"), which is not in mainline quite yet.

Also, please CC Marek Szyprowski on any future patches in this area,
since I saw a patch series from him touching this same area.

Reviewed-by: Douglas Anderson <dianders@chromium.org>
On Thu, 2016-03-03 at 09:44 -0800, Doug Anderson wrote:
> Hi,
>
> On Wed, Mar 2, 2016 at 10:54 AM, Yong Wu <yong.wu@mediatek.com> wrote:
> > Sometimes it is not worth it for the IOMMU to allocate big chunks.
> > Here we enable DMA_ATTR_ALLOC_SINGLE_PAGES, which helps avoid
> > allocating big chunks while the IOMMU is allocating a buffer.
> >
> > For more information about this attribute, please see Doug's
> > commit [1].
> >
> > [1]: https://lkml.org/lkml/2016/1/11/720
> >
> > Cc: Robin Murphy <robin.murphy@arm.com>
> > Suggested-by: Douglas Anderson <dianders@chromium.org>
> > Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> > ---
> >
> > Our video drivers may soon use this.
> >
> >  arch/arm64/mm/dma-mapping.c | 4 ++--
> >  drivers/iommu/dma-iommu.c   | 14 ++++++++++----
> >  include/linux/dma-iommu.h   | 4 ++--
> >  3 files changed, 14 insertions(+), 8 deletions(-)
>
> It should also be mentioned that this depends on commit df05c6f6e0bb
> ("ARM: 8506/1: common: DMA-mapping: add DMA_ATTR_ALLOC_SINGLE_PAGES
> attribute"), which is not in mainline quite yet.

Thanks. I will add it in the commit message.

> Also, please CC Marek Szyprowski on any future patches in this area,
> since I saw a patch series from him touching this same area.

Will do.

> Reviewed-by: Douglas Anderson <dianders@chromium.org>

Thanks.
On Thu, Mar 03, 2016 at 02:54:26AM +0800, Yong Wu wrote:
> Sometimes it is not worth it for the IOMMU to allocate big chunks.
> Here we enable DMA_ATTR_ALLOC_SINGLE_PAGES, which helps avoid
> allocating big chunks while the IOMMU is allocating a buffer.
>
> For more information about this attribute, please see Doug's
> commit [1].
>
> [1]: https://lkml.org/lkml/2016/1/11/720
>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Suggested-by: Douglas Anderson <dianders@chromium.org>
> Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> ---
>
> Our video drivers may soon use this.
>
>  arch/arm64/mm/dma-mapping.c | 4 ++--
>  drivers/iommu/dma-iommu.c   | 14 ++++++++++----
>  include/linux/dma-iommu.h   | 4 ++--
>  3 files changed, 14 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 331c4ca..3225e3ca 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
>  		struct page **pages;
>  		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
>
> -		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
> -					flush_page);
> +		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, attrs,
> +					handle, flush_page);
>  		if (!pages)
>  			return NULL;
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 72d6182..3569cb6 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -190,7 +190,8 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
>  	kvfree(pages);
>  }
>
> -static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
> +static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
> +					     struct dma_attrs *attrs)
>  {
>  	struct page **pages;
>  	unsigned int i = 0, array_size = count * sizeof(*pages);
> @@ -203,6 +204,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
>  	if (!pages)
>  		return NULL;
>
> +	/* Go straight to 4K chunks if caller says it's OK. */
> +	if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
> +		order = 0;

I have a slight snag with this, in that you don't consult the IOMMU
pgsize_bitmap at any point, and assume that it can map pages at the
same granularity as the CPU. The documentation for
DMA_ATTR_ALLOC_SINGLE_PAGES seems to be weaker than that.

Will
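For illustration, the check Will is asking for could clamp the order
floor to the IOMMU's own minimum granule instead of hard-coding 0. A
minimal sketch, assuming the era's domain->ops->pgsize_bitmap field and
a hypothetical min_order local; note the allocator would also need the
domain plumbed down to it, which it is not today:

	/*
	 * Illustrative sketch only: derive the smallest chunk the
	 * IOMMU can map from its supported page sizes and use that
	 * as the floor, instead of assuming order 0 is mappable.
	 */
	unsigned long min_pagesz = 1UL << __ffs(domain->ops->pgsize_bitmap);
	unsigned int min_order = get_order(min_pagesz);

	if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
		order = min_order;

Where the IOMMU granule matches the CPU's (the common 4K-on-4K case),
min_order is 0 and this degenerates to exactly what the posted patch
does.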
Will,

On Mon, Mar 21, 2016 at 11:01 AM, Will Deacon <will.deacon@arm.com> wrote:
> On Thu, Mar 03, 2016 at 02:54:26AM +0800, Yong Wu wrote:
>> Sometimes it is not worth it for the IOMMU to allocate big chunks.
>> Here we enable DMA_ATTR_ALLOC_SINGLE_PAGES, which helps avoid
>> allocating big chunks while the IOMMU is allocating a buffer.
>>
>> For more information about this attribute, please see Doug's
>> commit [1].
>>
>> [1]: https://lkml.org/lkml/2016/1/11/720
>>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> Suggested-by: Douglas Anderson <dianders@chromium.org>
>> Signed-off-by: Yong Wu <yong.wu@mediatek.com>
>> ---
>>
>> Our video drivers may soon use this.
>>
>>  arch/arm64/mm/dma-mapping.c | 4 ++--
>>  drivers/iommu/dma-iommu.c   | 14 ++++++++++----
>>  include/linux/dma-iommu.h   | 4 ++--
>>  3 files changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
>> index 331c4ca..3225e3ca 100644
>> --- a/arch/arm64/mm/dma-mapping.c
>> +++ b/arch/arm64/mm/dma-mapping.c
>> @@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
>>  		struct page **pages;
>>  		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
>>
>> -		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
>> -					flush_page);
>> +		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, attrs,
>> +					handle, flush_page);
>>  		if (!pages)
>>  			return NULL;
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 72d6182..3569cb6 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -190,7 +190,8 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
>>  	kvfree(pages);
>>  }
>>
>> -static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
>> +static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
>> +					     struct dma_attrs *attrs)
>>  {
>>  	struct page **pages;
>>  	unsigned int i = 0, array_size = count * sizeof(*pages);
>> @@ -203,6 +204,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
>>  	if (!pages)
>>  		return NULL;
>>
>> +	/* Go straight to 4K chunks if caller says it's OK. */
>> +	if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
>> +		order = 0;
>
> I have a slight snag with this, in that you don't consult the IOMMU
> pgsize_bitmap at any point, and assume that it can map pages at the
> same granularity as the CPU. The documentation for
> DMA_ATTR_ALLOC_SINGLE_PAGES seems to be weaker than that.

Interesting. Is that something that exists in the real world? ...or
something you think is coming soon?

I'd argue that if such a case existed in the real world, we'd probably
already be broken. Unless I'm misreading, the existing code will
already fall all the way back to order-0 allocations. ...so while the
existing code might work if it were called on a totally unfragmented
system, it would already break once some fragmentation was introduced.

I'm not saying that we shouldn't fix the code to handle this; I'm just
saying that Yong Wu's patch doesn't appear to break any code that
wasn't already broken. That might be reason to land his code first,
then debate the finer points of whether IOMMUs with less granularity
are likely to exist and whether we need to add complexity to the code
to handle them (or just detect this case and return an error).

From looking at a WIP patch provided to me by Yong Wu, it looks as if
he thinks several more functions need to change to handle IOMMUs that
can't map small pages. That seems to be further evidence that the work
should be done in a separate patch.

Doug
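For context, the order-0 fallback Doug refers to is the existing
allocation loop in __iommu_dma_alloc_pages(). A simplified paraphrase
of the code as it stood, with compound-page splitting and cleanup
trimmed:

	unsigned int order = MAX_ORDER;

	while (count) {
		struct page *page = NULL;

		/*
		 * Higher-order allocations are a convenience rather
		 * than a necessity, hence __GFP_NORETRY: give up
		 * quickly and drop to the next smaller order.
		 */
		for (order = min_t(unsigned int, order, __fls(count));
		     order > 0; order--) {
			page = alloc_pages(gfp | __GFP_NORETRY, order);
			if (page)
				break;
		}
		if (!page)
			page = alloc_page(gfp);	/* order-0 fallback */
		if (!page)
			return NULL;	/* real code frees pages[] first */

		/* ... split the chunk and record each 4K page ... */
		count -= 1 << order;
	}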
Hi,

On Thu, Mar 24, 2016 at 4:50 AM, Will Deacon <will.deacon@arm.com> wrote:
>> > I have a slight snag with this, in that you don't consult the IOMMU
>> > pgsize_bitmap at any point, and assume that it can map pages at the
>> > same granularity as the CPU. The documentation for
>> > DMA_ATTR_ALLOC_SINGLE_PAGES seems to be weaker than that.
>>
>> Interesting. Is that something that exists in the real world? ...or
>> something you think is coming soon?
>
> All it would take is for an IOMMU driver to choose a granule size that
> differs from the CPU. For example, if the SMMU driver chose 64k pages
> and the CPU was using 4k pages, then you'd have this problem.
>
>> I'd argue that if such a case existed in the real world, we'd
>> probably already be broken. Unless I'm misreading, the existing
>> code will already fall all the way back to order-0 allocations.
>> ...so while the existing code might work if it were called on a
>> totally unfragmented system, it would already break once some
>> fragmentation was introduced.
>
> I disagree. For example, in the case I described previously, they may
> well settle on a common supported granule (e.g. 2M), assuming that
> contiguous pages were implemented in the page table code.

I'm still a little confused about how the existing code could have
worked with a 64K-page IOMMU and a 4K-page CPU once memory is
fragmented. Presumably the existing code in __iommu_dma_alloc_pages()
would keep failing the "alloc_pages(gfp | __GFP_NORETRY, order);" call
until order got down to 0. Then we'd allocate order-0 (4K) pages and
we'd hit a bug.

>> I'm not saying that we shouldn't fix the code to handle this; I'm
>> just saying that Yong Wu's patch doesn't appear to break any code
>> that wasn't already broken. That might be reason to land his code
>> first, then debate the finer points of whether IOMMUs with less
>> granularity are likely to exist and whether we need to add
>> complexity to the code to handle them (or just detect this case and
>> return an error).
>>
>> From looking at a WIP patch provided to me by Yong Wu, it looks as
>> if he thinks several more functions need to change to handle IOMMUs
>> that can't map small pages. That seems to be further evidence that
>> the work should be done in a separate patch.
>
> Sure, my observations weren't intended to hold up this patch, but we
> should double-check that we're not regressing any of the existing
> IOMMU drivers/platforms by going straight to order-0 allocations.

Argh. I see why I was confused and thought it got complicated. When I
looked at the diffs, I thought Yong Wu's patch was more complicated
because he had rebased it and my diff showed some other unrelated
patches. Dumb. OK, you're right that this is pretty simple.

In any case, you're right that we should fix this. ...though assuming
my argument above isn't wrong, the existing code is already broken (or
has a latent bug) for any IOMMU that can't map at least as granularly
as the CPU. To me that means adding that new feature (or fixing that
latent bug) should be done in a separate patch. It could come in
sequence either before or after this one. Of course, if everyone else
thinks it should be one patch, I won't block that...

-Doug
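The failure Doug predicts is concrete: the generic iommu_map() path
rejects any request that is not aligned to the hardware's smallest
supported granule, so order-0 CPU pages fed to a 64K-granule domain
would bounce with -EINVAL. Lightly abridged from drivers/iommu/iommu.c
of this era:

	/*
	 * Both the IOVA and the size must be aligned to the minimum
	 * page size the IOMMU hardware supports.
	 */
	unsigned long min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);

	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
		       iova, &paddr, size, min_pagesz);
		return -EINVAL;
	}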
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 331c4ca..3225e3ca 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 		struct page **pages;
 		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
 
-		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
-					flush_page);
+		pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, attrs,
+					handle, flush_page);
 		if (!pages)
 			return NULL;
 
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 72d6182..3569cb6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -190,7 +190,8 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 	kvfree(pages);
 }
 
-static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
+					     struct dma_attrs *attrs)
 {
 	struct page **pages;
 	unsigned int i = 0, array_size = count * sizeof(*pages);
@@ -203,6 +204,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
 	if (!pages)
 		return NULL;
 
+	/* Go straight to 4K chunks if caller says it's OK. */
+	if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
+		order = 0;
+
 	/* IOMMU can map any pages, so himem can also be used here */
 	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
 
@@ -268,6 +273,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * @size: Size of buffer in bytes
  * @gfp: Allocation flags
  * @prot: IOMMU mapping flags
+ * @attrs: DMA attributes flags
  * @handle: Out argument for allocated DMA handle
  * @flush_page: Arch callback which must ensure PAGE_SIZE bytes from the
  *		given VA/PA are visible to the given non-coherent device.
@@ -278,8 +284,8 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
  * Return: Array of struct page pointers describing the buffer,
  *	   or NULL on failure.
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-		gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+		int prot, struct dma_attrs *attrs, dma_addr_t *handle,
 		void (*flush_page)(struct device *, const void *, phys_addr_t))
 {
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
@@ -292,7 +298,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size,
 
 	*handle = DMA_ERROR_CODE;
 
-	pages = __iommu_dma_alloc_pages(count, gfp);
+	pages = __iommu_dma_alloc_pages(count, gfp, attrs);
 	if (!pages)
 		return NULL;
 
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index fc48103..08d9603 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -38,8 +38,8 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent);
  * These implement the bulk of the relevant DMA mapping callbacks, but require
  * the arch code to take care of attributes and cache maintenance
  */
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
-		gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+		int prot, struct dma_attrs *attrs, dma_addr_t *handle,
 		void (*flush_page)(struct device *, const void *, phys_addr_t));
 void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
 		dma_addr_t *handle);
Sometimes it is not worth it for the IOMMU to allocate big chunks.
Here we enable DMA_ATTR_ALLOC_SINGLE_PAGES, which helps avoid
allocating big chunks while the IOMMU is allocating a buffer.

For more information about this attribute, please see Doug's
commit [1].

[1]: https://lkml.org/lkml/2016/1/11/720

Cc: Robin Murphy <robin.murphy@arm.com>
Suggested-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---

Our video drivers may soon use this.

 arch/arm64/mm/dma-mapping.c | 4 ++--
 drivers/iommu/dma-iommu.c   | 14 ++++++++++----
 include/linux/dma-iommu.h   | 4 ++--
 3 files changed, 14 insertions(+), 8 deletions(-)
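As a closing illustration of the "video drivers may soon use this"
note, a minimal sketch of how a caller might opt in via the struct
dma_attrs interface in use at the time; dev and size are assumed to
come from the surrounding (hypothetical) driver code:

	DEFINE_DMA_ATTRS(attrs);
	dma_addr_t dma_handle;
	void *cpu_addr;

	/* Tell the DMA layer that scattered single pages are fine,
	 * so it can skip expensive high-order allocation attempts. */
	dma_set_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, &attrs);

	cpu_addr = dma_alloc_attrs(dev, size, &dma_handle,
				   GFP_KERNEL, &attrs);
	if (!cpu_addr)
		return -ENOMEM;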