Message ID | daef7f8c134d989c55636a5790d8c0fcaca1bae3.1661205687.git.robin.murphy@arm.com (mailing list archive)
---|---
State | New, archived
Series | drm/panfrost: Update io-pgtable API
> -static size_t get_pgsize(u64 addr, size_t size)
> +static size_t get_pgsize(u64 addr, size_t size, size_t *count)
> {
> -        if (addr & (SZ_2M - 1) || size < SZ_2M)
> -                return SZ_4K;
> +        size_t blk_offset = -addr % SZ_2M;

addr is unsigned. if this is correct, it's magic.
On 2022-08-23 03:51, Alyssa Rosenzweig wrote:
>> -static size_t get_pgsize(u64 addr, size_t size)
>> +static size_t get_pgsize(u64 addr, size_t size, size_t *count)
>> {
>> -        if (addr & (SZ_2M - 1) || size < SZ_2M)
>> -                return SZ_4K;
>> +        size_t blk_offset = -addr % SZ_2M;
>
> addr is unsigned. if this is correct, it's magic.

Eh, it's just well-defined unsigned integer overflow. Take "SZ_2M - (addr % SZ_2M)", realise the first term can be anything that's zero modulo SZ_2M, including zero, then also that the operations can be done in either order to give the same result, and there you go.

Cheers,
Robin.
On Tue, Aug 23, 2022 at 11:42:33AM +0100, Robin Murphy wrote:
> On 2022-08-23 03:51, Alyssa Rosenzweig wrote:
> > > -static size_t get_pgsize(u64 addr, size_t size)
> > > +static size_t get_pgsize(u64 addr, size_t size, size_t *count)
> > > {
> > > -        if (addr & (SZ_2M - 1) || size < SZ_2M)
> > > -                return SZ_4K;
> > > +        size_t blk_offset = -addr % SZ_2M;
> >
> > addr is unsigned. if this is correct, it's magic.
>
> Eh, it's just well-defined unsigned integer overflow. Take "SZ_2M - (addr %
> SZ_2M)", realise the first term can be anything that's zero modulo SZ_2M,
> including zero, then also that the operations can be done in either order to
> give the same result, and there you go.

Shrug. It still seems voodoo to me, but if this is normal kernel style I'm not going to complain; Acked-by for the patch regardless.
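[Editor's note: for readers puzzling over the same expression, the following standalone C sketch (an editorial illustration, not part of the thread) demonstrates the identity Robin describes. For an unsigned addr, -addr wraps to 2^64 - addr, and 2^64 is itself a multiple of SZ_2M, so -addr % SZ_2M is the distance from addr up to the next 2MB boundary, and zero when addr is already aligned.]

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_2M 0x200000ULL

int main(void)
{
        uint64_t addrs[] = { 0x0, 0x1000, 0x1ff000, 0x200000, 0x3a000000 };

        for (size_t i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++) {
                uint64_t addr = addrs[i];
                /* -addr wraps to 2^64 - addr, and 2^64 is a multiple of
                 * SZ_2M, so the remainder is the offset up to the next
                 * 2MB boundary -- zero when addr is already aligned. */
                uint64_t blk_offset = -addr % SZ_2M;

                assert(blk_offset == (SZ_2M - addr % SZ_2M) % SZ_2M);
                printf("addr=%#llx -> blk_offset=%#llx\n",
                       (unsigned long long)addr,
                       (unsigned long long)blk_offset);
        }
        return 0;
}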
On 22/08/2022 23:01, Robin Murphy wrote:
> Convert to io-pgtable's bulk {map,unmap}_pages() APIs, to help the old
> single-page interfaces eventually go away. Unmapping heap BOs still
> wants to be done a page at a time, but everything else can get the full
> benefit of the more efficient interface.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Looks correct to me. Although the voodoo magic does take a little bit of
figuring out ;)

Reviewed-by: Steven Price <steven.price@arm.com>

I'll push to drm-misc-next.

Thanks,
Steve

> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 40 +++++++++++++++----------
>  1 file changed, 25 insertions(+), 15 deletions(-)
> [...]
On 8/23/22 01:01, Robin Murphy wrote:
> Convert to io-pgtable's bulk {map,unmap}_pages() APIs, to help the old
> single-page interfaces eventually go away. Unmapping heap BOs still
> wants to be done a page at a time, but everything else can get the full
> benefit of the more efficient interface.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
> [...]
> @@ -344,15 +352,17 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
>                          mapping->mmu->as, iova, len);
>
>          while (unmapped_len < len) {
> -                size_t unmapped_page;
> -                size_t pgsize = get_pgsize(iova, len - unmapped_len);
> +                size_t unmapped_page, pgcount;
> +                size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount);
>
> -                if (ops->iova_to_phys(ops, iova)) {
> -                        unmapped_page = ops->unmap(ops, iova, pgsize, NULL);
> -                        WARN_ON(unmapped_page != pgsize);
> +                if (bo->is_heap)
> +                        pgcount = 1;
> +                if (!bo->is_heap || ops->iova_to_phys(ops, iova)) {
> +                        unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL);
> +                        WARN_ON(unmapped_page != pgsize * pgcount);

This patch causes this WARN_ON to trigger. It doesn't happen all the
time; I see that the whole unmapped area is mapped. Initially, I thought
that this happens because it tries to unmap a partially mapped range,
but I checked that ops->iova_to_phys() returns an address for all 4k
chunks.

For example, pgsize * pgcount = 0x8000000, while the returned
unmapped_page = 0x6000000.

I don't see this problem with this patch reverted. This is using today's
linux-next. Any ideas?
On 2022-11-04 20:11, Dmitry Osipenko wrote:
> On 8/23/22 01:01, Robin Murphy wrote:
>> Convert to io-pgtable's bulk {map,unmap}_pages() APIs, to help the old
>> single-page interfaces eventually go away. Unmapping heap BOs still
>> wants to be done a page at a time, but everything else can get the full
>> benefit of the more efficient interface.
>>
>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>> [...]
>
> This patch causes this WARN_ON to trigger. It doesn't happen all the
> time; I see that the whole unmapped area is mapped. Initially, I thought
> that this happens because it tries to unmap a partially mapped range,
> but I checked that ops->iova_to_phys() returns an address for all 4k
> chunks.
>
> For example, pgsize * pgcount = 0x8000000, while the returned
> unmapped_page = 0x6000000.
>
> I don't see this problem with this patch reverted. This is using today's
> linux-next. Any ideas?

What's the base IOVA in such a case? I'm wondering if the truncated size
lines up to any interesting boundary. Presumably you're not seeing any
additional warnings from io-pgtable itself?

Thanks,
Robin.
On 11/4/22 23:37, Robin Murphy wrote:
> On 2022-11-04 20:11, Dmitry Osipenko wrote:
>> On 8/23/22 01:01, Robin Murphy wrote:
>>> [...]
>>
>> For example, pgsize * pgcount = 0x8000000, while the returned
>> unmapped_page = 0x6000000.
>>
>> I don't see this problem with this patch reverted. This is using today's
>> linux-next. Any ideas?
>
> What's the base IOVA in such a case? I'm wondering if the truncated size
> lines up to any interesting boundary. Presumably you're not seeing any
> additional warnings from io-pgtable itself?

No warnings from io-pgtable. It succeeds for 0x32000000 and fails for
0x3a000000 using the same size 0x8000000. It actually fails only for
0x3a000000, as far as I see from my logs. Perhaps it indeed has something
to do with the boundary.
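[Editor's note: working Dmitry's numbers (an editorial check, not from the thread) points straight at the 1GB mark: 0x3a000000 + 0x8000000 = 0x42000000 crosses the boundary at 0x40000000, while 0x32000000 + 0x8000000 = 0x3a000000 stays below it. The short unmap of 0x6000000 is exactly 0x40000000 - 0x3a000000, the distance from the failing base IOVA to that boundary. A quick C check:]

#include <stdint.h>
#include <stdio.h>

#define SZ_1G 0x40000000ULL

int main(void)
{
        /* The two ranges from the report: same size, different base IOVA. */
        struct { uint64_t iova; uint64_t len; } cases[] = {
                { 0x32000000, 0x8000000 },  /* unmaps fine   */
                { 0x3a000000, 0x8000000 },  /* trips WARN_ON */
        };

        for (int i = 0; i < 2; i++) {
                uint64_t end = cases[i].iova + cases[i].len;
                int crosses = cases[i].iova / SZ_1G != (end - 1) / SZ_1G;

                printf("%#llx..%#llx crosses 1GB: %s (to boundary: %#llx)\n",
                       (unsigned long long)cases[i].iova,
                       (unsigned long long)end,
                       crosses ? "yes" : "no",
                       (unsigned long long)(SZ_1G - cases[i].iova % SZ_1G));
        }
        return 0;
}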
On 2022-11-04 20:48, Dmitry Osipenko wrote:
> On 11/4/22 23:37, Robin Murphy wrote:
>> On 2022-11-04 20:11, Dmitry Osipenko wrote:
>>> [...]
>>
>> What's the base IOVA in such a case? I'm wondering if the truncated size
>> lines up to any interesting boundary. Presumably you're not seeing any
>> additional warnings from io-pgtable itself?
>
> No warnings from io-pgtable. It succeeds for 0x32000000 and fails for
> 0x3a000000 using the same size 0x8000000. It actually fails only for
> 0x3a000000, as far as I see from my logs. Perhaps it indeed has something
> to do with the boundary.

Bleh, indeed even though we don't use 1GB block mappings, we still need
to split at 1GB boundaries to match what the IOMMU API will do, and thus
what io-pgtable expects. I guess I hadn't really considered that we might
ever have that much graphics memory in play at once...

The fix probably looks like this:

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index e246d914e7f6..6abc7d3726dd 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -256,7 +256,9 @@ static size_t get_pgsize(u64 addr, size_t size, size_t *count)
                 *count = min_not_zero(blk_offset, size) / SZ_4K;
                 return SZ_4K;
         }
-        *count = size / SZ_2M;
+
+        blk_offset = -addr % SZ_1G;
+        *count = min_not_zero(blk_offset, size) / SZ_2M;
         return SZ_2M;
 }

Thanks,
Robin.
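[Editor's note: combining the original helper with this fix, the resulting logic would look roughly like the following userspace sketch (an editorial reconstruction; min_not_zero() is open-coded here because it is a kernel macro, and the SZ_* constants are spelled out):]

#include <stddef.h>
#include <stdint.h>

#define SZ_4K 0x1000UL
#define SZ_2M 0x200000UL
#define SZ_1G 0x40000000UL

/* Open-coded stand-in for the kernel's min_not_zero() macro. */
static size_t min_not_zero(size_t a, size_t b)
{
        if (a == 0)
                return b;
        if (b == 0)
                return a;
        return a < b ? a : b;
}

/*
 * Pick the largest page size usable at addr, and how many pages of that
 * size fit before the range ends or crosses the next alignment boundary.
 */
static size_t get_pgsize(uint64_t addr, size_t size, size_t *count)
{
        /* Offset up to the next 2MB boundary; 0 if already aligned. */
        size_t blk_offset = (size_t)(-addr % SZ_2M);

        if (blk_offset || size < SZ_2M) {
                *count = min_not_zero(blk_offset, size) / SZ_4K;
                return SZ_4K;
        }

        /* 2MB-aligned: batch 2MB blocks, but stop at the 1GB boundary so
         * io-pgtable never sees a call spanning two top-level tables. */
        blk_offset = (size_t)(-addr % SZ_1G);
        *count = min_not_zero(blk_offset, size) / SZ_2M;
        return SZ_2M;
}

[For Dmitry's failing case (addr = 0x3a000000, size = 0x8000000) this now returns SZ_2M with *count = 48, stopping exactly at the 1GB boundary; the caller's loop then continues from 0x40000000.]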
On 11/7/22 19:50, Robin Murphy wrote:
> On 2022-11-04 20:48, Dmitry Osipenko wrote:
>> On 11/4/22 23:37, Robin Murphy wrote:
>>> [...]
>>
>> No warnings from io-pgtable. It succeeds for 0x32000000 and fails for
>> 0x3a000000 using the same size 0x8000000. It actually fails only for
>> 0x3a000000, as far as I see from my logs. Perhaps it indeed has something
>> to do with the boundary.
>
> Bleh, indeed even though we don't use 1GB block mappings, we still need
> to split at 1GB boundaries to match what the IOMMU API will do, and thus
> what io-pgtable expects. I guess I hadn't really considered that we might
> ever have that much graphics memory in play at once...
>
> The fix probably looks like this:
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index e246d914e7f6..6abc7d3726dd 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -256,7 +256,9 @@ static size_t get_pgsize(u64 addr, size_t size, size_t *count)
>                  *count = min_not_zero(blk_offset, size) / SZ_4K;
>                  return SZ_4K;
>          }
> -        *count = size / SZ_2M;
> +
> +        blk_offset = -addr % SZ_1G;
> +        *count = min_not_zero(blk_offset, size) / SZ_2M;
>          return SZ_2M;
>  }

The warning is gone with this change applied, thank you!

Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

It would be great if you could add an explanatory comment to the code when you send a proper patch.
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b285a8001b1d..e246d914e7f6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -248,11 +248,15 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev)
         mmu_write(pfdev, MMU_INT_MASK, ~0);
 }

-static size_t get_pgsize(u64 addr, size_t size)
+static size_t get_pgsize(u64 addr, size_t size, size_t *count)
 {
-        if (addr & (SZ_2M - 1) || size < SZ_2M)
-                return SZ_4K;
+        size_t blk_offset = -addr % SZ_2M;

+        if (blk_offset || size < SZ_2M) {
+                *count = min_not_zero(blk_offset, size) / SZ_4K;
+                return SZ_4K;
+        }
+        *count = size / SZ_2M;
         return SZ_2M;
 }

@@ -287,12 +291,16 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
         dev_dbg(pfdev->dev, "map: as=%d, iova=%llx, paddr=%lx, len=%zx", mmu->as, iova, paddr, len);

         while (len) {
-                size_t pgsize = get_pgsize(iova | paddr, len);
+                size_t pgcount, mapped = 0;
+                size_t pgsize = get_pgsize(iova | paddr, len, &pgcount);

-                ops->map(ops, iova, paddr, pgsize, prot, GFP_KERNEL);
-                iova += pgsize;
-                paddr += pgsize;
-                len -= pgsize;
+                ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot,
+                               GFP_KERNEL, &mapped);
+                /* Don't get stuck if things have gone wrong */
+                mapped = max(mapped, pgsize);
+                iova += mapped;
+                paddr += mapped;
+                len -= mapped;
         }
 }

@@ -344,15 +352,17 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
                         mapping->mmu->as, iova, len);

         while (unmapped_len < len) {
-                size_t unmapped_page;
-                size_t pgsize = get_pgsize(iova, len - unmapped_len);
+                size_t unmapped_page, pgcount;
+                size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount);

-                if (ops->iova_to_phys(ops, iova)) {
-                        unmapped_page = ops->unmap(ops, iova, pgsize, NULL);
-                        WARN_ON(unmapped_page != pgsize);
+                if (bo->is_heap)
+                        pgcount = 1;
+                if (!bo->is_heap || ops->iova_to_phys(ops, iova)) {
+                        unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL);
+                        WARN_ON(unmapped_page != pgsize * pgcount);
                 }
-                iova += pgsize;
-                unmapped_len += pgsize;
+                iova += pgsize * pgcount;
+                unmapped_len += pgsize * pgcount;
         }

         panfrost_mmu_flush_range(pfdev, mapping->mmu,
Convert to io-pgtable's bulk {map,unmap}_pages() APIs, to help the old
single-page interfaces eventually go away. Unmapping heap BOs still
wants to be done a page at a time, but everything else can get the full
benefit of the more efficient interface.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 40 +++++++++++++++----------
 1 file changed, 25 insertions(+), 15 deletions(-)
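[Editor's note: one aside on the patch above. The mapped = max(mapped, pgsize) guard in mmu_map_sg() guarantees forward progress even if ops->map_pages() maps less than requested, or nothing at all. A toy userspace sketch of that loop shape, using a hypothetical stub in place of the real io-pgtable call:]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_4K 0x1000UL

/* Hypothetical stub standing in for ops->map_pages(): it "fails"
 * partway through, reporting only half the requested pages as mapped. */
static void fake_map_pages(size_t pgsize, size_t pgcount, size_t *mapped)
{
        *mapped = (pgcount / 2) * pgsize;
}

int main(void)
{
        uint64_t iova = 0x32000000;
        size_t len = 8 * SZ_4K;

        while (len) {
                size_t mapped = 0;

                fake_map_pages(SZ_4K, len / SZ_4K, &mapped);
                /* Don't get stuck if things have gone wrong: always advance
                 * by at least one page, mirroring the patch's max() guard. */
                if (mapped < SZ_4K)
                        mapped = SZ_4K;
                iova += mapped;
                len -= mapped;
                printf("iova now %#llx, %#zx left\n",
                       (unsigned long long)iova, len);
        }
        return 0;
}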