Message ID | 20240404162515.527802-1-fvdl@google.com
---|---
State | New
Series | [1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem
On Thu, 4 Apr 2024 16:25:14 +0000 Frank van der Linden <fvdl@google.com> wrote:

> cma_init_reserved_mem uses IS_ALIGNED to check if the size
> represented by one bit in the cma allocation bitmask is
> aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
>
> However, this is too strict, as this will fail if
> order_per_bit > pageblock_order, which is a valid configuration.
>
> We could check IS_ALIGNED both ways, but since both numbers are
> powers of two, no check is needed at all.

What are the userspace visible effects of this bug?
On Thu, Apr 4, 2024 at 1:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 4 Apr 2024 16:25:14 +0000 Frank van der Linden <fvdl@google.com> wrote:
>
> > cma_init_reserved_mem uses IS_ALIGNED to check if the size
> > represented by one bit in the cma allocation bitmask is
> > aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
> >
> > However, this is too strict, as this will fail if
> > order_per_bit > pageblock_order, which is a valid configuration.
> >
> > We could check IS_ALIGNED both ways, but since both numbers are
> > powers of two, no check is needed at all.
>
> What are the userspace visible effects of this bug?

None that I know of. This bug was exposed because I made the hugetlb
code correctly pass the right order_per_bit argument (see the
accompanying hugetlb cma fix), which then tripped this check when I
backported it to an older kernel, passing an order of 30 (1G hugetlb
page) as order_per_bit.

This actually won't happen for 6.9-rc, since the (intended)
order_per_bit was reduced to HUGETLB_PAGE_ORDER because of hugetlb
page demotion. So, no user visible effects.

However, if the other fix is going to be backported, this one is a
prereq.

- Frank
On Thu, Apr 4, 2024 at 1:05 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 04.04.24 18:25, Frank van der Linden wrote:
> > cma_init_reserved_mem uses IS_ALIGNED to check if the size
> > represented by one bit in the cma allocation bitmask is
> > aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
>
> I recall the important part is that our area always covers full
> pageblocks (CMA_MIN_ALIGNMENT_BYTES), because we cannot have "partial
> CMA" pageblocks.
>
> Internally, allocating from multiple pageblocks should just work.
>
> It's late in Germany, hopefully I am not missing something
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
> > However, this is too strict, as this will fail if
> > order_per_bit > pageblock_order, which is a valid configuration.
> >
> > We could check IS_ALIGNED both ways, but since both numbers are
> > powers of two, no check is needed at all.
> >
> > Signed-off-by: Frank van der Linden <fvdl@google.com>
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: David Hildenbrand <david@redhat.com>
> > Fixes: de9e14eebf33 ("drivers: dma-contiguous: add initialization from device tree")
>
> Is there a real setup/BUG we are fixing? Why did we not stumble over
> that earlier?
>
> If so, please describe that in the patch description.

Nobody stumbled over it because the only user of CMA that should have
passed in an order_per_bit large enough to trigger this was
hugetlb_cma. However, because of a bug, it didn't :) When I fixed
that, I noticed that this check fired.

- Frank
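David's point above, that a CMA area must consist of whole pageblocks and that the remaining base/size alignment check already guarantees this, can be illustrated with a small userspace sketch. The values are assumed for illustration only (4K pages, pageblock_order = 9, order_per_bit = 18, a 4G area); this is not kernel code, and IS_ALIGNED is a simplified stand-in for the kernel macro.

```c
#include <stdio.h>

/* Userspace stand-in for the kernel macro, for illustration only. */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned long pageblock_pages = 1UL << 9;	/* assumed CMA_MIN_ALIGNMENT_PAGES */
	unsigned int  order_per_bit   = 18;		/* one bitmap bit = 2^18 pages (1G) */
	unsigned long size_pages      = 4UL << 18;	/* a 4G CMA area, in pages */

	/* The removed check: fails whenever order_per_bit > pageblock_order ... */
	printf("removed check passes: %d\n",
	       IS_ALIGNED(pageblock_pages, 1UL << order_per_bit));

	/* ... yet the area still covers only full pageblocks, which is the
	 * property that matters and is still enforced by the remaining
	 * CMA_MIN_ALIGNMENT_BYTES check on base and size. */
	printf("covers full pageblocks: %d\n",
	       IS_ALIGNED(size_pages, pageblock_pages));
	return 0;
}
```

With these example values the output is 0 and 1: the old check rejects a configuration that in fact still satisfies the full-pageblock requirement.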
diff --git a/mm/cma.c b/mm/cma.c
index 01f5a8f71ddf..3e9724716bad 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,10 +182,6 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
-	/* alignment should be aligned with order_per_bit */
-	if (!IS_ALIGNED(CMA_MIN_ALIGNMENT_PAGES, 1 << order_per_bit))
-		return -EINVAL;
-
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
cma_init_reserved_mem uses IS_ALIGNED to check if the size
represented by one bit in the cma allocation bitmask is
aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).

However, this is too strict, as this will fail if
order_per_bit > pageblock_order, which is a valid configuration.

We could check IS_ALIGNED both ways, but since both numbers are
powers of two, no check is needed at all.

Signed-off-by: Frank van der Linden <fvdl@google.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: David Hildenbrand <david@redhat.com>
Fixes: de9e14eebf33 ("drivers: dma-contiguous: add initialization from device tree")
---
 mm/cma.c | 4 ----
 1 file changed, 4 deletions(-)
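The "no check is needed at all" argument rests on both quantities being powers of two, so one always divides the other. A throwaway sketch (not part of the patch) that exhaustively verifies this property:

```c
#include <assert.h>

int main(void)
{
	/* For any powers of two a = 1 << m and b = 1 << n, one divides the
	 * other, so an alignment check between them can never catch a real
	 * misconfiguration -- it only encodes which of the two is larger. */
	for (unsigned int m = 0; m < 32; m++) {
		for (unsigned int n = 0; n < 32; n++) {
			unsigned long a = 1UL << m, b = 1UL << n;

			assert(a % b == 0 || b % a == 0);
		}
	}
	return 0;
}
```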