
[RFC,1/2] cma: Fix CMA_MIN_ALIGNMENT_BYTES during early_init

Message ID c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.list@gmail.com (mailing list archive)
State New
Series [RFC,1/2] cma: Fix CMA_MIN_ALIGNMENT_BYTES during early_init

Commit Message

Ritesh Harjani (IBM) Oct. 8, 2024, 1:27 p.m. UTC
During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since
pageblock_order is still zero at that point; it only gets initialized
later, during paging_init(), e.g.
paging_init() -> free_area_init() -> set_pageblock_order().

One such use case is:
early_setup() -> early_init_devtree() -> fadump_reserve_mem()

This causes the CMA memory alignment check in cma_init_reserved_mem()
to be effectively bypassed. Later, cma_activate_area() can then hit a
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area
was not pageblock_order aligned.

Instead of fixing this locally for the fadump case on PowerPC, I
believe it should be fixed in CMA_MIN_ALIGNMENT_BYTES itself.
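
For reference, a minimal user-space sketch of why the check degrades
during early boot. The macro shapes follow include/linux/cma.h,
include/linux/pageblock-flags.h and the alignment check in
cma_init_reserved_mem(); the PAGE_SIZE, base/size and post-init
pageblock_order values are illustrative assumptions, not taken from
the actual configuration:

/* Illustrative only -- not the upstream code; values are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE		(64UL * 1024)	/* assumed 64K pages */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static unsigned long pageblock_order;		/* 0 until set_pageblock_order() */

#define pageblock_nr_pages	(1UL << pageblock_order)
#define CMA_MIN_ALIGNMENT_PAGES	pageblock_nr_pages
#define CMA_MIN_ALIGNMENT_BYTES	(PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

/* same shape as the check in cma_init_reserved_mem() */
static bool cma_alignment_ok(unsigned long base, unsigned long size)
{
	return IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES);
}

int main(void)
{
	unsigned long base = 0x10010UL * PAGE_SIZE;	/* pfn from the splat below */
	unsigned long size = 256UL * 1024 * 1024;

	/* early boot: pageblock_order == 0, only PAGE_SIZE alignment is enforced */
	printf("early boot: align=%lu ok=%d\n", CMA_MIN_ALIGNMENT_BYTES,
	       cma_alignment_ok(base, size));

	pageblock_order = 5;	/* illustrative non-zero value */
	printf("after init: align=%lu ok=%d\n", CMA_MIN_ALIGNMENT_BYTES,
	       cma_alignment_ok(base, size));
	return 0;
}

The early-boot check passes even though the region is not pageblock
aligned, and that is the kind of reservation which later trips the
VM_BUG_ON_PAGE() shown in the trace below.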

<stack trace>
==============
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!

Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c

Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 include/linux/cma.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--
2.46.0

Comments

David Hildenbrand Oct. 8, 2024, 1:50 p.m. UTC | #1
On 08.10.24 15:27, Ritesh Harjani (IBM) wrote:
> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
> since pageblock_order is still zero and it gets initialized
> later during paging_init() e.g.
> paging_init() -> free_area_init() -> set_pageblock_order().
> 
> One such use case is -
> early_setup() -> early_init_devtree() -> fadump_reserve_mem()
> 
> This causes CMA memory alignment check to be bypassed in
> cma_init_reserved_mem(). Then later cma_activate_area() can hit
> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
> area was not pageblock_order aligned.
> 
> Instead of fixing it locally for fadump case on PowerPC, I believe
> this should be fixed for CMA_MIN_ALIGNMENT_BYTES.

I think we should add a way to catch the usage of 
CMA_MIN_ALIGNMENT_BYTES before it actually has meaning (before 
pageblock_order was set) and fix the PowerPC usage by reshuffling the 
code accordingly.
Ritesh Harjani (IBM) Oct. 10, 2024, 3:19 a.m. UTC | #2
David Hildenbrand <david@redhat.com> writes:

> On 08.10.24 15:27, Ritesh Harjani (IBM) wrote:
>> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
>> since pageblock_order is still zero and it gets initialized
>> later during paging_init() e.g.
>> paging_init() -> free_area_init() -> set_pageblock_order().
>> 
>> One such use case is -
>> early_setup() -> early_init_devtree() -> fadump_reserve_mem()
>> 
>> This causes CMA memory alignment check to be bypassed in
>> cma_init_reserved_mem(). Then later cma_activate_area() can hit
>> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
>> area was not pageblock_order aligned.
>> 
>> Instead of fixing it locally for fadump case on PowerPC, I believe
>> this should be fixed for CMA_MIN_ALIGNMENT_BYTES.
>
> I think we should add a way to catch the usage of 
> CMA_MIN_ALIGNMENT_BYTES before it actually has meaning (before 
> pageblock_order was set)

Maybe by enforcing that pageblock_order is not zero at the point where
we do the alignment check, then?

i.e. in cma_init_reserved_mem() 

diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
        if (!size || !memblock_is_region_reserved(base, size))
                return -EINVAL;

+       /*
+        * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+        * needs pageblock_order to be initialized. Let's enforce it.
+        */
+       if (!pageblock_order) {
+               pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+               return -EINVAL;
+       }
+
        /* ensure minimal alignment required by mm core */
        if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
                return -EINVAL;


> and fix the PowerPC usage by reshuffling the 
> code accordingly.

Ok. I will submit a v2 with the above patch included.

Thanks for the review!
-ritesh

Patch

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 9db877506ea8..20abc6561bcd 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -5,6 +5,7 @@ 
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/numa.h>
+#include <linux/minmax.h>

 #ifdef CONFIG_CMA_AREAS
 #define MAX_CMA_AREAS	CONFIG_CMA_AREAS
@@ -17,7 +18,8 @@ 
  * -- can deal with only some pageblocks of a higher-order page being
  *  MIGRATE_CMA, we can use pageblock_nr_pages.
  */
-#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages
+#define CMA_MIN_ALIGNMENT_PAGES \
+	(1ULL << min_not_zero(MAX_PAGE_ORDER, pageblock_order))
 #define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

 struct cma;
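
For clarity, a small stand-alone sketch of what the new definition
evaluates to. The min_not_zero() below re-implements the semantics
documented in include/linux/minmax.h in simplified form, the
MAX_PAGE_ORDER value is an assumed default, and the macro is
parameterized over pb_order purely for illustration (the real one uses
the global pageblock_order). Since pageblock_order is capped at
MAX_PAGE_ORDER once set, the early-boot fallback is at least as strict
as the final alignment:

/* Illustrative only; min_not_zero() simplified, MAX_PAGE_ORDER assumed. */
#include <stdio.h>

#define MAX_PAGE_ORDER	10	/* assumed default */

/* smaller of x and y, ignoring a zero operand (unless both are zero) */
#define min_not_zero(x, y) \
	((x) == 0 ? (y) : ((y) == 0 ? (x) : ((x) < (y) ? (x) : (y))))

#define CMA_MIN_ALIGNMENT_PAGES(pb_order) \
	(1ULL << min_not_zero(MAX_PAGE_ORDER, (pb_order)))

int main(void)
{
	/* early boot: pageblock_order == 0, fall back to 1 << MAX_PAGE_ORDER */
	printf("pageblock_order=0 -> %llu pages\n", CMA_MIN_ALIGNMENT_PAGES(0));

	/* after set_pageblock_order(): behaves like pageblock_nr_pages */
	printf("pageblock_order=5 -> %llu pages\n", CMA_MIN_ALIGNMENT_PAGES(5));
	return 0;
}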