
powerpc/mm: Move CMA reservations after initmem_init()

Message ID 20220616120033.1976732-1-mpe@ellerman.id.au (mailing list archive)
State: New

Commit Message

Michael Ellerman June 16, 2022, noon UTC
After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
alignment") there is an error at boot about the KVM CMA reservation
failing, eg:

  kvm_cma_reserve: reserving 6553 MiB for global area
  cma: Failed to reserve 6553 MiB

That makes it impossible to start KVM guests using the hash MMU with
more than 2G of memory, because the VM is unable to allocate a large
enough region for the hash page table, eg:

  $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
  qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory

Aneesh pointed out that this happens because when kvm_cma_reserve() is
called, pageblock_order has not been initialised yet, and is still zero,
causing the checks in cma_init_reserved_mem() against
CMA_MIN_ALIGNMENT_PAGES to fail.

Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
pageblock_order is initialised in sparse_init() which is called from
initmem_init().

Also move the hugetlb CMA reservation.

Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/setup-common.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
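
A minimal standalone sketch of the failing check follows. It is an illustration only, not the kernel's mm/cma.c; the 64K PAGE_SHIFT, the 256K KVM_CMA_CHUNK_ORDER and the post-sparse_init() pageblock_order value are assumed for demonstration, and the program mirrors, rather than reproduces, the check in cma_init_reserved_mem():

/*
 * Standalone sketch, not the kernel's mm/cma.c. PAGE_SHIFT,
 * KVM_CMA_CHUNK_ORDER and the later pageblock_order value are
 * assumed here for illustration only.
 */
#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

#define PAGE_SHIFT		16	/* assumed: 64K pages */
#define KVM_CMA_CHUNK_ORDER	18	/* assumed: 256K HPT chunks */

/* Mirrors the CMA_MIN_ALIGNMENT_PAGES vs order_per_bit check. */
static int cma_check(unsigned long pageblock_order, unsigned long order_per_bit)
{
	/* CMA_MIN_ALIGNMENT_PAGES == pageblock_nr_pages == 1 << pageblock_order */
	unsigned long min_alignment_pages = 1UL << pageblock_order;

	if (!IS_ALIGNED(min_alignment_pages, 1UL << order_per_bit))
		return -22;	/* -EINVAL: reservation rejected */
	return 0;
}

int main(void)
{
	/* kvm_cma_reserve() uses order_per_bit = KVM_CMA_CHUNK_ORDER - PAGE_SHIFT */
	unsigned long order_per_bit = KVM_CMA_CHUNK_ORDER - PAGE_SHIFT;

	/* Before initmem_init()/sparse_init(): pageblock_order is still 0. */
	printf("pageblock_order 0: %d\n", cma_check(0, order_per_bit));

	/* After sparse_init() has initialised it (value assumed). */
	printf("pageblock_order 8: %d\n", cma_check(8, order_per_bit));

	return 0;
}

With pageblock_order still zero, CMA_MIN_ALIGNMENT_PAGES evaluates to 1, which is not a multiple of 1 << order_per_bit, so the reservation is rejected with -EINVAL; once sparse_init() has set pageblock_order, the same check passes.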

Comments

Aneesh Kumar K.V June 16, 2022, 1:07 p.m. UTC | #1
Michael Ellerman <mpe@ellerman.id.au> writes:

> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
>
>   kvm_cma_reserve: reserving 6553 MiB for global area
>   cma: Failed to reserve 6553 MiB
>
> That makes it impossible to start KVM guests using the hash MMU with
> more than 2G of memory, because the VM is unable to allocate a large
> enough region for the hash page table, eg:
>
>   $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
>   qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory
>
> Aneesh pointed out that this happens because when kvm_cma_reserve() is
> called, pageblock_order has not been initialised yet, and is still zero,
> causing the checks in cma_init_reserved_mem() against
> CMA_MIN_ALIGNMENT_PAGES to fail.
>
> Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
> pageblock_order is initialised in sparse_init() which is called from
> initmem_init().
>
> Also move the hugetlb CMA reservation.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  arch/powerpc/kernel/setup-common.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index eb0077b302e2..1a02629ec70b 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
>  	/* Print various info about the machine that has been gathered so far. */
>  	print_system_info();
>  
> -	/* Reserve large chunks of memory for use by CMA for KVM. */
> -	kvm_cma_reserve();
> -
> -	/*  Reserve large chunks of memory for us by CMA for hugetlb */
> -	gigantic_hugetlb_cma_reserve();
> -
>  	klp_init_thread_info(&init_task);
>  
>  	setup_initial_init_mm(_stext, _etext, _edata, _end);
> @@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
>  
>  	initmem_init();
>  
> +	/*
> +	 * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> +	 * be called after initmem_init(), so that pageblock_order is initialised.
> +	 */
> +	kvm_cma_reserve();
> +	gigantic_hugetlb_cma_reserve();
> +
>  	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
>  
>  	if (ppc_md.setup_arch)
> -- 
> 2.35.3

Zi Yan June 16, 2022, 1:33 p.m. UTC | #2
On 16 Jun 2022, at 8:00, Michael Ellerman wrote:

> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
>
>   kvm_cma_reserve: reserving 6553 MiB for global area
>   cma: Failed to reserve 6553 MiB
>
> That makes it impossible to start KVM guests using the hash MMU with
> more than 2G of memory, because the VM is unable to allocate a large
> enough region for the hash page table, eg:
>
>   $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
>   qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory
>
> Aneesh pointed out that this happens because when kvm_cma_reserve() is
> called, pageblock_order has not been initialised yet, and is still zero,
> causing the checks in cma_init_reserved_mem() against
> CMA_MIN_ALIGNMENT_PAGES to fail.
>
> Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
> pageblock_order is initialised in sparse_init() which is called from
> initmem_init().
>
> Also move the hugetlb CMA reservation.
>
> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  arch/powerpc/kernel/setup-common.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index eb0077b302e2..1a02629ec70b 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
>  	/* Print various info about the machine that has been gathered so far. */
>  	print_system_info();
>
> -	/* Reserve large chunks of memory for use by CMA for KVM. */
> -	kvm_cma_reserve();
> -
> -	/*  Reserve large chunks of memory for us by CMA for hugetlb */
> -	gigantic_hugetlb_cma_reserve();
> -
>  	klp_init_thread_info(&init_task);
>
>  	setup_initial_init_mm(_stext, _etext, _edata, _end);
> @@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
>
>  	initmem_init();
>
> +	/*
> +	 * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> +	 * be called after initmem_init(), so that pageblock_order is initialised.
> +	 */
> +	kvm_cma_reserve();
> +	gigantic_hugetlb_cma_reserve();
> +
>  	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
>
>  	if (ppc_md.setup_arch)
> -- 
> 2.35.3

Thank you for the fix.

Reviewed-by: Zi Yan <ziy@nvidia.com>


--
Best Regards,
Yan, Zi

Michael Ellerman June 26, 2022, 12:28 a.m. UTC | #3
On Thu, 16 Jun 2022 22:00:33 +1000, Michael Ellerman wrote:
> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
> 
>   kvm_cma_reserve: reserving 6553 MiB for global area
>   cma: Failed to reserve 6553 MiB
> 
> [...]

Applied to powerpc/fixes.

[1/1] powerpc/mm: Move CMA reservations after initmem_init()
      https://git.kernel.org/powerpc/c/6cf06c17e94f26c290fd3370a5c36514ae15ac43

cheers

Patch

diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index eb0077b302e2..1a02629ec70b 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
 	/* Print various info about the machine that has been gathered so far. */
 	print_system_info();
 
-	/* Reserve large chunks of memory for use by CMA for KVM. */
-	kvm_cma_reserve();
-
-	/*  Reserve large chunks of memory for us by CMA for hugetlb */
-	gigantic_hugetlb_cma_reserve();
-
 	klp_init_thread_info(&init_task);
 
 	setup_initial_init_mm(_stext, _etext, _edata, _end);
@@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
 
 	initmem_init();
 
+	/*
+	 * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+	 * be called after initmem_init(), so that pageblock_order is initialised.
+	 */
+	kvm_cma_reserve();
+	gigantic_hugetlb_cma_reserve();
+
 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 
 	if (ppc_md.setup_arch)