mm/cma.c: use exact_nid true to fix possible per-numa cma leak

Message ID 20200628074345.27228-1-song.bao.hua@hisilicon.com (mailing list archive)
State New, archived
Series mm/cma.c: use exact_nid true to fix possible per-numa cma leak

Commit Message

Song Bao Hua (Barry Song) June 28, 2020, 7:43 a.m. UTC
Calling cma_declare_contiguous_nid() with exact_nid=false for per-numa
reservation can easily cause cma leaks and various confusion.
For example, mm/hugetlb.c tries to reserve per-numa cma for gigantic
pages, but it can easily leak cma and confuse users when the system has
memoryless nodes.

Suppose the system has 4 numa nodes and only node0 has memory. If we set
hugetlb_cma=4G in bootargs, mm/hugetlb.c will get 4 cma areas for the 4
numa nodes. Since exact_nid=false in the current code, all 4 numa nodes
will successfully get cma from node0, but hugetlb_cma[1] to hugetlb_cma[3]
will never be available to hugepage as mm/hugetlb.c will only allocate
memory from hugetlb_cma[0].

Now suppose the system has 4 numa nodes where only node0 and node2 have
memory and the other nodes have none. If we set hugetlb_cma=4G in
bootargs, mm/hugetlb.c will again get 4 cma areas for the 4 numa nodes.
Since exact_nid=false in the current code, all 4 numa nodes will
successfully get cma from node0 or node2, but hugetlb_cma[1] and
hugetlb_cma[3] will never be available to hugepage as mm/hugetlb.c will
only allocate memory from hugetlb_cma[0] and hugetlb_cma[2].
This causes a permanent leak of the cma areas which were supposed to
serve the memoryless nodes.
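
For context, the per-node reservation loop added by cf11e85fc08c to
mm/hugetlb.c looks roughly like this (a simplified sketch of the
v5.7-era code, with error handling and size checks omitted, not the
exact source):

    void __init hugetlb_cma_reserve(int order)
    {
    	unsigned long size, reserved = 0;
    	unsigned long per_node = hugetlb_cma_size / nr_online_nodes;
    	int nid;

    	/* walks online nodes, not nodes with memory, so
    	 * memoryless-but-online nodes get a cma area too */
    	for_each_node_state(nid, N_ONLINE) {
    		size = min(per_node, hugetlb_cma_size - reserved);
    		size = round_up(size, PAGE_SIZE << order);

    		/*
    		 * This ends up in memblock_alloc_range_nid() with
    		 * exact_nid=false, so a memoryless node silently takes
    		 * its reservation from another node's memory.
    		 */
    		if (!cma_declare_contiguous_nid(0, size, 0,
    						PAGE_SIZE << order, 0, false,
    						"hugetlb", &hugetlb_cma[nid],
    						nid))
    			reserved += size;
    	}
    }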

Of course we could work around the issue by letting mm/hugetlb.c scan
all cma areas in alloc_gigantic_page() even when node_mask includes
node0 only; that way, when node_mask includes node0 only, we could still
get pages from hugetlb_cma[1] to hugetlb_cma[3]. But this would cause a
kernel crash in free_gigantic_page() when it tries to free a page by:
cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)
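
To see why that workaround would crash, compare the allocation and free
paths (simplified fragments based on the cf11e85fc08c-era mm/hugetlb.c;
treat these as a sketch rather than the exact source):

    /* alloc: if this loop scanned all cma areas, a page for node0
     * could come from hugetlb_cma[1], whose backing memory physically
     * lives on node0 because of the exact_nid=false fallback */
    for_each_node_mask(node, *nodemask) {
    	if (!hugetlb_cma[node])
    		continue;
    	page = cma_alloc(hugetlb_cma[node], nr_pages,
    			 huge_page_order(h), true);
    	if (page)
    		return page;
    }

    /* free: indexes the cma area by the page's actual node. A page
     * handed out by hugetlb_cma[1] but residing on node0 is released
     * against hugetlb_cma[0]; cma_release() cannot find it there,
     * returns false, and free_contig_range() then operates on
     * cma-reserved memory */
    static void free_gigantic_page(struct page *page, unsigned int order)
    {
    	if (IS_ENABLED(CONFIG_CMA) &&
    	    cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
    		return;
    	free_contig_range(page_to_pfn(page), 1 << order);
    }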

On the other hand, exact_nid=false doesn't consider numa distance, so it
might not be very useful to leverage cma areas on remote nodes anyway.
I feel it is much simpler to make exact_nid true and keep everything
clear. After that, memoryless nodes won't be able to reserve per-numa
CMA from other nodes which do have memory.
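
The behaviour being changed lives in memblock_alloc_range_nid(), which
roughly does the following (a condensed sketch of the v5.8-era
mm/memblock.c fallback logic, with the mirror-memory retry omitted):

    phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
    		phys_addr_t align, phys_addr_t start, phys_addr_t end,
    		int nid, bool exact_nid)
    {
    	enum memblock_flags flags = choose_memblock_flags();
    	phys_addr_t found;

    	/* first try to satisfy the request on the requested node */
    	found = memblock_find_in_range_node(size, align, start, end,
    					    nid, flags);
    	if (found && !memblock_reserve(found, size))
    		return found;

    	/*
    	 * Only when exact_nid is false: silently retry on any node.
    	 * Passing true from cma_declare_contiguous_nid() is what stops
    	 * a memoryless node from grabbing (and leaking) another node's
    	 * memory.
    	 */
    	if (nid != NUMA_NO_NODE && !exact_nid) {
    		found = memblock_find_in_range_node(size, align, start,
    						    end, NUMA_NO_NODE,
    						    flags);
    		if (found && !memblock_reserve(found, size))
    			return found;
    	}

    	return 0;
    }

With exact_nid=true the second lookup is skipped, so the reservation
either lands on the requested node or fails cleanly.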

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Aslan Bakirov <aslan@fb.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andreas Schaufler <andreas.schaufler@gmx.de>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 mm/cma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Roman Gushchin June 30, 2020, 7:08 p.m. UTC | #1
On Sun, Jun 28, 2020 at 07:43:45PM +1200, Barry Song wrote:
> Calling cma_declare_contiguous_nid() with exact_nid=false for per-numa
> reservation can easily cause cma leaks and various confusion.
> 
> [...]

Totally agree.

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!

Andrew Morton July 1, 2020, 2:09 a.m. UTC | #2
On Tue, 30 Jun 2020 12:08:25 -0700 Roman Gushchin <guro@fb.com> wrote:

> On Sun, Jun 28, 2020 at 07:43:45PM +1200, Barry Song wrote:
> > Calling cma_declare_contiguous_nid() with exact_nid=false for per-numa
> > reservation can easily cause cma leaks and various confusion.
> > 
> > [...]
> 
> Totally agree.
> 
> Acked-by: Roman Gushchin <guro@fb.com>

Do we feel this merits a cc:stable?
Roman Gushchin July 1, 2020, 2:23 a.m. UTC | #3
On Tue, Jun 30, 2020 at 07:09:31PM -0700, Andrew Morton wrote:
> On Tue, 30 Jun 2020 12:08:25 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > [...]
> > 
> > Acked-by: Roman Gushchin <guro@fb.com>
> 
> Do we feel this merits a cc:stable?

It would be nice.

Thanks!

Patch

diff --git a/mm/cma.c b/mm/cma.c
index b24151fa2101..f472f398026f 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -338,13 +338,13 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 		 */
 		if (base < highmem_start && limit > highmem_start) {
 			addr = memblock_alloc_range_nid(size, alignment,
-					highmem_start, limit, nid, false);
+					highmem_start, limit, nid, true);
 			limit = highmem_start;
 		}
 
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment, base,
-					limit, nid, false);
+					limit, nid, true);
 			if (!addr) {
 				ret = -ENOMEM;
 				goto err;