| Message ID | 20241017064449.5235-1-suhua1@kingsoft.com (mailing list archive) |
|---|---|
| State | New |
| Series | memblock: Uniform initialization all reserved pages to MIGRATE_MOVABLE |
On Thu, Oct 17, 2024 at 02:44:49PM +0800, suhua wrote:
>Currently when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the reserved
>pages are initialized to MIGRATE_MOVABLE by default in memmap_init.
>
>Reserved memory mainly stores the metadata of struct page. When
>HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=Y and hugepages are allocated,
>the memory occupied by the struct page metadata will be freed.
>
>Before this patch:
>when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the freed memory was
>placed on the Movable list;
>When CONFIG_DEFERRED_STRUCT_PAGE_INIT=Y, the freed memory was placed on
>the Unmovable list.
>
>After this patch, the freed memory is placed on the Movable list
>regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is set.
>
>Eg:
>echo 500000 > /proc/sys/vm/nr_hugepages
>cat /proc/pagetypeinfo
>
>before:
>Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
>…
>Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
>Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
>Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
>Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
>Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
>Unmovable ≈ 15GB
>
>after:
>Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
>…
>Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
>Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
>Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
>Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
>Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
>
>Signed-off-by: suhua <suhua1@kingsoft.com>

Looks good to me.

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

>---
> mm/mm_init.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
>diff --git a/mm/mm_init.c b/mm/mm_init.c
>index 4ba5607aaf19..6dbf2df23eee 100644
>--- a/mm/mm_init.c
>+++ b/mm/mm_init.c
>@@ -722,6 +722,10 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
> 		if (zone_spans_pfn(zone, pfn))
> 			break;
> 	}
>+
>+	if (pageblock_aligned(pfn))
>+		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
>+
> 	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> }
> #else
>--
>2.34.1
>
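[Editor's note] The new hunk only acts on pageblock-aligned pfns because the allocator records the migratetype once per pageblock, not per page. A minimal Python model of that bookkeeping, assuming pageblock_order = 9 (the usual x86-64 value with hugetlb); the kernel's real value is arch- and config-dependent:

```python
# Toy model of the per-pageblock migratetype marking done by the patch.
# Assumption: pageblock_order = 9, so one pageblock spans 512 base pages.
PAGEBLOCK_ORDER = 9
PAGEBLOCK_NR_PAGES = 1 << PAGEBLOCK_ORDER  # 512

def pageblock_aligned(pfn: int) -> bool:
    """Mirrors the kernel's IS_ALIGNED(pfn, pageblock_nr_pages) check."""
    return pfn % PAGEBLOCK_NR_PAGES == 0

# Migratetype is stored once per pageblock, so writing it at the aligned
# pfn of each block covers every page in that block.
block_migratetype = {}
for pfn in range(4 * PAGEBLOCK_NR_PAGES):  # walk a 4-pageblock reserved range
    if pageblock_aligned(pfn):
        block_migratetype[pfn >> PAGEBLOCK_ORDER] = "MIGRATE_MOVABLE"

print(block_migratetype)  # one MIGRATE_MOVABLE entry per pageblock
```

This is why the patch needs no loop of its own: init_reserved_page() already runs for every reserved pfn, and the alignment test makes the pageblock bitmap write happen exactly once per block.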
On Thu, Oct 17, 2024 at 02:44:49PM +0800, suhua wrote:
> Subject: memblock: Uniform initialization all reserved pages to MIGRATE_MOVABLE

I'd suggest:

	memblock: uniformly initialize all reserved pages to MIGRATE_MOVABLE

> Currently when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the reserved
> pages are initialized to MIGRATE_MOVABLE by default in memmap_init.
>
> Reserved memory mainly stores the metadata of struct page. When
> HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=Y and hugepages are allocated,
> the memory occupied by the struct page metadata will be freed.

With HVO it is not the struct page metadata that is freed, but rather the
pages used for vmemmap.

> Before this patch:
> when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the freed memory was
> placed on the Movable list;
> When CONFIG_DEFERRED_STRUCT_PAGE_INIT=Y, the freed memory was placed on
> the Unmovable list.
>
> After this patch, the freed memory is placed on the Movable list
> regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is set.
>
> Eg:

Please add back the description of the hardware used for this test and how
many hugepages were allocated at boot.

> echo 500000 > /proc/sys/vm/nr_hugepages
> cat /proc/pagetypeinfo
>
> before:
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> …
> Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
> Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
> Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> Unmovable ≈ 15GB
>
> after:
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> …
> Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
> Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
> Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
>
> Signed-off-by: suhua <suhua1@kingsoft.com>

checkpatch.pl gives this warning:

WARNING: From:/Signed-off-by: email address mismatch: 'From: suhua <suhua.tanke@gmail.com>' != 'Signed-off-by: suhua <suhua1@kingsoft.com>'

Please update the commit authorship or the Signed-off-by to match.

Also, Signed-off-by should use a known identity, i.e. Name Lastname.

> ---
>  mm/mm_init.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 4ba5607aaf19..6dbf2df23eee 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -722,6 +722,10 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
>  		if (zone_spans_pfn(zone, pfn))
>  			break;
>  	}
> +
> +	if (pageblock_aligned(pfn))
> +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
> +
>  	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
>  }
>  #else
> --
> 2.34.1
>

--
Sincerely yours,
Mike.
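[Editor's note] The "Unmovable ≈ 15GB" figure in the quoted output can be reproduced from the per-order counts: each order-n entry is a block of 2^n contiguous base pages. A quick sketch, assuming 4 KiB base pages:

```python
# Unmovable free-page counts from the "before" output, orders 0..10,
# assuming 4 KiB base pages.
counts = [51, 2, 1, 28, 53, 35, 35, 43, 40, 69, 3852]

pages = sum(c << order for order, c in enumerate(counts))  # order n = 2**n pages
gib = pages * 4096 / (1 << 30)
print(f"Unmovable: {pages} base pages ≈ {gib:.1f} GiB")  # ≈ 15.3 GiB
```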
Mike Rapoport <rppt@kernel.org> wrote on Sunday, October 20, 2024 at 15:15:

> On Thu, Oct 17, 2024 at 02:44:49PM +0800, suhua wrote:
> > Subject: memblock: Uniform initialization all reserved pages to MIGRATE_MOVABLE
>
> I'd suggest:
>
> 	memblock: uniformly initialize all reserved pages to MIGRATE_MOVABLE

Thanks for the correction.

> > Currently when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the reserved
> > pages are initialized to MIGRATE_MOVABLE by default in memmap_init.
> >
> > Reserved memory mainly stores the metadata of struct page. When
> > HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=Y and hugepages are allocated,
> > the memory occupied by the struct page metadata will be freed.
>
> With HVO it is not the struct page metadata that is freed, but rather the
> pages used for vmemmap.

Yes, I will update the description.

> > Before this patch:
> > when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the freed memory was
> > placed on the Movable list;
> > When CONFIG_DEFERRED_STRUCT_PAGE_INIT=Y, the freed memory was placed on
> > the Unmovable list.
> >
> > After this patch, the freed memory is placed on the Movable list
> > regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is set.
> >
> > Eg:
>
> Please add back the description of the hardware used for this test and how
> many hugepages were allocated at boot.

Well, the new patch will add this information.

> > echo 500000 > /proc/sys/vm/nr_hugepages
> > cat /proc/pagetypeinfo
> >
> > before:
> > Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> > …
> > Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
> > Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
> > Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> > Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> > Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> > Unmovable ≈ 15GB
> >
> > after:
> > Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> > …
> > Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
> > Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
> > Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> > Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> > Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> >
> > Signed-off-by: suhua <suhua1@kingsoft.com>
>
> checkpatch.pl gives this warning:
>
> WARNING: From:/Signed-off-by: email address mismatch: 'From: suhua <suhua.tanke@gmail.com>' != 'Signed-off-by: suhua <suhua1@kingsoft.com>'
>
> Please update the commit authorship or the Signed-off-by to match.
>
> Also, Signed-off-by should use a known identity, i.e. Name Lastname.

Oh, this is my oversight.

> > ---
> >  mm/mm_init.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > index 4ba5607aaf19..6dbf2df23eee 100644
> > --- a/mm/mm_init.c
> > +++ b/mm/mm_init.c
> > @@ -722,6 +722,10 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
> >  		if (zone_spans_pfn(zone, pfn))
> >  			break;
> >  	}
> > +
> > +	if (pageblock_aligned(pfn))
> > +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
> > +
> >  	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> >  }
> >  #else
> > --
> > 2.34.1
> >

Sincerely yours,
Su
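[Editor's note] To connect Mike's vmemmap point to the numbers above, here is a back-of-envelope estimate of how much vmemmap HVO frees for 500000 hugepages. The constants are assumptions typical of x86-64 (2 MiB hugepages, 64-byte struct page, 4 KiB base pages, HVO keeping one of the eight vmemmap pages per hugepage), not figures stated in the thread:

```python
# Rough estimate of vmemmap memory freed by HVO in the example above.
# Assumptions: 2 MiB hugepages, 64-byte struct page, 4 KiB base pages;
# HVO keeps 1 of the 8 vmemmap pages backing each hugepage's struct pages.
nr_hugepages = 500_000
struct_pages_per_hugepage = (2 << 20) // 4096           # 512 struct pages
vmemmap_pages = struct_pages_per_hugepage * 64 // 4096  # 8 vmemmap pages
freed_pages = (vmemmap_pages - 1) * nr_hugepages        # 7 freed per hugepage
freed_gib = freed_pages * 4096 / (1 << 30)
print(f"≈ {freed_gib:.1f} GiB of vmemmap freed")
```

Under these assumptions the freed vmemmap comes to roughly 13 GiB, which accounts for most of the ~15GB that moves from Unmovable to Movable in the pagetypeinfo comparison.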