| Message ID | 20220321142128.2471199-2-zi.yan@sent.com |
|---|---|
| State | New |
| Series | Split a huge page to any lower order pages |
On Mon, Mar 21, 2022 at 10:21:24AM -0400, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> It sets memcg information for the pages after the split. A new parameter
> new_order is added to tell the new page order, always 0 for now. It
> prepares for upcoming changes to support split huge page to any lower
> order.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  include/linux/memcontrol.h |  2 +-
>  mm/huge_memory.c           |  2 +-
>  mm/memcontrol.c            | 10 +++++-----
>  mm/page_alloc.c            |  2 +-
>  4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 89b14729d59f..e71189454bf0 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1116,7 +1116,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
>  	rcu_read_unlock();
>  }
>
> -void split_page_memcg(struct page *head, unsigned int nr);
> +void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order);

It looks a bit inconsistent, can't we switch to use either nr or order for both
arguments? The latter is preferable.

Other than that, the patch looks good to me.

Thanks!
On 21 Mar 2022, at 14:57, Roman Gushchin wrote:

> On Mon, Mar 21, 2022 at 10:21:24AM -0400, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> It sets memcg information for the pages after the split. A new parameter
>> new_order is added to tell the new page order, always 0 for now. It
>> prepares for upcoming changes to support split huge page to any lower
>> order.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  include/linux/memcontrol.h |  2 +-
>>  mm/huge_memory.c           |  2 +-
>>  mm/memcontrol.c            | 10 +++++-----
>>  mm/page_alloc.c            |  2 +-
>>  4 files changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>> index 89b14729d59f..e71189454bf0 100644
>> --- a/include/linux/memcontrol.h
>> +++ b/include/linux/memcontrol.h
>> @@ -1116,7 +1116,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
>>  	rcu_read_unlock();
>>  }
>>
>> -void split_page_memcg(struct page *head, unsigned int nr);
>> +void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order);
>
> It looks a bit inconsistent, can't we switch to use either nr or order for both
> arguments? The latter is preferable.

Yes. Will change it to new_nr to be consistent.

> Other than that, the patch looks good to me.

Thank you for the review.

--
Best Regards,
Yan, Zi
On Mon, Mar 21, 2022 at 03:07:46PM -0400, Zi Yan wrote:
> Yes. Will change it to new_nr to be consistent.
uh, you're going to call ilog2?
I think this would look less inconsistent if 'nr' were an unsigned long
(how long until we need 16GB pages? Think PPC already supports those)
On 21 Mar 2022, at 15:54, Matthew Wilcox wrote:

> On Mon, Mar 21, 2022 at 03:07:46PM -0400, Zi Yan wrote:
>> Yes. Will change it to new_nr to be consistent.
>
> uh, you're going to call ilog2?

Fortunately, no. Inside split_page_memcg(), I probably need to add
VM_BUG_ON(nr % new_nr != 0) to make sure new_nr is a divisor of nr,
since there are a couple of nr / new_nr operations. Otherwise, new_nr
works.

> I think this would look less inconsistent if 'nr' were an unsigned long
> (how long until we need 16GB pages? Think PPC already supports those)

--
Best Regards,
Yan, Zi
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..e71189454bf0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1116,7 +1116,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..640040c386f0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2371,7 +2371,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr);
+	split_page_memcg(head, nr, 0);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 43b2a22ce812..e7da413ac174 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3262,22 +3262,22 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
-	int i;
+	int i, new_nr = 1 << new_order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
 
-	for (i = 1; i < nr; i++)
+	for (i = new_nr; i < nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
 	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+		obj_cgroup_get_many(__folio_objcg(folio), nr / new_nr - 1);
 	else
-		css_get_many(&memcg->css, nr - 1);
+		css_get_many(&memcg->css, nr / new_nr - 1);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f648decfe39d..d982919b9e51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3515,7 +3515,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, 1 << order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);