Message ID | 20240710095503.3193901-1-ryan.roberts@arm.com (mailing list archive) |
---|---|
State | New |
Series | [v2] mm: shmem: Rename mTHP shmem counters |
On 10 Jul 2024, at 5:55, Ryan Roberts wrote:

> The legacy PMD-sized THP counters at /proc/vmstat include
> thp_file_alloc, thp_file_fallback and thp_file_fallback_charge, which
> rather confusingly refer to shmem THP and do not include any other types
> of file pages. This is inconsistent since in most other places in the
> kernel, THP counters are explicitly separated for anon, shmem and file
> flavours. However, we are stuck with it since it constitutes a user ABI.
>
> Recently, commit 66f44583f9b6 ("mm: shmem: add mTHP counters for
> anonymous shmem") added equivalent mTHP stats for shmem, keeping the
> same "file_" prefix in the names. But in future, we may want to add
> extra stats to cover actual file pages, at which point, it would all
> become very confusing.
>
> So let's take the opportunity to rename these new counters "shmem_"
> before the change makes it upstream and the ABI becomes immutable. While
> we are at it, let's improve the documentation for the legacy counters to
> make it clear that they count shmem pages only.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Lance Yang <ioworker0@gmail.com>
> ---
>
> Hi All,
>
> Applies on top of yesterday's mm-unstable (2073cda629a4) and tested with mm
> selftests; no regressions observed.
>
> The backstory here is that I'd like to introduce some counters for regular file
> folio allocations to observe how often large folio allocation succeeds, but
> these shmem counters are named "file" which is going to make things confusing.
> So hoping to solve that before commit 66f44583f9b6 ("mm: shmem: add mTHP
> counters for anonymous shmem") goes upstream (it is currently in mm-stable).
>
> Changes since v1 [1]
> ====================
> - Updated documentation for existing legacy "file_" counters to make it clear
>   they only count shmem pages.
>
> [1] https://lore.kernel.org/linux-mm/20240708112445.2690631-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
>  Documentation/admin-guide/mm/transhuge.rst | 29 ++++++++++++----------
>  include/linux/huge_mm.h                    |  6 ++---
>  mm/huge_memory.c                           | 12 ++++-----
>  mm/shmem.c                                 |  8 +++---
>  4 files changed, 29 insertions(+), 26 deletions(-)
>

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi
On Wed, Jul 10, 2024 at 5:55 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> The legacy PMD-sized THP counters at /proc/vmstat include
> thp_file_alloc, thp_file_fallback and thp_file_fallback_charge, which
> rather confusingly refer to shmem THP and do not include any other types
> of file pages. This is inconsistent since in most other places in the
> kernel, THP counters are explicitly separated for anon, shmem and file
> flavours. However, we are stuck with it since it constitutes a user ABI.
>
> Recently, commit 66f44583f9b6 ("mm: shmem: add mTHP counters for
> anonymous shmem") added equivalent mTHP stats for shmem, keeping the
> same "file_" prefix in the names. But in future, we may want to add
> extra stats to cover actual file pages, at which point, it would all
> become very confusing.
>
> So let's take the opportunity to rename these new counters "shmem_"
> before the change makes it upstream and the ABI becomes immutable. While
> we are at it, let's improve the documentation for the legacy counters to
> make it clear that they count shmem pages only.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Lance Yang <ioworker0@gmail.com>

Reviewed-by: Barry Song <baohua@kernel.org>

> ---
>
> Hi All,
>
> Applies on top of yesterday's mm-unstable (2073cda629a4) and tested with mm
> selftests; no regressions observed.
>
> The backstory here is that I'd like to introduce some counters for regular file
> folio allocations to observe how often large folio allocation succeeds, but
> these shmem counters are named "file" which is going to make things confusing.
> So hoping to solve that before commit 66f44583f9b6 ("mm: shmem: add mTHP
> counters for anonymous shmem") goes upstream (it is currently in mm-stable).
>
> Changes since v1 [1]
> ====================
> - Updated documentation for existing legacy "file_" counters to make it clear
>   they only count shmem pages.
>
> [1] https://lore.kernel.org/linux-mm/20240708112445.2690631-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
>  Documentation/admin-guide/mm/transhuge.rst | 29 ++++++++++++----------
>  include/linux/huge_mm.h                    |  6 ++---
>  mm/huge_memory.c                           | 12 ++++-----
>  mm/shmem.c                                 |  8 +++---
>  4 files changed, 29 insertions(+), 26 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 747c811ee8f1..3528daa1f239 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -412,20 +412,23 @@ thp_collapse_alloc_failed
>  	the allocation.
>
>  thp_file_alloc
> -	is incremented every time a file huge page is successfully
> -	allocated.
> +	is incremented every time a shmem huge page is successfully
> +	allocated (Note that despite being named after "file", the counter
> +	measures only shmem).
>
>  thp_file_fallback
> -	is incremented if a file huge page is attempted to be allocated
> -	but fails and instead falls back to using small pages.
> +	is incremented if a shmem huge page is attempted to be allocated
> +	but fails and instead falls back to using small pages. (Note that
> +	despite being named after "file", the counter measures only shmem).
>
>  thp_file_fallback_charge
> -	is incremented if a file huge page cannot be charged and instead
> +	is incremented if a shmem huge page cannot be charged and instead
>  	falls back to using small pages even though the allocation was
> -	successful.
> +	successful. (Note that despite being named after "file", the
> +	counter measures only shmem).
>
>  thp_file_mapped
> -	is incremented every time a file huge page is mapped into
> +	is incremented every time a file or shmem huge page is mapped into
>  	user address space.
>
>  thp_split_page
> @@ -496,16 +499,16 @@ swpout_fallback
>  	Usually because failed to allocate some continuous swap space
>  	for the huge page.
>
> -file_alloc
> -	is incremented every time a file huge page is successfully
> +shmem_alloc
> +	is incremented every time a shmem huge page is successfully
>  	allocated.
>
> -file_fallback
> -	is incremented if a file huge page is attempted to be allocated
> +shmem_fallback
> +	is incremented if a shmem huge page is attempted to be allocated
>  	but fails and instead falls back to using small pages.
>
> -file_fallback_charge
> -	is incremented if a file huge page cannot be charged and instead
> +shmem_fallback_charge
> +	is incremented if a shmem huge page cannot be charged and instead
>  	falls back to using small pages even though the allocation was
>  	successful.
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index acb6ac24a07e..cff002be83eb 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -269,9 +269,9 @@ enum mthp_stat_item {
>  	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>  	MTHP_STAT_SWPOUT,
>  	MTHP_STAT_SWPOUT_FALLBACK,
> -	MTHP_STAT_FILE_ALLOC,
> -	MTHP_STAT_FILE_FALLBACK,
> -	MTHP_STAT_FILE_FALLBACK_CHARGE,
> +	MTHP_STAT_SHMEM_ALLOC,
> +	MTHP_STAT_SHMEM_FALLBACK,
> +	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
>  	MTHP_STAT_SPLIT,
>  	MTHP_STAT_SPLIT_FAILED,
>  	MTHP_STAT_SPLIT_DEFERRED,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9ec64aa2be94..f9696c94e211 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -568,9 +568,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
> -DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
> -DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
> -DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
> +DEFINE_MTHP_STAT_ATTR(shmem_alloc, MTHP_STAT_SHMEM_ALLOC);
> +DEFINE_MTHP_STAT_ATTR(shmem_fallback, MTHP_STAT_SHMEM_FALLBACK);
> +DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
>  DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
>  DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
>  DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
> @@ -581,9 +581,9 @@ static struct attribute *stats_attrs[] = {
>  	&anon_fault_fallback_charge_attr.attr,
>  	&swpout_attr.attr,
>  	&swpout_fallback_attr.attr,
> -	&file_alloc_attr.attr,
> -	&file_fallback_attr.attr,
> -	&file_fallback_charge_attr.attr,
> +	&shmem_alloc_attr.attr,
> +	&shmem_fallback_attr.attr,
> +	&shmem_fallback_charge_attr.attr,
>  	&split_attr.attr,
>  	&split_failed_attr.attr,
>  	&split_deferred_attr.attr,
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 921d59c3d669..f24dfbd387ba 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1777,7 +1777,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  			if (pages == HPAGE_PMD_NR)
>  				count_vm_event(THP_FILE_FALLBACK);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -			count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
> +			count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
>  #endif
>  			order = next_order(&suitable_orders, order);
>  		}
> @@ -1804,8 +1804,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  				count_vm_event(THP_FILE_FALLBACK_CHARGE);
>  			}
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK);
> -			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
> +			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
> +			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
>  #endif
>  		}
>  		goto unlock;
> @@ -2181,7 +2181,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  		if (folio_test_pmd_mappable(folio))
>  			count_vm_event(THP_FILE_ALLOC);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -		count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);
> +		count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
>  #endif
>  		goto alloced;
>  	}
> --
> 2.43.0
>

Thanks
Barry
On 10.07.24 11:55, Ryan Roberts wrote:
> The legacy PMD-sized THP counters at /proc/vmstat include
> thp_file_alloc, thp_file_fallback and thp_file_fallback_charge, which
> rather confusingly refer to shmem THP and do not include any other types
> of file pages. This is inconsistent since in most other places in the
> kernel, THP counters are explicitly separated for anon, shmem and file
> flavours. However, we are stuck with it since it constitutes a user ABI.
>
> Recently, commit 66f44583f9b6 ("mm: shmem: add mTHP counters for
> anonymous shmem") added equivalent mTHP stats for shmem, keeping the
> same "file_" prefix in the names. But in future, we may want to add
> extra stats to cover actual file pages, at which point, it would all
> become very confusing.
>
> So let's take the opportunity to rename these new counters "shmem_"
> before the change makes it upstream and the ABI becomes immutable. While
> we are at it, let's improve the documentation for the legacy counters to
> make it clear that they count shmem pages only.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Lance Yang <ioworker0@gmail.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 747c811ee8f1..3528daa1f239 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -412,20 +412,23 @@ thp_collapse_alloc_failed
 	the allocation.

 thp_file_alloc
-	is incremented every time a file huge page is successfully
-	allocated.
+	is incremented every time a shmem huge page is successfully
+	allocated (Note that despite being named after "file", the counter
+	measures only shmem).

 thp_file_fallback
-	is incremented if a file huge page is attempted to be allocated
-	but fails and instead falls back to using small pages.
+	is incremented if a shmem huge page is attempted to be allocated
+	but fails and instead falls back to using small pages. (Note that
+	despite being named after "file", the counter measures only shmem).

 thp_file_fallback_charge
-	is incremented if a file huge page cannot be charged and instead
+	is incremented if a shmem huge page cannot be charged and instead
 	falls back to using small pages even though the allocation was
-	successful.
+	successful. (Note that despite being named after "file", the
+	counter measures only shmem).

 thp_file_mapped
-	is incremented every time a file huge page is mapped into
+	is incremented every time a file or shmem huge page is mapped into
 	user address space.

 thp_split_page
@@ -496,16 +499,16 @@ swpout_fallback
 	Usually because failed to allocate some continuous swap space
 	for the huge page.

-file_alloc
-	is incremented every time a file huge page is successfully
+shmem_alloc
+	is incremented every time a shmem huge page is successfully
 	allocated.

-file_fallback
-	is incremented if a file huge page is attempted to be allocated
+shmem_fallback
+	is incremented if a shmem huge page is attempted to be allocated
 	but fails and instead falls back to using small pages.

-file_fallback_charge
-	is incremented if a file huge page cannot be charged and instead
+shmem_fallback_charge
+	is incremented if a shmem huge page cannot be charged and instead
 	falls back to using small pages even though the allocation was
 	successful.

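The documentation hunks above cover the two places these counters are exposed: the legacy PMD-sized events in /proc/vmstat and the per-size mTHP stats under /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/. As a quick illustration (not part of the patch), the sketch below reads both; the hugepages-64kB directory is only an example, since which sizes exist depends on the running kernel and architecture. The code portion of the patch follows after the sketch.

/* read_thp_counters.c: minimal userspace sketch, not part of the patch */
#include <stdio.h>
#include <string.h>

static long read_sysfs_stat(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (f) {
		while (fgets(line, sizeof(line), f)) {
			/* legacy names are ABI: they still say "file" but count shmem only */
			if (!strncmp(line, "thp_file_", 9))
				fputs(line, stdout);
		}
		fclose(f);
	}

	/* per-size counter renamed by this patch; the 64kB size is an assumption */
	printf("hugepages-64kB shmem_alloc: %ld\n",
	       read_sysfs_stat("/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/shmem_alloc"));
	return 0;
}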
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index acb6ac24a07e..cff002be83eb 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -269,9 +269,9 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
 	MTHP_STAT_SWPOUT,
 	MTHP_STAT_SWPOUT_FALLBACK,
-	MTHP_STAT_FILE_ALLOC,
-	MTHP_STAT_FILE_FALLBACK,
-	MTHP_STAT_FILE_FALLBACK_CHARGE,
+	MTHP_STAT_SHMEM_ALLOC,
+	MTHP_STAT_SHMEM_FALLBACK,
+	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
 	MTHP_STAT_SPLIT,
 	MTHP_STAT_SPLIT_FAILED,
 	MTHP_STAT_SPLIT_DEFERRED,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9ec64aa2be94..f9696c94e211 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -568,9 +568,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
-DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
-DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
-DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(shmem_alloc, MTHP_STAT_SHMEM_ALLOC);
+DEFINE_MTHP_STAT_ATTR(shmem_fallback, MTHP_STAT_SHMEM_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
 DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
 DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
@@ -581,9 +581,9 @@ static struct attribute *stats_attrs[] = {
 	&anon_fault_fallback_charge_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
-	&file_alloc_attr.attr,
-	&file_fallback_attr.attr,
-	&file_fallback_charge_attr.attr,
+	&shmem_alloc_attr.attr,
+	&shmem_fallback_attr.attr,
+	&shmem_fallback_charge_attr.attr,
 	&split_attr.attr,
 	&split_failed_attr.attr,
 	&split_deferred_attr.attr,
diff --git a/mm/shmem.c b/mm/shmem.c
index 921d59c3d669..f24dfbd387ba 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1777,7 +1777,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
+			count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
 #endif
 			order = next_order(&suitable_orders, order);
 		}
@@ -1804,8 +1804,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 				count_vm_event(THP_FILE_FALLBACK_CHARGE);
 			}
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK);
-			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
+			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
+			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
 #endif
 		}
 		goto unlock;
@@ -2181,7 +2181,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		if (folio_test_pmd_mappable(folio))
 			count_vm_event(THP_FILE_ALLOC);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);
+		count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
 #endif
 		goto alloced;
 	}
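For readers unfamiliar with the mTHP stats plumbing being renamed here: each event is an entry in enum mthp_stat_item, DEFINE_MTHP_STAT_ATTR() exposes it as a file under each hugepages-<size>kB/stats/ directory, and call sites bump it with count_mthp_stat(order, item). The snippet below is a self-contained userspace model of that per-order bookkeeping, intended only as an illustration; the MAX_ORDER bound, the flat array, and the main() driver are assumptions for the demo, not the kernel implementation.

/* mthp_stat_model.c: illustrative userspace model, not kernel code */
#include <stdio.h>

#define MAX_ORDER 10	/* assumption: track orders 0..9 for the demo */

enum mthp_stat_item {
	MTHP_STAT_SHMEM_ALLOC,
	MTHP_STAT_SHMEM_FALLBACK,
	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
	__MTHP_STAT_COUNT
};

/* one counter per (allocation order, event) pair */
static unsigned long mthp_stats[MAX_ORDER][__MTHP_STAT_COUNT];

/* same shape as the kernel helper used in the diff above */
static void count_mthp_stat(int order, enum mthp_stat_item item)
{
	if (order >= 0 && order < MAX_ORDER)
		mthp_stats[order][item]++;
}

int main(void)
{
	/* e.g. a successful order-4 (64KiB with 4KiB pages) shmem allocation ... */
	count_mthp_stat(4, MTHP_STAT_SHMEM_ALLOC);
	/* ... and an order-9 attempt that fell back to small pages */
	count_mthp_stat(9, MTHP_STAT_SHMEM_FALLBACK);

	printf("order-4 shmem_alloc: %lu\n", mthp_stats[4][MTHP_STAT_SHMEM_ALLOC]);
	printf("order-9 shmem_fallback: %lu\n", mthp_stats[9][MTHP_STAT_SHMEM_FALLBACK]);
	return 0;
}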