
[v2] mm: Fix for negative counter: nr_file_hugepages

Message ID 20231107181805.4188397-1-shr@devkernel.io (mailing list archive)
State New
Series: [v2] mm: Fix for negative counter: nr_file_hugepages

Commit Message

Stefan Roesch Nov. 7, 2023, 6:18 p.m. UTC
While qualifying the 6.4 release, the following warning was detected in
messages:

vmstat_refresh: nr_file_hugepages -15664

The warning is caused by an incorrect update of the NR_FILE_THPS
counter in split_huge_page_to_list(). The if branch checks
folio_test_swapbacked(), but the else branch is missing the
corresponding folio_test_pmd_mappable() check. The other functions
that manipulate the counter, such as __filemap_add_folio() and
filemap_unaccount_folio(), do have this check.

I have a test case, which reproduces the problem. It can be found here:
  https://github.com/sroeschus/testcase/blob/main/vmstat_refresh/madv.c

The test case reproduces on an XFS filesystem. Running the same test
case on a BTRFS filesystem does not reproduce the problem.

As far as I know, versions 6.1 through 6.6 are affected by this problem.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
Co-debugged-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


base-commit: ffc253263a1375a65fa6c9f62a893e9767fbebfa

Comments

Matthew Wilcox Nov. 7, 2023, 7:35 p.m. UTC | #1
On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> +++ b/mm/huge_memory.c
> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  			if (folio_test_swapbacked(folio)) {
>  				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
>  							-nr);
> -			} else {
> +			} else if (folio_test_pmd_mappable(folio)) {
>  				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
>  							-nr);
>  				filemap_nr_thps_dec(mapping);

As I said, we also need the folio_test_pmd_mappable() for swapbacked.
Not because there's currently a problem, but because we don't leave
landmines for other people to trip over in future!
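The follow-up Matthew asks for would gate the swapbacked branch on
folio_test_pmd_mappable() as well. A sketch of what that hunk might end up
looking like (an illustration of the suggestion, not an actual posted patch):

```c
if (folio_test_swapbacked(folio)) {
	if (folio_test_pmd_mappable(folio))
		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
} else if (folio_test_pmd_mappable(folio)) {
	__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
	filemap_nr_thps_dec(mapping);
}
```

This matches the shape of filemap_unaccount_folio() quoted later in the
thread.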
Johannes Weiner Nov. 7, 2023, 8:06 p.m. UTC | #2
On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > +++ b/mm/huge_memory.c
> > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  			if (folio_test_swapbacked(folio)) {
> >  				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> >  							-nr);
> > -			} else {
> > +			} else if (folio_test_pmd_mappable(folio)) {
> >  				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> >  							-nr);
> >  				filemap_nr_thps_dec(mapping);
> 
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

Do we need to fix filemap_unaccount_folio() as well?
Matthew Wilcox Nov. 7, 2023, 8:07 p.m. UTC | #3
On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > +++ b/mm/huge_memory.c
> > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > >  			if (folio_test_swapbacked(folio)) {
> > >  				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > >  							-nr);
> > > -			} else {
> > > +			} else if (folio_test_pmd_mappable(folio)) {
> > >  				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > >  							-nr);
> > >  				filemap_nr_thps_dec(mapping);
> > 
> > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > Not because there's currently a problem, but because we don't leave
> > landmines for other people to trip over in future!
> 
> Do we need to fix filemap_unaccount_folio() as well?

Looks to me like it is already correct?

        __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
        if (folio_test_swapbacked(folio)) {
                __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
                if (folio_test_pmd_mappable(folio))
                        __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
        } else if (folio_test_pmd_mappable(folio)) {
                __lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
                filemap_nr_thps_dec(mapping);
        }
Johannes Weiner Nov. 7, 2023, 8:20 p.m. UTC | #4
On Tue, Nov 07, 2023 at 08:07:59PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> > On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > > >  			if (folio_test_swapbacked(folio)) {
> > > >  				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > > >  							-nr);
> > > > -			} else {
> > > > +			} else if (folio_test_pmd_mappable(folio)) {
> > > >  				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > > >  							-nr);
> > > >  				filemap_nr_thps_dec(mapping);
> > > 
> > > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > > Not because there's currently a problem, but because we don't leave
> > > landmines for other people to trip over in future!
> > 
> > Do we need to fix filemap_unaccount_folio() as well?
> 
> Looks to me like it is already correct?
> 
>         __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
>         if (folio_test_swapbacked(folio)) {
>                 __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
>                 if (folio_test_pmd_mappable(folio))
>                         __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
>         } else if (folio_test_pmd_mappable(folio)) {
>                 __lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
>                 filemap_nr_thps_dec(mapping);
>         }

Argh, I overlooked it because it's nested further in due to that
NR_SHMEM update. Sorry about the noise.
Stefan Roesch Nov. 8, 2023, 5:09 p.m. UTC | #5
Matthew Wilcox <willy@infradead.org> writes:

> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
>> +++ b/mm/huge_memory.c
>> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>  			if (folio_test_swapbacked(folio)) {
>>  				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
>>  							-nr);
>> -			} else {
>> +			} else if (folio_test_pmd_mappable(folio)) {
>>  				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
>>  							-nr);
>>  				filemap_nr_thps_dec(mapping);
>
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

I'll add it in the next version.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b4..9dbd5ef5a3902 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2740,7 +2740,7 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 			if (folio_test_swapbacked(folio)) {
 				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 							-nr);
-			} else {
+			} else if (folio_test_pmd_mappable(folio)) {
 				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);