| Message ID | 20231107181805.4188397-1-shr@devkernel.io (mailing list archive) |
| --- | --- |
| State | New |
| Series | [v2] mm: Fix for negative counter: nr_file_hugepages |
On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> +++ b/mm/huge_memory.c
> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  		if (folio_test_swapbacked(folio)) {
>  			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
>  					-nr);
> -		} else {
> +		} else if (folio_test_pmd_mappable(folio)) {
>  			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
>  					-nr);
>  			filemap_nr_thps_dec(mapping);

As I said, we also need the folio_test_pmd_mappable() for swapbacked.
Not because there's currently a problem, but because we don't leave
landmines for other people to trip over in future!
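The shape being asked for here — guarding *both* counter decrements with the pmd-mappable test — can be sketched as a small userspace model. This is not the kernel source: `struct folio`, the stub predicates, and `split_unaccount()` are illustrative stand-ins for the kernel helpers, and the order-9 threshold assumes x86-64 with 4K pages.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the per-mapping THP counters. */
static long nr_shmem_thps, nr_file_thps;

struct folio { int order; bool swapbacked; };

/* A folio is PMD-mappable only at PMD order (9 on x86-64/4K). */
static bool folio_test_pmd_mappable(const struct folio *f)
{
	return f->order == 9;
}

static bool folio_test_swapbacked(const struct folio *f)
{
	return f->swapbacked;
}

/* Split-path unaccounting with the guard hoisted over both branches:
 * a folio that was never counted as a THP is never uncounted either. */
void split_unaccount(const struct folio *folio)
{
	if (!folio_test_pmd_mappable(folio))
		return;
	if (folio_test_swapbacked(folio))
		nr_shmem_thps--;
	else
		nr_file_thps--;
}
```

Hoisting the test keeps the two branches symmetric, so a future non-PMD large-folio path through either branch cannot skew the counters.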
On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > +++ b/mm/huge_memory.c
> > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  		if (folio_test_swapbacked(folio)) {
> >  			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> >  					-nr);
> > -		} else {
> > +		} else if (folio_test_pmd_mappable(folio)) {
> >  			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> >  					-nr);
> >  			filemap_nr_thps_dec(mapping);
>
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

Do we need to fix filemap_unaccount_folio() as well?
On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > +++ b/mm/huge_memory.c
> > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > >  		if (folio_test_swapbacked(folio)) {
> > >  			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > >  					-nr);
> > > -		} else {
> > > +		} else if (folio_test_pmd_mappable(folio)) {
> > >  			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > >  					-nr);
> > >  			filemap_nr_thps_dec(mapping);
> >
> > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > Not because there's currently a problem, but because we don't leave
> > landmines for other people to trip over in future!
>
> Do we need to fix filemap_unaccount_folio() as well?

Looks to me like it is already correct?

	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
	if (folio_test_swapbacked(folio)) {
		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
		if (folio_test_pmd_mappable(folio))
			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
	} else if (folio_test_pmd_mappable(folio)) {
		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
		filemap_nr_thps_dec(mapping);
	}
On Tue, Nov 07, 2023 at 08:07:59PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> > On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > > >  		if (folio_test_swapbacked(folio)) {
> > > >  			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > > >  					-nr);
> > > > -		} else {
> > > > +		} else if (folio_test_pmd_mappable(folio)) {
> > > >  			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > > >  					-nr);
> > > >  			filemap_nr_thps_dec(mapping);
> > >
> > > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > > Not because there's currently a problem, but because we don't leave
> > > landmines for other people to trip over in future!
> >
> > Do we need to fix filemap_unaccount_folio() as well?
>
> Looks to me like it is already correct?
>
> 	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
> 	if (folio_test_swapbacked(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
> 		if (folio_test_pmd_mappable(folio))
> 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
> 	} else if (folio_test_pmd_mappable(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
> 		filemap_nr_thps_dec(mapping);
> 	}

Argh, I overlooked it because it's nested further in due to that
NR_SHMEM update.  Sorry about the noise.
Matthew Wilcox <willy@infradead.org> writes:

> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
>> +++ b/mm/huge_memory.c
>> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>  		if (folio_test_swapbacked(folio)) {
>>  			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
>>  					-nr);
>> -		} else {
>> +		} else if (folio_test_pmd_mappable(folio)) {
>>  			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
>>  					-nr);
>>  			filemap_nr_thps_dec(mapping);
>
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

I'll add it in the next version.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b4..9dbd5ef5a3902 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		if (folio_test_swapbacked(folio)) {
 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 					-nr);
-		} else {
+		} else if (folio_test_pmd_mappable(folio)) {
 			__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 					-nr);
 			filemap_nr_thps_dec(mapping);
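The underflow the patch fixes can be reproduced with a tiny userspace model: the counter is only incremented for PMD-sized folios, so a split path that decrements for *any* large folio drives it negative. This is a sketch, not kernel code — `pmd_mappable()`, `add_to_page_cache()`, and both split functions are hypothetical stand-ins, and order 9 again assumes x86-64 with 4K pages.

```c
#include <assert.h>
#include <stdbool.h>

/* Models FileHugePages: incremented only for PMD-sized page-cache folios. */
static long nr_file_thps;

static bool pmd_mappable(int order)
{
	return order == 9;
}

static void add_to_page_cache(int order)
{
	if (pmd_mappable(order))
		nr_file_thps++;
}

/* Pre-fix split path: decrements for any large folio,
 * including ones that were never counted. */
static long split_buggy(int order)
{
	(void)order;
	nr_file_thps--;
	return nr_file_thps;
}

/* Fixed split path: the decrement mirrors the increment condition. */
static long split_fixed(int order)
{
	if (pmd_mappable(order))
		nr_file_thps--;
	return nr_file_thps;
}
```

Adding and then splitting an order-4 folio leaves the counter at -1 with the buggy path and at 0 with the fixed one, which is exactly the negative `nr_file_hugepages` the subject line describes.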