
[mmotm] mm/thp: ClearPageDoubleMap in first page_add_file_rmap()

Message ID 61c5cf99-a962-9a25-597a-53ab1bd8fbc0@google.com (mailing list archive)
State New
Series [mmotm] mm/thp: ClearPageDoubleMap in first page_add_file_rmap()

Commit Message

Hugh Dickins March 3, 2022, 1:50 a.m. UTC
PageDoubleMap is maintained differently for anon and for shmem+file:
the shmem+file one was never cleared, because a safe place to do so
could not be found; so it would blight future use of the cached
hugepage until evicted.

See https://lore.kernel.org/lkml/1571938066-29031-1-git-send-email-yang.shi@linux.alibaba.com/

But page_add_file_rmap() does provide a safe place to do so (though
later than one might wish): allowing testing to return to an initial
state without a damaging drop_caches.

Fixes: 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
---

 mm/rmap.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
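
For context, a rough userspace sketch of how the stale DoubleMap state can arise on shmem (a hypothetical test program, not part of the patch; assumes /sys/kernel/mm/transparent_hugepage/shmem_enabled is "advise" or "always", and omits error checking): PMD-map a shmem THP, then PTE-map one of its subpages through a second small mapping, then drop both mappings. Without the patch the cached huge page keeps PageDoubleMap until eviction or drop_caches; with it, the flag is cleared the next time the page is freshly PMD-mapped.

	/* Hypothetical reproducer sketch, not part of the patch. */
	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <string.h>
	#include <unistd.h>

	#define HPAGE_SIZE	(2UL << 20)

	int main(void)
	{
		int fd = memfd_create("thp-doublemap", 0);
		char *area, *huge, *small;

		ftruncate(fd, HPAGE_SIZE);

		/* Reserve twice the size so a 2MB-aligned address can be chosen. */
		area = mmap(NULL, 2 * HPAGE_SIZE, PROT_NONE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		huge = (char *)(((unsigned long)area + HPAGE_SIZE - 1) &
				~(HPAGE_SIZE - 1));

		/* Aligned PMD mapping of the shmem file; faults allocate a THP. */
		huge = mmap(huge, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			    MAP_SHARED | MAP_FIXED, fd, 0);
		madvise(huge, HPAGE_SIZE, MADV_HUGEPAGE);
		memset(huge, 1, HPAGE_SIZE);

		/*
		 * A second, 4KB mapping PTE-maps one subpage of the same THP:
		 * the page_add_file_rmap(!compound) path that sets
		 * PageDoubleMap on the head page.
		 */
		small = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		*small = 2;

		/*
		 * Drop all mappings.  Without the patch the cached huge page
		 * keeps PageDoubleMap until it is evicted (or drop_caches);
		 * with it, the flag is cleared on the next fresh PMD mapping.
		 */
		munmap(small, 4096);
		munmap(huge, HPAGE_SIZE);
		close(fd);
		return 0;
	}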

Comments

Yang Shi March 3, 2022, 8:12 p.m. UTC | #1
On Wed, Mar 2, 2022 at 5:50 PM Hugh Dickins <hughd@google.com> wrote:
>
> PageDoubleMap is maintained differently for anon and for shmem+file:
> the shmem+file one was never cleared, because a safe place to do so
> could not be found; so it would blight future use of the cached
> hugepage until evicted.
>
> See https://lore.kernel.org/lkml/1571938066-29031-1-git-send-email-yang.shi@linux.alibaba.com/
>
> But page_add_file_rmap() does provide a safe place to do so (though
> later than one might wish): allowing testing to return to an initial
> state without a damaging drop_caches.
>
> Fixes: 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge pages")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>
>  mm/rmap.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1251,6 +1251,17 @@ void page_add_file_rmap(struct page *page,
>                 }
>                 if (!atomic_inc_and_test(compound_mapcount_ptr(page)))
>                         goto out;
> +
> +               /*
> +                * It is racy to ClearPageDoubleMap in page_remove_file_rmap();
> +                * but page lock is held by all page_add_file_rmap() compound
> +                * callers, and SetPageDoubleMap below warns if !PageLocked:
> +                * so here is a place that DoubleMap can be safely cleared.
> +                */
> +               VM_WARN_ON_ONCE(!PageLocked(page));
> +               if (nr == nr_pages && PageDoubleMap(page))
> +                       ClearPageDoubleMap(page);

Nice idea!

Reviewed-by: Yang Shi <shy828301@gmail.com>

> +
>                 if (PageSwapBacked(page))
>                         __mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
>                                                 nr_pages);

Patch

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1251,6 +1251,17 @@ void page_add_file_rmap(struct page *page,
 		}
 		if (!atomic_inc_and_test(compound_mapcount_ptr(page)))
 			goto out;
+
+		/*
+		 * It is racy to ClearPageDoubleMap in page_remove_file_rmap();
+		 * but page lock is held by all page_add_file_rmap() compound
+		 * callers, and SetPageDoubleMap below warns if !PageLocked:
+		 * so here is a place that DoubleMap can be safely cleared.
+		 */
+		VM_WARN_ON_ONCE(!PageLocked(page));
+		if (nr == nr_pages && PageDoubleMap(page))
+			ClearPageDoubleMap(page);
+
 		if (PageSwapBacked(page))
 			__mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
 						nr_pages);