
[0/5] batched remove rmap in try_to_unmap_one()

Message ID: 20230223083200.3149015-1-fengwei.yin@intel.com

Yin Fengwei Feb. 23, 2023, 8:31 a.m. UTC
This series brings batched rmap removal to try_to_unmap_one().
Removing the rmap for a whole range of pages at once is expected
to perform better than removing it page by page.

The changes are organized as follows:

Patch 1/2 move the hugetlb and normal page unmap paths to dedicated
functions, to make the try_to_unmap_one() logic clearer and easier
to extend with batched rmap removal. There is no functional change,
to keep code review easy.

Patch 3 cleans up try_to_unmap_one_page() and removes some
duplicated function calls.

Patch 4 adds folio_remove_rmap_range(), which removes the rmap for
a range of pages in one batch.

Patch 5 converts try_to_unmap_one() to the batched rmap removal.

Testing was done with the series in a qemu guest
with 4G memory + 512M zram:
  - kernel mm selftests, to trigger vmscan() and finally hit
    try_to_unmap_one().
  - Inject hwpoison into a hugetlb page, to trigger the
    try_to_unmap_one() call against hugetlb.
  - 24 hours of stress testing: Firefox + kernel mm selftests +
    kernel build.

This series is based on next-20230222.

Yin Fengwei (5):
  rmap: move hugetlb try_to_unmap to dedicated function
  rmap: move page unmap operation to dedicated function
  rmap: cleanup exit path of try_to_unmap_one_page()
  rmap: add folio_remove_rmap_range()
  try_to_unmap_one: batched remove rmap, update folio refcount

 include/linux/rmap.h |   5 +
 mm/page_vma_mapped.c |  30 +++
 mm/rmap.c            | 628 +++++++++++++++++++++++++------------------
 3 files changed, 403 insertions(+), 260 deletions(-)