[-v2,0/9] migrate_pages(): batch TLB flushing

Message ID 20230110075327.590514-1-ying.huang@intel.com (mailing list archive)

Message

Huang, Ying Jan. 10, 2023, 7:53 a.m. UTC
From: "Huang, Ying" <ying.huang@intel.com>

Currently, migrate_pages() migrates folios one by one, as in the
following pseudo-code:

  for each folio
    unmap
    flush TLB
    copy
    restore map

If multiple folios are passed to migrate_pages(), there is an
opportunity to batch the TLB flushing and the copying.  That is, we can
change the code to something like the following:

  for each folio
    unmap
  for each folio
    flush TLB
  for each folio
    copy
  for each folio
    restore map

The total number of TLB flushing IPIs can be reduced considerably, and
a hardware accelerator such as DSA can be used to speed up the folio
copying.

So in this patchset, we refactor the migrate_pages() implementation
and implement batched TLB flushing.  Based on this, hardware-accelerated
folio copying can be implemented.
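
Roughly, the batched flow looks like the sketch below (a simplified
illustration only: the function names follow the series, but the
arguments, error handling, and dst-folio tracking are omitted, so the
details differ from the actual code).

  /* Phase 1: unmap every folio in the batch. */
  list_for_each_entry_safe(folio, folio2, from, lru) {
          rc = migrate_folio_unmap(/* src folio, dst folio, mode, ... */);
          if (rc == MIGRATEPAGE_UNMAP)
                  list_move_tail(&folio->lru, &unmap_folios);
  }

  /* A single deferred TLB flush covers the whole batch. */
  try_to_unmap_flush();

  /* Phase 2: copy each folio to its target and restore the mappings. */
  list_for_each_entry_safe(folio, folio2, &unmap_folios, lru)
          rc = migrate_folio_move(/* src folio, dst folio, mode, ... */);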

If too many folios are passed to migrate_pages(), a naive batched
implementation may unmap too many folios at the same time.  That makes
it more likely that a task has to wait for the migrated folios to be
mapped again, so latency may suffer.  To deal with this issue, the
maximum number of folios unmapped in one batch is restricted to
HPAGE_PMD_NR base pages, as sketched below.  That is, the impact is at
the same level as that of THP migration.
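
As a sketch (NR_MAX_BATCHED_MIGRATION matches the series; the
surrounding code is simplified and may differ from the actual
implementation):

  #define NR_MAX_BATCHED_MIGRATION	HPAGE_PMD_NR

  /* Cut the input list so one batch spans at most THP-size pages. */
  nr_pages = 0;
  list_for_each_entry_safe(folio, folio2, from, lru) {
          nr_pages += folio_nr_pages(folio);
          if (nr_pages > NR_MAX_BATCHED_MIGRATION)
                  break;
  }
  if (nr_pages > NR_MAX_BATCHED_MIGRATION)
          list_cut_before(&folios, from, &folio->lru);
  else
          list_splice_init(from, &folios);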

We use the following test to measure the performance impact of the
patchset.

On a 2-socket Intel server:

 - Run the pmbench memory access benchmark.

 - Run `migratepages` to migrate pmbench's pages between node 0 and
   node 1 back and forth (see the sketch after this list).
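
   For reference, the back-and-forth migration can be driven with
   something like this (an illustrative sketch; `migratepages` comes
   from the numactl package, and the exact invocation used in the test
   may differ):

     while true; do
         migratepages $(pidof pmbench) 0 1
         migratepages $(pidof pmbench) 1 0
     done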

With the patchset, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.

This patchset is based on mm-unstable.

Changes:

v2:

- Rebased on v6.2-rc3

- Fixed a type force-cast warning.  Thanks, Kees!

- Added more comments and cleaned up the code.  Thanks Andrew, Zi, Alistair, Dan!

- Collected Reviewed-by tags.

From RFC to v1:

- Rebased on v6.2-rc1

- Fixed the deadlock issue caused by locking multiple pages
  synchronously, per Alistair's comments.  Thanks!

- Fixed the autonumabench panic, per Rao's comments and fix.  Thanks!

- Other minor fixes per review comments.  Thanks!

Best Regards,
Huang, Ying

Comments

Mike Kravetz Jan. 11, 2023, 1:53 a.m. UTC | #1
Just saw the following easily reproducible issue on next-20230110.  Have not
verified it is related to/caused by this series, but it looks suspicious.

Reproduce by:

# echo <very large number> > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

This will 'attempt' to allocate hugetlb pages until it fails.
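
For illustration only, a hypothetical concrete value (any number far
beyond what the machine can back will do):

# echo 1000000 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages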
Mike Kravetz Jan. 11, 2023, 10:21 p.m. UTC | #2
On 01/10/23 17:53, Mike Kravetz wrote:
> Just saw the following easily reproducible issue on next-20230110.  Have not
> verified it is related to/caused by this series, but it looks suspicious.

Verified this is caused by the series,

734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
to
323b933ba062 migrate_pages: batch flushing TLB

in linux-next.
Huang, Ying Jan. 12, 2023, 12:09 a.m. UTC | #3
Hi, Mike,

Mike Kravetz <mike.kravetz@oracle.com> writes:

> On 01/10/23 17:53, Mike Kravetz wrote:
>> Just saw the following easily reproducible issue on next-20230110.  Have not
>> verified it is related to/caused by this series, but it looks suspicious.
>
> Verified this is caused by the series,
>
> 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
> to
> 323b933ba062 migrate_pages: batch flushing TLB
>
> in linux-next.

Thanks for reporting.

I tried this yesterday (next-20230111), but failed to reproduce it.  Can
you share your kernel config?  Is there any other setup needed?

BTW: can you bisect to one specific commit which causes the bug in the
series?

Best Regards,
Huang, Ying
Mike Kravetz Jan. 12, 2023, 2:15 a.m. UTC | #4
On 01/12/23 08:09, Huang, Ying wrote:
> Hi, Mike,
> 
> Mike Kravetz <mike.kravetz@oracle.com> writes:
> 
> > On 01/10/23 17:53, Mike Kravetz wrote:
> >> Just saw the following easily reproducible issue on next-20230110.  Have not
> >> verified it is related to/caused by this series, but it looks suspicious.
> >
> > Verified this is caused by the series,
> >
> > 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
> > to
> > 323b933ba062 migrate_pages: batch flushing TLB
> >
> > in linux-next.
> 
> Thanks for reporting.
> 
> I tried this yesterday (next-20230111), but failed to reproduce it.  Can
> you share your kernel config?  Is there any other setup needed?

Config file is attached.

Are you writing a REALLY big value to nr_hugepages?  By REALLY big I
mean a value that is impossible to fulfill.  This will result in
successful hugetlb allocations until __alloc_pages starts to fail.  At
this point we will be stressing compaction/migration trying to find more
contiguous pages.

Not sure if it matters, but I am running on a 2 node VM.  The 2 nodes
may be important as the hugetlb allocation code will try a little harder
alternating between nodes that may perhaps stress compaction/migration
more.

> BTW: can you bisect to one specific commit which causes the bug in the
> series?

I should have some time to isolate in the next day or so.
Huang, Ying Jan. 12, 2023, 7:17 a.m. UTC | #5
Mike Kravetz <mike.kravetz@oracle.com> writes:

> On 01/12/23 08:09, Huang, Ying wrote:
>> Hi, Mike,
>> 
>> Mike Kravetz <mike.kravetz@oracle.com> writes:
>> 
>> > On 01/10/23 17:53, Mike Kravetz wrote:
>> >> Just saw the following easily reproducible issue on next-20230110.  Have not
>> >> verified it is related to/caused by this series, but it looks suspicious.
>> >
>> > Verified this is caused by the series,
>> >
>> > 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
>> > to
>> > 323b933ba062 migrate_pages: batch flushing TLB
>> >
>> > in linux-next.
>> 
>> Thanks for reporting.
>> 
>> I tried this yesterday (next-20230111), but failed to reproduce it.  Can
>> you share your kernel config?  Is there any other setup needed?
>
> Config file is attached.

Thanks!

> Are you writing a REALLY big value to nr_hugepages?  By REALLY big I
> mean a value that is impossible to fulfill.  This will result in
> successful hugetlb allocations until __alloc_pages starts to fail.  At
> this point we will be stressing compaction/migration trying to find more
> contiguous pages.
>
> Not sure if it matters, but I am running on a 2 node VM.  The 2 nodes
> may be important as the hugetlb allocation code will try a little harder
> alternating between nodes that may perhaps stress compaction/migration
> more.

Tried again on a 2-node machine.  Still cannot reproduce it.

>> BTW: can you bisect to one specific commit which causes the bug in the
>> series?
>
> I should have some time to isolate in the next day or so.

Thanks!

Best Regards,
Huang, Ying
Mike Kravetz Jan. 13, 2023, 12:11 a.m. UTC | #6
On 01/12/23 15:17, Huang, Ying wrote:
> Mike Kravetz <mike.kravetz@oracle.com> writes:
> 
> > On 01/12/23 08:09, Huang, Ying wrote:
> >> Hi, Mike,
> >> 
> >> Mike Kravetz <mike.kravetz@oracle.com> writes:
> >> 
> >> > On 01/10/23 17:53, Mike Kravetz wrote:
> >> >> Just saw the following easily reproducible issue on next-20230110.  Have not
> >> >> verified it is related to/caused by this series, but it looks suspicious.
> >> >
> >> > Verified this is caused by the series,
> >> >
> >> > 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
> >> > to
> >> > 323b933ba062 migrate_pages: batch flushing TLB
> >> >
> >> > in linux-next.
> >> 
> >> Thanks for reporting.
> >> 
> >> I tried this yesterday (next-20230111), but failed to reproduce it.  Can
> >> you share your kernel config?  Is there any other setup needed?
> >
> > Config file is attached.
> 
> Thanks!
> 
> > Are you writing a REALLY big value to nr_hugepages?  By REALLY big I
> > mean a value that is impossible to fulfill.  This will result in
> > successful hugetlb allocations until __alloc_pages starts to fail.  At
> > this point we will be stressing compaction/migration trying to find more
> > contiguous pages.
> >
> > Not sure if it matters, but I am running on a 2 node VM.  The 2 nodes
> > may be important as the hugetlb allocation code will try a little harder
> > alternating between nodes that may perhaps stress compaction/migration
> > more.
> 
> Tried again on a 2-node machine.  Still cannot reproduce it.
> 
> >> BTW: can you bisect to one specific commit which causes the bug in the
> >> series?
> >
> > I should have some time to isolate in the next day or so.

Isolated to patch,
[PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()

Actually, recreated/isolated by just applying this series to v6.2-rc3 in an
effort to eliminate any possible noise in linux-next.

Spent a little time looking at modifications made there, but nothing stood out.
Will investigate more as time allows.
Huang, Ying Jan. 13, 2023, 2:42 a.m. UTC | #7
Mike Kravetz <mike.kravetz@oracle.com> writes:

> On 01/12/23 15:17, Huang, Ying wrote:
>> Mike Kravetz <mike.kravetz@oracle.com> writes:
>> 
>> > On 01/12/23 08:09, Huang, Ying wrote:
>> >> Hi, Mike,
>> >> 
>> >> Mike Kravetz <mike.kravetz@oracle.com> writes:
>> >> 
>> >> > On 01/10/23 17:53, Mike Kravetz wrote:
>> >> >> Just saw the following easily reproducible issue on next-20230110.  Have not
>> >> >> verified it is related to/caused by this series, but it looks suspicious.
>> >> >
>> >> > Verified this is caused by the series,
>> >> >
>> >> > 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
>> >> > to
>> >> > 323b933ba062 migrate_pages: batch flushing TLB
>> >> >
>> >> > in linux-next.
>> >> 
>> >> Thanks for reporting.
>> >> 
>> >> I tried this yesterday (next-20230111), but failed to reproduce it.  Can
>> >> you share your kernel config?  Is there any other setup needed?
>> >
>> > Config file is attached.
>> 
>> Thanks!
>> 
>> > Are you writing a REALLY big value to nr_hugepages?  By REALLY big I
>> > mean a value that is impossible to fulfill.  This will result in
>> > successful hugetlb allocations until __alloc_pages starts to fail.  At
>> > this point we will be stressing compaction/migration trying to find more
>> > contiguous pages.
>> >
>> > Not sure if it matters, but I am running on a 2 node VM.  The 2 nodes
>> > may be important as the hugetlb allocation code will try a little harder
>> > alternating between nodes that may perhaps stress compaction/migration
>> > more.
>> 
>> Tried again on a 2-node machine.  Still cannot reproduce it.
>> 
>> >> BTW: can you bisect to one specific commit which causes the bug in the
>> >> series?
>> >
>> > I should have some time to isolate in the next day or so.
>
> Isolated to patch,
> [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
>
> Actually, recreated/isolated by just applying this series to v6.2-rc3 in an
> effort to eliminate any possible noise in linux-next.
>
> Spent a little time looking at modifications made there, but nothing stood out.
> Will investigate more as time allows.

Thank you very much!  That's really helpful.

Checked that patch again and found that there's an issue with non-LRU
pages.  Do you have zram enabled in your test system?  Can you try the
debug patch below on top of

[PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()

The following patches in the series need to be rebased on the change
below.  I will test with zram enabled too.

Best Regards,
Huang, Ying

---------------------------8<------------------------------------------------------
diff --git a/mm/migrate.c b/mm/migrate.c
index 4c35c2a49574..7153d954b8a2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1187,10 +1187,13 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	int rc;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
+	bool is_lru = !__PageMovable(&src->page);
 
 	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
 
 	rc = move_to_new_folio(dst, src, mode);
+	if (!unlikely(is_lru))
+		goto out_unlock_both;
 
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
@@ -1211,6 +1214,7 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 		remove_migration_ptes(src,
 			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
 
+out_unlock_both:
 	folio_unlock(dst);
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
Mike Kravetz Jan. 13, 2023, 4:49 a.m. UTC | #8
On 01/13/23 10:42, Huang, Ying wrote:
> Mike Kravetz <mike.kravetz@oracle.com> writes:
> > On 01/12/23 15:17, Huang, Ying wrote:
> >> Mike Kravetz <mike.kravetz@oracle.com> writes:
> >> > On 01/12/23 08:09, Huang, Ying wrote:
> >
> > Isolated to patch,
> > [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
> >
> > Actually, recreated/isolated by just applying this series to v6.2-rc3 in an
> > effort to eliminate any possible noise in linux-next.
> >
> > Spent a little time looking at modifications made there, but nothing stood out.
> > Will investigate more as time allows.
> 
> Thank you very much!  That's really helpful.
> 
> Checked that patch again, found that there's an issue about non-lru
> pages.  Do you enable zram in your test system?  Can you try the
> below debug patch on top of

CONFIG_ZRAM=y

> 
> [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
> 
> The following patches in series need to be rebased for the below
> change.  I will test with zram enabled too.

A quick test with the patch below on top of [PATCH -v2 4/9] (without
the rest of the series applied) makes the issue go away.  Thanks!
Huang, Ying Jan. 13, 2023, 5:39 a.m. UTC | #9
Mike Kravetz <mike.kravetz@oracle.com> writes:

> On 01/13/23 10:42, Huang, Ying wrote:
>> Mike Kravetz <mike.kravetz@oracle.com> writes:
>> > On 01/12/23 15:17, Huang, Ying wrote:
>> >> Mike Kravetz <mike.kravetz@oracle.com> writes:
>> >> > On 01/12/23 08:09, Huang, Ying wrote:
>> >
>> > Isolated to patch,
>> > [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
>> >
>> > Actually, recreated/isolated by just applying this series to v6.2-rc3 in an
>> > effort to eliminate any possible noise in linux-next.
>> >
>> > Spent a little time looking at modifications made there, but nothing stood out.
>> > Will investigate more as time allows.
>> 
>> Thank you very much!  That's really helpful.
>> 
>> Checked that patch again, found that there's an issue about non-lru
>> pages.  Do you enable zram in your test system?  Can you try the
>> below debug patch on top of
>
> CONFIG_ZRAM=y
>
>> 
>> [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
>> 
>> The following patches in series need to be rebased for the below
>> change.  I will test with zram enabled too.
>
> A quick test with below patch on top of [PATCH -v2 4/9] (without the rest of
> the series applied) makes the issue go away.  Thanks!

Thank you very much!  Now I can reproduce the issue with zram
configured.  Will send out a new version to fix the issue.

Best Regards,
Huang, Ying
Matthew Wilcox Jan. 16, 2023, 4:35 a.m. UTC | #10
On Fri, Jan 13, 2023 at 10:42:08AM +0800, Huang, Ying wrote:
> +++ b/mm/migrate.c
> @@ -1187,10 +1187,13 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
>  	int rc;
>  	int page_was_mapped = 0;
>  	struct anon_vma *anon_vma = NULL;
> +	bool is_lru = !__PageMovable(&src->page);
>  
>  	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
>  
>  	rc = move_to_new_folio(dst, src, mode);
> +	if (!unlikely(is_lru))
> +		goto out_unlock_both;

This reads a little awkwardly.  Could it be:

	if (likely(!is_lru))

... but honestly, I think the polarity here is wrong.  LRU pages tend to
outnumber !LRU pages, so shouldn't this be:

	if (unlikely(!is_lru)) {

just like it is in migrate_folio_unmap()?
Huang, Ying Jan. 16, 2023, 6:05 a.m. UTC | #11
Matthew Wilcox <willy@infradead.org> writes:

> On Fri, Jan 13, 2023 at 10:42:08AM +0800, Huang, Ying wrote:
>> +++ b/mm/migrate.c
>> @@ -1187,10 +1187,13 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
>>  	int rc;
>>  	int page_was_mapped = 0;
>>  	struct anon_vma *anon_vma = NULL;
>> +	bool is_lru = !__PageMovable(&src->page);
>>  
>>  	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
>>  
>>  	rc = move_to_new_folio(dst, src, mode);
>> +	if (!unlikely(is_lru))
>> +		goto out_unlock_both;
>
> This reads a little awkwardly.  Could it be:
>
> 	if (likely(!is_lru))
>
> ... but honestly, I think the polarity here is wrong.  LRU pages tend to
> outnumber !LRU pages, so shouldn't this be:
>
> 	if (unlikely(!is_lru)) {
>
> just like it is in migrate_folio_unmap()?

Yes.  Thank you very much for pointing this out.  Will fix it for the
next version.

Best Regards,
Huang, Ying