[v17,0/9] mm / virtio: Provide support for free page reporting

Message ID: 20200211224416.29318.44077.stgit@localhost.localdomain

Message

Alexander Duyck Feb. 11, 2020, 10:45 p.m. UTC
This series provides an asynchronous means of reporting free guest pages
to a hypervisor so that the memory associated with those pages can be
dropped and reused by other processes and/or guests on the host. Using
this it is possible to avoid unnecessary I/O to disk and greatly improve
performance in the case of memory overcommit on the host.

When enabled we will be performing a scan of free memory every 2 seconds
while pages of sufficiently high order are being freed. In each pass at
least one sixteenth of each free list will be reported. By doing this we
avoid racing against other threads that may be causing a high amount of
memory churn.
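
In rough terms the budget works out like this (a simplified sketch; the
actual accounting in mm/page_reporting.c also batches by the scatterlist
capacity):

  /* Limit each pass to roughly one sixteenth of the free list so a
   * burst of churn cannot keep the reporting worker spinning. */
  budget = DIV_ROUND_UP(area->nr_free, 16);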

The lowest page order currently scanned when reporting pages is
pageblock_order so that this feature will not interfere with the use of
Transparent Huge Pages in the case of virtualization.

Currently this is only in use by virtio-balloon; however, the hope is
that at some point in the future other hypervisors might be able to make
use of it. In the virtio-balloon/QEMU implementation the hypervisor uses
MADV_DONTNEED to indicate to the host kernel that the page is currently
free. It will be zeroed and faulted back into the guest the next time the
page is accessed.
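
On the host side that amounts to an madvise() call along these lines (a
minimal sketch, not the QEMU code itself; addr and length stand in for
the host mapping of the reported guest range):

  #include <sys/mman.h>

  /* Drop the backing pages for a reported guest range; the next guest
   * access faults in fresh zeroed pages. */
  static int discard_guest_range(void *addr, size_t length)
  {
          return madvise(addr, length, MADV_DONTNEED);
  }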

To track if a page is reported or not the Uptodate flag was repurposed and
used as a Reported flag for Buddy pages. We walk through the free list
isolating pages and adding them to the scatterlist until we either
encounter the end of the list or have processed at least one sixteenth of
the pages that were listed in nr_free prior to us starting. If we fill the
scatterlist before we reach the end of the list we rotate the list so that
the first unreported page we encounter is moved to the head of the list as
that is where we will resume after we have freed the reported pages back
into the tail of the list.
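
A stripped-down sketch of that walk (names follow the series, but the
locking, budget handling, and list rotation are omitted here):

  /* See page_reporting_cycle() in mm/page_reporting.c for the real
   * logic; this only shows the shape of the loop. */
  list_for_each_entry_safe(page, next, list, lru) {
          /* Skip pages we have already told the host about. */
          if (PageReported(page))
                  continue;

          /* Pull the page out of the buddy so it cannot be allocated
           * while the host is discarding it. */
          if (!__isolate_free_page(page, order))
                  break;

          sg_set_page(&sgl[offset++], page, PAGE_SIZE << order, 0);

          if (offset == PAGE_REPORTING_CAPACITY) {
                  /* Hand the batch to the hypervisor, then return the
                   * now-reported pages to the tail of the free list. */
                  err = prdev->report(prdev, sgl, offset);
                  page_reporting_drain(prdev, sgl, offset, !err);
                  offset = 0;
          }
  }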

Below are the results from various benchmarks. I primarily focused on two
tests. The first is the will-it-scale/page_fault2 test, and the other is
a modified version of will-it-scale/page_fault1 that was enabled to use
THP. I did this as it allows for better visibility into different parts
of the memory subsystem. The guest is running with 32G of RAM on one
node of an E5-2630 v3. The host has had some features such as CPU turbo
disabled in the BIOS.

Test                   page_fault1 (THP)    page_fault2
Name            tasks  Process Iter  STDEV  Process Iter  STDEV
Baseline            1    1012402.50  0.14%     361855.25  0.81%
                   16    8827457.25  0.09%    3282347.00  0.34%

Patches Applied     1    1007897.00  0.23%     361887.00  0.26%
                   16    8784741.75  0.39%    3240669.25  0.48%

Patches Enabled     1    1010227.50  0.39%     359749.25  0.56%
                   16    8756219.00  0.24%    3226608.75  0.97%

Patches Enabled     1    1050982.00  4.26%     357966.25  0.14%
 page shuffle      16    8672601.25  0.49%    3223177.75  0.40%

Patches enabled     1    1003238.00  0.22%     360211.00  0.22%
 shuffle w/ RFC    16    8767010.50  0.32%    3199874.00  0.71%

The results above are for a baseline with a linux-next-20191219 kernel,
that kernel with this patch set applied but page reporting disabled in
virtio-balloon, the patches applied and page reporting fully enabled, the
patches enabled with page shuffling enabled, and the patches applied with
page shuffling enabled and an RFC patch that makes use of MADV_FREE in
QEMU. These results include the deviation seen between the average value
reported here and the high and/or low value. I observed that during the
test memory usage for the first three tests never dropped whereas with the
patches fully enabled the VM would drop to using only a few GB of the
host's memory when switching from memhog to page fault tests.

Any of the overhead visible with this patch set enabled seems due to page
faults caused by accessing the reported pages and the host zeroing the page
before giving it back to the guest. This overhead is much more visible when
using THP than with standard 4K pages. In addition, page shuffling seemed to
increase the number of faults generated due to an increase in memory churn.
The overhead is reduced when using MADV_FREE as we can avoid the extra
zeroing of the pages when they are reintroduced to the host, as can be seen
when the RFC is applied with shuffling enabled.

The overall guest size is kept fairly small, at only a few GB, while the test
is running. If the host memory were oversubscribed this patch set should
result in a performance improvement as swapping memory in the host can be
avoided.

A brief history on the background of free page reporting can be found at:
https://lore.kernel.org/lkml/29f43d5796feed0dec8e8bb98b187d9dac03b900.camel@linux.intel.com/

Changes from v14:
https://lore.kernel.org/lkml/20191119214454.24996.66289.stgit@localhost.localdomain/
Renamed "unused page reporting" to "free page reporting"
  Updated code, kconfig, and patch descriptions
Split out patch for __free_isolated_page
  Renamed function to __putback_isolated_page
Rewrote core reporting functionality
  Added logic to reschedule the worker in 2 seconds instead of running to completion
  Removed reported_pages statistics
  Removed REPORTING_REQUESTED bit used in zone flags
  Replaced page_reporting_dev_info refcount with state variable
  Removed scatterlist from page_reporting_dev_info
  Removed capacity from page reporting device
  Added dynamic scatterlist allocation/free at start/end of reporting process
  Updated __free_one_page so that reported pages are not always added to tail
  Added logic to handle error from report function
Updated virtio-balloon patch that adds support for page reporting
  Updated patch description to try and highlight differences in approaches
  Updated logic to reflect that we cannot limit the scatterlist from device
  Added logic to return error from report function
Moved documentation patch to end of patch set

Changes from v15:
https://lore.kernel.org/lkml/20191205161928.19548.41654.stgit@localhost.localdomain/
Rebased on linux-next-20191219
Split out patches for budget and moving head to last page processed
Updated budget code to reduce how much memory is reported per pass
Added logic to also rotate the list if we exit due to a page isolation failure
Added migratetype as argument in __putback_isolated_page

Changes from v16:
https://lore.kernel.org/lkml/20200103210509.29237.18426.stgit@localhost.localdomain/
Rebased on linux-next-20200122
  Updated patch 2 to account for the removal of pr_info in __isolate_free_page
Updated patch titles for patches 7, 8, and 9 to include mm/page_reporting

Changes from v16.1:
https://lore.kernel.org/lkml/20200122173040.6142.39116.stgit@localhost.localdomain/
Rebased QEMU patches to latest
Rebased on linux-next-20200211
  Tweaked cover page to more accurately describe hinting process
  Verified results have not changed in significant way

---

Alexander Duyck (9):
      mm: Adjust shuffle code to allow for future coalescing
      mm: Use zone and order instead of free area in free_list manipulators
      mm: Add function __putback_isolated_page
      mm: Introduce Reported pages
      virtio-balloon: Pull page poisoning config out of free page hinting
      virtio-balloon: Add support for providing free page reports to host
      mm/page_reporting: Rotate reported pages to the tail of the list
      mm/page_reporting: Add budget limit on how many pages can be reported per pass
      mm/page_reporting: Add free page reporting documentation


 Documentation/vm/free_page_reporting.rst |   41 +++
 drivers/virtio/Kconfig                   |    1 
 drivers/virtio/virtio_balloon.c          |   87 +++++++
 include/linux/mmzone.h                   |   44 ----
 include/linux/page-flags.h               |   11 +
 include/linux/page_reporting.h           |   26 ++
 include/uapi/linux/virtio_balloon.h      |    1 
 mm/Kconfig                               |   11 +
 mm/Makefile                              |    1 
 mm/internal.h                            |    2 
 mm/page_alloc.c                          |  164 ++++++++++----
 mm/page_isolation.c                      |    6 
 mm/page_reporting.c                      |  364 ++++++++++++++++++++++++++++++
 mm/page_reporting.h                      |   54 ++++
 mm/shuffle.c                             |   12 -
 mm/shuffle.h                             |    6 
 16 files changed, 725 insertions(+), 106 deletions(-)
 create mode 100644 Documentation/vm/free_page_reporting.rst
 create mode 100644 include/linux/page_reporting.h
 create mode 100644 mm/page_reporting.c
 create mode 100644 mm/page_reporting.h

--

Comments

Andrew Morton Feb. 11, 2020, 11:05 p.m. UTC | #1
On Tue, 11 Feb 2020 14:45:51 -0800 Alexander Duyck <alexander.duyck@gmail.com> wrote:

> This series provides an asynchronous means of reporting free guest pages
> to a hypervisor so that the memory associated with those pages can be
> dropped and reused by other processes and/or guests on the host. Using
> this it is possible to avoid unnecessary I/O to disk and greatly improve
> performance in the case of memory overcommit on the host.

"greatly improve" sounds nice.

> When enabled we will be performing a scan of free memory every 2 seconds
> while pages of sufficiently high order are being freed. In each pass at
> least one sixteenth of each free list will be reported. By doing this we
> avoid racing against other threads that may be causing a high amount of
> memory churn.
> 
> The lowest page order currently scanned when reporting pages is
> pageblock_order so that this feature will not interfere with the use of
> Transparent Huge Pages in the case of virtualization.
> 
> Currently this is only in use by virtio-balloon; however, the hope is
> that at some point in the future other hypervisors might be able to make
> use of it. In the virtio-balloon/QEMU implementation the hypervisor uses
> MADV_DONTNEED to indicate to the host kernel that the page is currently
> free. It will be zeroed and faulted back into the guest the next time the
> page is accessed.
> 
> To track if a page is reported or not the Uptodate flag was repurposed and
> used as a Reported flag for Buddy pages. We walk through the free list
> isolating pages and adding them to the scatterlist until we either
> encounter the end of the list or have processed at least one sixteenth of
> the pages that were listed in nr_free prior to us starting. If we fill the
> scatterlist before we reach the end of the list we rotate the list so that
> the first unreported page we encounter is moved to the head of the list as
> that is where we will resume after we have freed the reported pages back
> into the tail of the list.
> 
> Below are the results from various benchmarks. I primarily focused on two
> tests. The first is the will-it-scale/page_fault2 test, and the other is
> a modified version of will-it-scale/page_fault1 that was enabled to use
> THP. I did this as it allows for better visibility into different parts
> of the memory subsystem. The guest is running with 32G of RAM on one
> node of an E5-2630 v3. The host has had some features such as CPU turbo
> disabled in the BIOS.
> 
> Test                   page_fault1 (THP)    page_fault2
> Name            tasks  Process Iter  STDEV  Process Iter  STDEV
> Baseline            1    1012402.50  0.14%     361855.25  0.81%
>                    16    8827457.25  0.09%    3282347.00  0.34%
> 
> Patches Applied     1    1007897.00  0.23%     361887.00  0.26%
>                    16    8784741.75  0.39%    3240669.25  0.48%
> 
> Patches Enabled     1    1010227.50  0.39%     359749.25  0.56%
>                    16    8756219.00  0.24%    3226608.75  0.97%
> 
> Patches Enabled     1    1050982.00  4.26%     357966.25  0.14%
>  page shuffle      16    8672601.25  0.49%    3223177.75  0.40%
> 
> Patches enabled     1    1003238.00  0.22%     360211.00  0.22%
>  shuffle w/ RFC    16    8767010.50  0.32%    3199874.00  0.71%

But these differences seem really small - around 1%?  I think we're
just showing not much harm was caused?

> The results above are for a baseline with a linux-next-20191219 kernel,
> that kernel with this patch set applied but page reporting disabled in
> virtio-balloon, the patches applied and page reporting fully enabled, the
> patches enabled with page shuffling enabled, and the patches applied with
> page shuffling enabled and an RFC patch that makes use of MADV_FREE in
> QEMU. These results include the deviation seen between the average value
> reported here and the high and/or low value. I observed that during the
> test memory usage for the first three tests never dropped whereas with the
> patches fully enabled the VM would drop to using only a few GB of the
> host's memory when switching from memhog to page fault tests.

And this is the "great improvement", yes?

Is it possible to measure the end-user-visible benefits of this?

> Any of the overhead visible with this patch set enabled seems due to page
> faults caused by accessing the reported pages and the host zeroing the page
> before giving it back to the guest. This overhead is much more visible when
> using THP than with standard 4K pages. In addition, page shuffling seemed to
> increase the number of faults generated due to an increase in memory churn.
> The overhead is reduced when using MADV_FREE as we can avoid the extra
> zeroing of the pages when they are reintroduced to the host, as can be seen
> when the RFC is applied with shuffling enabled.
> 
> The overall guest size is kept fairly small, at only a few GB, while the test
> is running. If the host memory were oversubscribed this patch set should
> result in a performance improvement as swapping memory in the host can be
> avoided.

"should result".  Can we firm this up a lot?
Alexander Duyck Feb. 11, 2020, 11:55 p.m. UTC | #2
On Tue, 2020-02-11 at 15:05 -0800, Andrew Morton wrote:
> On Tue, 11 Feb 2020 14:45:51 -0800 Alexander Duyck <alexander.duyck@gmail.com> wrote:
> 
> > This series provides an asynchronous means of reporting free guest pages
> > to a hypervisor so that the memory associated with those pages can be
> > dropped and reused by other processes and/or guests on the host. Using
> > this it is possible to avoid unnecessary I/O to disk and greatly improve
> > performance in the case of memory overcommit on the host.
> 
> "greatly improve" sounds nice.
> 
> > When enabled we will be performing a scan of free memory every 2 seconds
> > while pages of sufficiently high order are being freed. In each pass at
> > least one sixteenth of each free list will be reported. By doing this we
> > avoid racing against other threads that may be causing a high amount of
> > memory churn.
> > 
> > The lowest page order currently scanned when reporting pages is
> > pageblock_order so that this feature will not interfere with the use of
> > Transparent Huge Pages in the case of virtualization.
> > 
> > Currently this is only in use by virtio-balloon; however, the hope is
> > that at some point in the future other hypervisors might be able to make
> > use of it. In the virtio-balloon/QEMU implementation the hypervisor uses
> > MADV_DONTNEED to indicate to the host kernel that the page is currently
> > free. It will be zeroed and faulted back into the guest the next time the
> > page is accessed.
> > 
> > To track if a page is reported or not the Uptodate flag was repurposed and
> > used as a Reported flag for Buddy pages. We walk through the free list
> > isolating pages and adding them to the scatterlist until we either
> > encounter the end of the list or have processed at least one sixteenth of
> > the pages that were listed in nr_free prior to us starting. If we fill the
> > scatterlist before we reach the end of the list we rotate the list so that
> > the first unreported page we encounter is moved to the head of the list as
> > that is where we will resume after we have freed the reported pages back
> > into the tail of the list.
> > 
> > Below are the results from various benchmarks. I primarily focused on two
> > tests. The first is the will-it-scale/page_fault2 test, and the other is
> > a modified version of will-it-scale/page_fault1 that was enabled to use
> > THP. I did this as it allows for better visibility into different parts
> > of the memory subsystem. The guest is running with 32G of RAM on one
> > node of an E5-2630 v3. The host has had some features such as CPU turbo
> > disabled in the BIOS.
> > 
> > Test                   page_fault1 (THP)    page_fault2
> > Name            tasks  Process Iter  STDEV  Process Iter  STDEV
> > Baseline            1    1012402.50  0.14%     361855.25  0.81%
> >                    16    8827457.25  0.09%    3282347.00  0.34%
> > 
> > Patches Applied     1    1007897.00  0.23%     361887.00  0.26%
> >                    16    8784741.75  0.39%    3240669.25  0.48%
> > 
> > Patches Enabled     1    1010227.50  0.39%     359749.25  0.56%
> >                    16    8756219.00  0.24%    3226608.75  0.97%
> > 
> > Patches Enabled     1    1050982.00  4.26%     357966.25  0.14%
> >  page shuffle      16    8672601.25  0.49%    3223177.75  0.40%
> > 
> > Patches enabled     1    1003238.00  0.22%     360211.00  0.22%
> >  shuffle w/ RFC    16    8767010.50  0.32%    3199874.00  0.71%
> 
> But these differences seem really small - around 1%?  I think we're
> just showing not much harm was caused?

Yes. Basically I am just showing the iterations are not negatively
impacted. The big difference between the cases where it is enabled versus
the cases where it is not is that the guest memory footprint is much
smaller in the enabled cases than in the baseline or "Applied" cases.

> > The results above are for a baseline with a linux-next-20191219 kernel,
> > that kernel with this patch set applied but page reporting disabled in
> > virtio-balloon, the patches applied and page reporting fully enabled, the
> > patches enabled with page shuffling enabled, and the patches applied with
> > page shuffling enabled and an RFC patch that makes use of MADV_FREE in
> > QEMU. These results include the deviation seen between the average value
> > reported here and the high and/or low value. I observed that during the
> > test memory usage for the first three tests never dropped whereas with the
> > patches fully enabled the VM would drop to using only a few GB of the
> > host's memory when switching from memhog to page fault tests.
> 
> And this is the "great improvement", yes?

Yes, this is the great improvement. Basically what we get is effectively
auto-ballooning: the guests don't have to go to swap when they start to get
loaded up, since they return the memory to the host when they are done with
it.

> Is it possible to measure the end-user-visible benefits of this?

If I clear the page cache on my host via drop_caches, fire up my 32G VM,
and run the following commands:
  memhog 32g
  echo 3 > /proc/sys/vm/drop_caches

On the host I just have to monitor /proc/meminfo and I can see the
difference. I get the following results on the host; in the enabled case
it takes about 30 seconds to settle into the final state since I only
report pages a bit at a time:
Baseline/Applied
  MemTotal:    131963012 kB
  MemFree:      95189740 kB

Enabled:
  MemTotal:    131963012 kB
  MemFree:     126459472 kB
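
A trivial watcher for that value (a sketch; any /proc/meminfo parser
will do) looks like:

  #include <stdio.h>
  #include <string.h>

  /* Sketch: print the MemFree line from /proc/meminfo, the value
   * monitored in the results above. */
  int main(void)
  {
          char line[128];
          FILE *f = fopen("/proc/meminfo", "r");

          if (!f)
                  return 1;
          while (fgets(line, sizeof(line), f))
                  if (!strncmp(line, "MemFree:", 8))
                          fputs(line, stdout);
          fclose(f);
          return 0;
  }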

This is what I was referring to with the comment above. I had a test I was
running back around the first RFC that consisted of bringing up enough VMs
so that there was a bit of memory overcommit and then having the VMs in
turn run memhog. As I recall, the difference between the two was something
like a couple of minutes to run through all the VMs, as memhog would take
up to 40+ seconds on a VM that had to pull from swap while it took only 5
to 7 seconds for the VMs that were all running the page hinting.

I had referenced it here in the RFC:
https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/

I have been verifying that the memory is getting freed, but I didn't feel
the test added much value, so I haven't included it in the cover page for
a while. The time can vary widely and depends on things like the disk type
used for the host swap: my SSD is likely faster than spinning rust, but
may not be as fast as other SSDs on the market. Since the disk speed can
play such a huge role, I wasn't comfortable posting numbers when the
benefits could vary so widely.

> > Any of the overhead visible with this patch set enabled seems due to page
> > faults caused by accessing the reported pages and the host zeroing the page
> > before giving it back to the guest. This overhead is much more visible when
> > using THP than with standard 4K pages. In addition page shuffling seemed to
> > increase the amount of faults generated due to an increase in memory churn.
> > The overehad is reduced when using MADV_FREE as we can avoid the extra
> > zeroing of the pages when they are reintroduced to the host, as can be seen
> > when the RFC is applied with shuffling enabled.
> > 
> > The overall guest size is kept fairly small to only a few GB while the test
> > is running. If the host memory were oversubscribed this patch set should
> > result in a performance improvement as swapping memory in the host can be
> > avoided.
> 
> "should result".  Can we firm this up a lot?

I said "should result" here because if the guests are using all of their
memory then the free page reporting won't make a difference since you have
to have free pages before they can be reported. Also we cannot use free
page reporting in the cases such as when a device is direct assigned into
the guest as that currently prevents us from disassociating a page from
the guest.
Andrew Morton Feb. 12, 2020, 12:19 a.m. UTC | #3
On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@linux.intel.com> wrote:

> On the host I just have to monitor /proc/meminfo and I can see the
> difference. I get the following results on the host; in the enabled case
> it takes about 30 seconds to settle into the final state since I only
> report pages a bit at a time:
> Baseline/Applied
>   MemTotal:    131963012 kB
>   MemFree:      95189740 kB
> 
> Enabled:
>   MemTotal:    131963012 kB
>   MemFree:     126459472 kB
> 
> This is what I was referring to with the comment above. I had a test I was
> running back around the first RFC that consisted of bringing up enough VMs
> so that there was a bit of memory overcommit and then having the VMs in
> turn run memhog. As I recall, the difference between the two was something
> like a couple of minutes to run through all the VMs, as memhog would take
> up to 40+ seconds on a VM that had to pull from swap while it took only 5
> to 7 seconds for the VMs that were all running the page hinting.
> 
> I had referenced it here in the RFC:
> https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> 
> I have been verifying that the memory is getting freed, but I didn't feel
> the test added much value, so I haven't included it in the cover page for
> a while. The time can vary widely and depends on things like the disk type
> used for the host swap: my SSD is likely faster than spinning rust, but
> may not be as fast as other SSDs on the market. Since the disk speed can
> play such a huge role, I wasn't comfortable posting numbers when the
> benefits could vary so widely.

OK, thanks.  I'll add the patches to the mm pile.  The new
mm/page_reporting.c is unreviewed afaict, so I guess you own that for
now ;)

It would be very nice to get some feedback from testers asserting "yes,
this really helped my workload" but I understand this sort of testing
is hard to obtain at this stage.
Alexander Duyck Feb. 12, 2020, 1:19 a.m. UTC | #4
On Tue, Feb 11, 2020 at 4:19 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@linux.intel.com> wrote:
>
> > On the host I just have to monitor /proc/meminfo and I can see the
> > difference. I get the following results on the host; in the enabled case
> > it takes about 30 seconds to settle into the final state since I only
> > report pages a bit at a time:
> > Baseline/Applied
> >   MemTotal:    131963012 kB
> >   MemFree:      95189740 kB
> >
> > Enabled:
> >   MemTotal:    131963012 kB
> >   MemFree:     126459472 kB
> >
> > This is what I was referring to with the comment above. I had a test I was
> > running back around the first RFC that consisted of bringing up enough VMs
> > so that there was a bit of memory overcommit and then having the VMs in
> > turn run memhog. As I recall, the difference between the two was something
> > like a couple of minutes to run through all the VMs, as memhog would take
> > up to 40+ seconds on a VM that had to pull from swap while it took only 5
> > to 7 seconds for the VMs that were all running the page hinting.
> >
> > I had referenced it here in the RFC:
> > https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> >
> > I have been verifying that the memory is getting freed, but I didn't feel
> > the test added much value, so I haven't included it in the cover page for
> > a while. The time can vary widely and depends on things like the disk type
> > used for the host swap: my SSD is likely faster than spinning rust, but
> > may not be as fast as other SSDs on the market. Since the disk speed can
> > play such a huge role, I wasn't comfortable posting numbers when the
> > benefits could vary so widely.
>
> OK, thanks.  I'll add the patches to the mm pile.  The new
> mm/page_reporting.c is unreviewed afaict, so I guess you own that for
> now ;)

I will see what I can do to get some additional review of those
patches. There has been some review, but I rewrote that block after
suggestions as I had to split it out over several patches to account
for the gains from the changes in patches 7 and 8.

> It would be very nice to get some feedback from testers asserting "yes,
> this really helped my workload" but I understand this sort of testing
> is hard to obtain at this stage.

Without the QEMU patches applied there isn't much that this patch set
can do on its own, so that is another piece I have to work on. Yet
another reason to make sure it does no harm if it is not enabled.

So far the thing that has surprised me the most is that somebody from
Huawei was already working to add device pass-thru support to it.
https://lore.kernel.org/lkml/1578408399-20092-1-git-send-email-weiqi4@huawei.com/

Thanks.

- Alex
Alexander Duyck Feb. 18, 2020, 4:37 p.m. UTC | #5
On Tue, 2020-02-11 at 16:19 -0800, Andrew Morton wrote:
> On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@linux.intel.com> wrote:
> 
> > On the host I just have to monitor /proc/meminfo and I can see the
> > difference. I get the following results on the host; in the enabled case
> > it takes about 30 seconds to settle into the final state since I only
> > report pages a bit at a time:
> > Baseline/Applied
> >   MemTotal:    131963012 kB
> >   MemFree:      95189740 kB
> > 
> > Enabled:
> >   MemTotal:    131963012 kB
> >   MemFree:     126459472 kB
> > 
> > This is what I was referring to with the comment above. I had a test I was
> > running back around the first RFC that consisted of bringing up enough VMs
> > so that there was a bit of memory overcommit and then having the VMs in
> > turn run memhog. As I recall, the difference between the two was something
> > like a couple of minutes to run through all the VMs, as memhog would take
> > up to 40+ seconds on a VM that had to pull from swap while it took only 5
> > to 7 seconds for the VMs that were all running the page hinting.
> > 
> > I had referenced it here in the RFC:
> > https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> > 
> > I have been verifying that the memory is getting freed, but I didn't feel
> > the test added much value, so I haven't included it in the cover page for
> > a while. The time can vary widely and depends on things like the disk type
> > used for the host swap: my SSD is likely faster than spinning rust, but
> > may not be as fast as other SSDs on the market. Since the disk speed can
> > play such a huge role, I wasn't comfortable posting numbers when the
> > benefits could vary so widely.
> 
> OK, thanks.  I'll add the patches to the mm pile.  The new
> mm/page_reporting.c is unreviewed afaict, so I guess you own that for
> now ;)
> 
> It would be very nice to get some feedback from testers asserting "yes,
> this really helped my workload" but I understand this sort of testing
> is hard to obtain at this stage.
> 

Mel,

Any ETA on when you would be available to review these patches? They are
now in Andrew's tree and in linux-next. I am hoping to get any remaining
review from the community sorted out in the next few weeks so I can move
on to focusing on how best to exert pressure on the page cache so that we
can keep the guest memory footprint small.

Thanks.

- Alex
Mel Gorman Feb. 19, 2020, 8:49 a.m. UTC | #6
On Tue, Feb 18, 2020 at 08:37:46AM -0800, Alexander Duyck wrote:
> On Tue, 2020-02-11 at 16:19 -0800, Andrew Morton wrote:
> > On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@linux.intel.com> wrote:
> > 
> > > On the host I just have to monitor /proc/meminfo and I can see the
> > > difference. I get the following results on the host; in the enabled case
> > > it takes about 30 seconds to settle into the final state since I only
> > > report pages a bit at a time:
> > > Baseline/Applied
> > >   MemTotal:    131963012 kB
> > >   MemFree:      95189740 kB
> > > 
> > > Enabled:
> > >   MemTotal:    131963012 kB
> > >   MemFree:     126459472 kB
> > > 
> > > This is what I was referring to with the comment above. I had a test I was
> > > running back around the first RFC that consisted of bringing up enough VMs
> > > so that there was a bit of memory overcommit and then having the VMs in
> > > turn run memhog. As I recall, the difference between the two was something
> > > like a couple of minutes to run through all the VMs, as memhog would take
> > > up to 40+ seconds on a VM that had to pull from swap while it took only 5
> > > to 7 seconds for the VMs that were all running the page hinting.
> > > 
> > > I had referenced it here in the RFC:
> > > https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> > > 
> > > I have been verifying that the memory is getting freed, but I didn't feel
> > > the test added much value, so I haven't included it in the cover page for
> > > a while. The time can vary widely and depends on things like the disk type
> > > used for the host swap: my SSD is likely faster than spinning rust, but
> > > may not be as fast as other SSDs on the market. Since the disk speed can
> > > play such a huge role, I wasn't comfortable posting numbers when the
> > > benefits could vary so widely.
> > 
> > OK, thanks.  I'll add the patches to the mm pile.  The new
> > mm/page_reporting.c is unreviewed afaict, so I guess you own that for
> > now ;)
> > 
> > It would be very nice to get some feedback from testers asserting "yes,
> > this really helped my workload" but I understand this sort of testing
> > is hard to obtain at this stage.
> > 
> 
> Mel,
> 
> Any ETA on when you would be available to review these patches? They are
> now in Andrew's tree and in linux-next. I am hoping to get any remaining
> review from the community sorted out in the next few weeks so I can move
> on to focusing on how best to exert pressure on the page cache so that we
> can keep the guest memory footprint small.
> 

I hope to get to it soon. I'm trying to finalise a scheduler-related
series that reconciles NUMA and CPU balancing, and it's occupying much of
the attention I have available for mainline development :(
Mel Gorman Feb. 19, 2020, 3:06 p.m. UTC | #7
On Tue, Feb 18, 2020 at 08:37:46AM -0800, Alexander Duyck wrote:
> On Tue, 2020-02-11 at 16:19 -0800, Andrew Morton wrote:
> > On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@linux.intel.com> wrote:
> > 
> > > On the host I just have to monitor /proc/meminfo and I can see the
> > > difference. I get the following results on the host; in the enabled case
> > > it takes about 30 seconds to settle into the final state since I only
> > > report pages a bit at a time:
> > > Baseline/Applied
> > >   MemTotal:    131963012 kB
> > >   MemFree:      95189740 kB
> > > 
> > > Enabled:
> > >   MemTotal:    131963012 kB
> > >   MemFree:     126459472 kB
> > > 
> > > This is what I was referring to with the comment above. I had a test I was
> > > running back around the first RFC that consisted of bringing up enough VMs
> > > so that there was a bit of memory overcommit and then having the VMs in
> > > turn run memhog. As I recall, the difference between the two was something
> > > like a couple of minutes to run through all the VMs, as memhog would take
> > > up to 40+ seconds on a VM that had to pull from swap while it took only 5
> > > to 7 seconds for the VMs that were all running the page hinting.
> > > 
> > > I had referenced it here in the RFC:
> > > https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> > > 
> > > I have been verifying that the memory is getting freed, but I didn't feel
> > > the test added much value, so I haven't included it in the cover page for
> > > a while. The time can vary widely and depends on things like the disk type
> > > used for the host swap: my SSD is likely faster than spinning rust, but
> > > may not be as fast as other SSDs on the market. Since the disk speed can
> > > play such a huge role, I wasn't comfortable posting numbers when the
> > > benefits could vary so widely.
> > 
> > OK, thanks.  I'll add the patches to the mm pile.  The new
> > mm/page_reporting.c is unreviewed afaict, so I guess you own that for
> > now ;)
> > 
> > It would be very nice to get some feedback from testers asserting "yes,
> > this really helped my workload" but I understand this sort of testing
> > is hard to obtain at this stage.
> > 
> 
> Mel,
> 
> Any ETA on when you would be available to review these patches? They are
> now in Andrew's tree and in linux-next. I am hoping to get any remaining
> review from the community sorted out in the next few weeks so I can move
> on to focusing on how best to exert pressure on the page cache so that we
> can keep the guest memory footprint small.
> 

Sorry for the delay but I got through most of the patches -- I ignored
the qemu ones. Overall I think it's fine; while I made a suggestion on
how one section could be a bit more defensive, I don't consider it
mandatory to update the series.

Last time I got upset by the use of the zone lock, premature optimisation
and some potential page allocator overhead. Now the way you use the zone
lock is justified and I think the state managed by atomics is ok. The
optimisations are split out, so if there is a bug lurking in there they
can be backed out relatively easily, and it was easier to review the
series in this way. Finally, the overhead to the page allocator when the
feature is disabled should be non-existent as it's protected by a static
branch, so I have no further objections.
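
For reference, the static branch in question is the key the series adds
in mm/page_reporting.h; in rough outline (a sketch, not a verbatim copy):

  /* With the key disabled this check compiles down to a patched-out
   * branch, so the free path pays nothing. */
  DECLARE_STATIC_KEY_FALSE(page_reporting_enabled);

  static inline void page_reporting_notify_free(unsigned int order)
  {
          if (!static_branch_unlikely(&page_reporting_enabled))
                  return;
          /* ... wake the reporting worker ... */
  }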

Thanks!