
[0/2] Fix hugetlb free path race with memory errors

Message ID 20230711220942.43706-1-mike.kravetz@oracle.com (mailing list archive)

Message

Mike Kravetz July 11, 2023, 10:09 p.m. UTC
A race window was discovered during the discussion of Jiaqi Yan's series
"Improve hugetlbfs read on HWPOISON hugepages":
https://lore.kernel.org/linux-mm/20230616233447.GB7371@monkey/

Freeing a hugetlb page back to the low level memory allocators is performed
in two steps:
1) Under the hugetlb lock, remove the page from hugetlb lists and clear
   the destructor
2) Outside the lock, allocate vmemmap if necessary and call the low level
   free routine
Between these two steps, the hugetlb page appears as a normal compound
page.  However, vmemmap for the tail pages could be missing.  If a memory
error occurs at this time, we could try to update page flags in
non-existent page structs.

A much more detailed description is in the first patch.

The first patch addresses the race window.  However, it adds a
hugetlb_lock lock/unlock cycle to every vmemmap optimized hugetlb
page free operation.  This could lead to slowdowns if one is freeing
a large number of hugetlb pages.

The second patch optimizes the update_and_free_pages_bulk routine
to take the lock only once per bulk operation.

The second patch is technically not a bug fix, but includes a Fixes
tag and a Cc stable to avoid a performance regression.  It could be
combined with the first, but was kept separate to make reviewing easier.

Mike Kravetz (2):
  hugetlb: Do not clear hugetlb dtor until allocating vmemmap
  hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles

 mm/hugetlb.c | 110 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 85 insertions(+), 25 deletions(-)

Comments

Andrew Morton July 13, 2023, 5:34 p.m. UTC | #1
On Tue, 11 Jul 2023 15:09:40 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> The first patch addresses the race window.  However, it adds a
> hugetlb_lock lock/unlock cycle to every vmemmap optimized hugetlb
> page free operation.  This could lead to slowdowns if one is freeing
> a large number of hugetlb pages.
> 
> The second patch optimizes the update_and_free_pages_bulk routine
> to take the lock only once per bulk operation.
> 
> The second patch is technically not a bug fix, but includes a Fixes
> tag and a Cc stable to avoid a performance regression.  It could be
> combined with the first, but was kept separate to make reviewing easier.
> 

I feel that backporting performance improvements into -stable is not a
usual thing to do.  Perhaps the fact that it's a regression fix changes
this, but why?

Much hinges on the magnitude of the performance change.  Are you able
to quantify this at all?