
[v2,1/3] mm: enable MADV_DONTNEED for hugetlb mappings

Message ID 20220202014034.182008-2-mike.kravetz@oracle.com (mailing list archive)
State New
Series Add hugetlb MADV_DONTNEED support

Commit Message

Mike Kravetz Feb. 2, 2022, 1:40 a.m. UTC
MADV_DONTNEED is currently disabled for hugetlb mappings.  This
certainly makes sense in shared file mappings as the pagecache maintains
a reference to the page and it will never be freed.  However, it could
be useful to unmap and free pages in private mappings.

The only thing preventing MADV_DONTNEED from working on hugetlb mappings
is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
create and use a new routine madvise_dontneed_free_valid_vma() that will
allow hugetlb mappings.  Also, before calling zap_page_range in the
DONTNEED case align start and size to huge page size for hugetlb vmas.
madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
requires huge page size alignment.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/madvise.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

Comments

David Hildenbrand Feb. 2, 2022, 8:14 a.m. UTC | #1
On 02.02.22 02:40, Mike Kravetz wrote:
> MADV_DONTNEED is currently disabled for hugetlb mappings.  This
> certainly makes sense in shared file mappings as the pagecache maintains
> a reference to the page and it will never be freed.  However, it could
> be useful to unmap and free pages in private mappings.
> 
> The only thing preventing MADV_DONTNEED from working on hugetlb mappings
> is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
> create and use a new routine madvise_dontneed_free_valid_vma() that will
> allow hugetlb mappings.  Also, before calling zap_page_range in the
> DONTNEED case align start and size to huge page size for hugetlb vmas.
> madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
> requires huge page size alignment.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/madvise.c | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5604064df464..7ae891e030a4 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -796,10 +796,30 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>  					unsigned long start, unsigned long end)
>  {
> +	/*
> +	 * start and size (end - start) must be huge page size aligned
> +	 * for hugetlb vmas.
> +	 */
> +	if (is_vm_hugetlb_page(vma)) {
> +		struct hstate *h = hstate_vma(vma);
> +
> +		start = ALIGN_DOWN(start, huge_page_size(h));
> +		end = ALIGN(end, huge_page_size(h));

So you effectively extend the range silently. IIUC, if someone would zap
a 4k range you would implicitly zap a whole 2M page and effectively zero
out more data than requested.
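
To make the concern concrete, a small hypothetical example (2 MiB huge page
size assumed, addresses made up for illustration):

	/* caller asks to zap one 4 KiB page inside a huge page */
	start = 0x201000;	end = 0x202000;

	start = ALIGN_DOWN(start, 2UL << 20);	/* -> 0x200000 */
	end   = ALIGN(end, 2UL << 20);		/* -> 0x400000 */

	/* the whole 2 MiB page is zapped, 511 pages more than requested */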


Looking at do_madvise(), we:
(1) reject start addresses that are not page-aligned
(2) shrink lengths that are not page-aligned and refuse if it turns 0

The man page documents (1) but doesn't really document (2).

Naturally I'd have assumed that we apply the same logic to huge page
sizes and document it in the man page accordingly.


Why did you decide to extend the range? I'd assume MADV_REMOVE behaves
like FALLOC_FL_PUNCH_HOLE:
  "Within the specified range, partial filesystem blocks are zeroed, and
   whole filesystem blocks are removed from the file.  After a
   successful call, subsequent reads from  this  range will return
   zeros."
So we don't "discard more than requested".


I see the following possible alternatives:
(a) Fail if the range is not aligned
-> Clear semantics
(b) Fail if the start is not aligned, shrink the end if required
-> Same rules as for PAGE_SIZE
(c) Zero out the requested part
-> Same semantics as FALLOC_FL_PUNCH_HOLE.

My preference would be a), properly documenting it in the man page.
Mike Kravetz Feb. 2, 2022, 7:32 p.m. UTC | #2
On 2/2/22 00:14, David Hildenbrand wrote:
> On 02.02.22 02:40, Mike Kravetz wrote:
>> MADV_DONTNEED is currently disabled for hugetlb mappings.  This
>> certainly makes sense in shared file mappings as the pagecache maintains
>> a reference to the page and it will never be freed.  However, it could
>> be useful to unmap and free pages in private mappings.
>>
>> The only thing preventing MADV_DONTNEED from working on hugetlb mappings
>> is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
>> create and use a new routine madvise_dontneed_free_valid_vma() that will
>> allow hugetlb mappings.  Also, before calling zap_page_range in the
>> DONTNEED case align start and size to huge page size for hugetlb vmas.
>> madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
>> requires huge page size alignment.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>>  mm/madvise.c | 24 ++++++++++++++++++++++--
>>  1 file changed, 22 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 5604064df464..7ae891e030a4 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -796,10 +796,30 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>>  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>>  					unsigned long start, unsigned long end)
>>  {
>> +	/*
>> +	 * start and size (end - start) must be huge page size aligned
>> +	 * for hugetlb vmas.
>> +	 */
>> +	if (is_vm_hugetlb_page(vma)) {
>> +		struct hstate *h = hstate_vma(vma);
>> +
>> +		start = ALIGN_DOWN(start, huge_page_size(h));
>> +		end = ALIGN(end, huge_page_size(h));
> 
> So you effectively extend the range silently. IIUC, if someone would zap
> a 4k range you would implicitly zap a whole 2M page and effectively zero
> out more data than requested.
> 
> 
> Looking at do_madvise(), we:
> (1) reject start addresses that are not page-aligned
> (2) shrink lengths that are not page-aligned and refuse if it turns 0

I believe length is extended (rounded up) by this line:
	len = PAGE_ALIGN(len_in);

but, I see your point.
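
For reference, a simplified sketch of the do_madvise() argument handling
being discussed (paraphrased, not the exact upstream code):

	if (!PAGE_ALIGNED(start))
		return -EINVAL;		/* (1) reject an unaligned start */

	len = PAGE_ALIGN(len_in);	/* length is rounded up ... */
	if (len_in && !len)		/* ... unless rounding a negative
					   len_in wrapped it to zero */
		return -EINVAL;
	end = start + len;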

> The man page documents (1) but doesn't really document (2).
> 
> Naturally I'd have assumed that we apply the same logic to huge page
> sizes and document it in the man page accordingly.
> 
> 
> Why did you decide to extend the range? I'd assume MADV_REMOVE behaves
> like FALLOC_FL_PUNCH_HOLE:
>   "Within the specified range, partial filesystem blocks are zeroed, and
>    whole filesystem blocks are removed from the file.  After a
>    successful call, subsequent reads from  this  range will return
>    zeros."
> So we don't "discard more than requested".

Well.  hugetlbfs does not follow the man page. :(  It does not zero
partial blocks.  I assume a filesystem block would be a huge page.
Instead it does,

        /*
         * For hole punch round up the beginning offset of the hole and
         * round down the end.
         */
        hole_start = round_up(offset, hpage_size);
        hole_end = round_down(offset + len, hpage_size);
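
With the same hypothetical 2 MiB huge pages, punching a hole over a range
that does not cover a full huge page therefore removes nothing (illustrative
numbers only):

	hole_start = round_up(0x201000, 2UL << 20);	/* -> 0x400000 */
	hole_end   = round_down(0x3ff000, 2UL << 20);	/* -> 0x200000 */
	/* hole_end < hole_start: no whole huge page covered, nothing freed */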

So, not only is this patch not following the man page.  It is not even
following the existing MADV_REMOVE hugetlb code.  Thanks for pointing
that out.  Part of my reason for adding this functionality was to make
hugetlb be more like 'normal' memory.  I clearly failed.

Related comment about madvise man page for PAGE_SIZE MADV_REMOVE.  The man
page says.

       MADV_REMOVE (since Linux 2.6.16)
              Free up a given range of pages and its associated backing store.
              This is equivalent to punching a hole in the corresponding byte
              range of the backing store (see fallocate(2)).  Subsequent
              accesses in the specified address range will see bytes
              containing zero.

This may need some clarification.  It says it will free pages.  We know
madvise only operates on pages (PAGE_ALIGN(len)).  Yet, the statement about
equivalent to a fallocate byte range may lead one to believe that length is
treated the same in madvise and fallocate.

> I see the following possible alternatives:
> (a) Fail if the range is not aligned
> -> Clear semantics
> (b) Fail if the start is not aligned, shrink the end if required
> -> Same rules as for PAGE_SIZE
> (c) Zero out the requested part
> -> Same semantics as FALLOC_FL_PUNCH_HOLE.
> 
> My preference would be a), properly documenting it in the man page.

However, a) would make hugetlb behave differently than other memory as
len does not need to be aligned.

I would prefer b) as it is more in line with PAGE_SIZE.  But, that does
make it different than MADV_REMOVE hugetlb alignment.

I thought this was simple. :)
David Hildenbrand Feb. 4, 2022, 8:35 a.m. UTC | #3
>>> +	/*
>>> +	 * start and size (end - start) must be huge page size aligned
>>> +	 * for hugetlb vmas.
>>> +	 */
>>> +	if (is_vm_hugetlb_page(vma)) {
>>> +		struct hstate *h = hstate_vma(vma);
>>> +
>>> +		start = ALIGN_DOWN(start, huge_page_size(h));
>>> +		end = ALIGN(end, huge_page_size(h));
>>
>> So you effectively extend the range silently. IIUC, if someone would zap
>> a 4k range you would implicitly zap a whole 2M page and effectively zero
>> out more data than requested.
>>
>>
>> Looking at do_madvise(), we:
>> (1) reject start addresses that are not page-aligned
>> (2) shrink lengths that are not page-aligned and refuse if it turns 0
> 
> I believe length is extended (rounded up) by this line:
> 	len = PAGE_ALIGN(len_in);

Ah, right. I was confused by the "!len" check below that, but the
comment explains how this applies to negative values only.

> 
> but, I see your point.
> 
>> The man page documents (1) but doesn't really document (2).
>>
>> Naturally I'd have assumed that we apply the same logic to huge page
>> sizes and document it in the man page accordingly.
>>
>>
>> Why did you decide to extend the range? I'd assume MADV_REMOVE behaves
>> like FALLOC_FL_PUNCH_HOLE:
>>   "Within the specified range, partial filesystem blocks are zeroed, and
>>    whole filesystem blocks are removed from the file.  After a
>>    successful call, subsequent reads from  this  range will return
>>    zeros."
>> So we don't "discard more than requested".
> 
> Well.  hugetlbfs does not follow the man page. :(  It does not zero
> partial blocks.  I assume a filesystem block would be a huge page.
> Instead it does,
> 
>         /*
>          * For hole punch round up the beginning offset of the hole and
>          * round down the end.
>          */
>         hole_start = round_up(offset, hpage_size);
>         hole_end = round_down(offset + len, hpage_size);

Okay, so we skip any zeroing and only free completely covered blocks. We
might want to document that behavior. See below.

> 
> So, not only is this patch not following the man page.  It is not even
> following the existing MADV_REMOVE hugetlb code.  Thanks for pointing
> that out.  Part of my reason for adding this functionality was to make
> hugetlb be more like 'normal' memory.  I clearly failed.

:)

> 
> Related comment about madvise man page for PAGE_SIZE MADV_REMOVE.  The man
> page says.
> 
>        MADV_REMOVE (since Linux 2.6.16)
>               Free up a given range of pages and its associated backing store.
>               This is equivalent to punching a hole in the corresponding byte
>               range of the backing store (see fallocate(2)).  Subsequent
>               accesses in the specified address range will see bytes
>               containing zero.
> 
> This may need some clarification.  It says it will free pages.  We know
> madvise only operates on pages (PAGE_ALIGN(len)).  Yet, the statement about
> equivalent to a fallocate byte range may lead one to believe that length is
> treated the same in madvise and fallocate.

Yes

> 
>> I see the following possible alternatives:
>> (a) Fail if the range is not aligned
>> -> Clear semantics
>> (b) Fail if the start is not aligned, shrink the end if required
>> -> Same rules as for PAGE_SIZE
>> (c) Zero out the requested part
>> -> Same semantics as FALLOC_FL_PUNCH_HOLE.
>>
>> My preference would be a), properly documenting it in the man page.
> 
> However, a) would make hugetlb behave differently than other memory as
> len does not need to be aligned.
> 
> I would prefer b) as it is more in line with PAGE_SIZE.  But, that does
> make it different than MADV_REMOVE hugetlb alignment.
> 
> I thought this was simple. :)

It really bugs me that it's under-specified what's supposed to happen
when the length is not aligned.

BUT: in the posix world, "calling posix_madvise() shall not affect the
semantics of access to memory in the specified range". So we don't care
too much about if we align up/down, because it wouldn't affect the
semantics. Especially for MADV_DONTNEED/MADV_REMOVE as implemented by
Linux this is certainly different and the alignment handling matters.

So I guess especially for MADV_DONTNEED/MADV_REMOVE we need a clear
specification what's supposed to happen if the length falls into the
middle of a huge page. We should document alignment handling for
madvise() in general I assume.

IMHO we should have bailed out right from the start whenever something
is not properly aligned, but that ship has sailed. So I agree, maybe we
can make at least hugetlb MADV_DONTNEED obey the same (weird) rules as
ordinary pages.

So b) would mean, requiring start to be hugepage aligned and aligning-up
the end. Still feels wrong but at least matches existing semantics.

Hugetlb MADV_REMOVE semantics are unfortunate and we should document the
exception.
Mike Kravetz Feb. 7, 2022, 11:47 p.m. UTC | #4
On 2/4/22 00:35, David Hildenbrand wrote:
>> I thought this was simple. :)
> 
> It really bugs me that it's under-specified what's supposed to happen
> when the length is not aligned.
> 
> BUT: in the posix world, "calling posix_madvise() shall not affect the
> semantics of access to memory in the specified range". So we don't care
> too much about if we align up/down, because it wouldn't affect the
> semantics. Especially for MADV_DONTNEED/MADV_REMOVE as implemented by
> Linux this is certainly different and the alignment handling matters.
> 
> So I guess especially for MADV_DONTNEED/MADV_REMOVE we need a clear
> specification what's supposed to happen if the length falls into the
> middle of a huge page. We should document alignment handling for
> madvise() in general I assume.
> 
> IMHO we should have bailed out right from the start whenever something
> is not properly aligned, but that ship has sailed. So I agree, maybe we
> can make at least hugetlb MADV_DONTNEED obey the same (weird) rules as
> ordinary pages.
> 
> So b) would mean, requiring start to be hugepage aligned and aligning-up
> the end. Still feels wrong but at least matches existing semantics.
> 
> Hugetlb MADV_REMOVE semantics are unfortunate and we should document the
> exception.

Thank you for all your comments David!

So, my plan was to make MADV_DONTNEED behave as described above:
- EINVAL if start address not huge page size aligned
- Align end/length up to huge page size.

The code I had for this was very specific to MADV_DONTNEED.  I then thought,
why not do the same for MADV_REMOVE as well?  Or even more general, add this
check and alignment to the vma parsing code in madvise.

It was then that I realized there are several madvise behaviors that take
non-huge page size aligned addresses for hugetlb mappings today.  Making
huge page size alignment a requirement for all madvise behaviors could break
existing code.  So, it seems like it could only be added to MADV_DONTNEED as
this functionality does not exist today.  We then end up with MADV_DONTNEED
as the only behavior requiring huge page size alignment for hugetlb mappings.
Sigh!!!
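
A minimal sketch of the plan above (EINVAL on an unaligned start, round the
end up) as it might appear in madvise_dontneed_single_vma() -- hypothetical,
not the code that was posted:

	if (is_vm_hugetlb_page(vma)) {
		struct hstate *h = hstate_vma(vma);

		if (!IS_ALIGNED(start, huge_page_size(h)))
			return -EINVAL;	/* start must be huge page aligned */
		end = ALIGN(end, huge_page_size(h));	/* length rounds up */
	}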

I am now rethinking the decision to proceed with b) as described above.

With the exception of MADV_REMOVE (which we may be able to change for
hugetlb),  madvise operations operate on huge page size pages for hugetlb
mappings.  If start address is in the middle of a hugetlb page, we essentially
align down to the beginning of the hugetlb page.  If length lands in the
middle of a hugetlb page, we essentially round up.

When adding MADV_REMOVE perhaps we should go with this align down start and
align up end strategy that is used everywhere else?  I really wish we could
go back and change things, but as you know it is too late for that.
Peter Xu Feb. 10, 2022, 3:21 a.m. UTC | #5
(Sorry for the late comment)

On Tue, Feb 01, 2022 at 05:40:32PM -0800, Mike Kravetz wrote:
> MADV_DONTNEED is currently disabled for hugetlb mappings.  This
> certainly makes sense in shared file mappings as the pagecache maintains
> a reference to the page and it will never be freed.  However, it could
> be useful to unmap and free pages in private mappings.
> 
> The only thing preventing MADV_DONTNEED from working on hugetlb mappings
> is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
> create and use a new routine madvise_dontneed_free_valid_vma() that will
> allow hugetlb mappings.  Also, before calling zap_page_range in the
> DONTNEED case align start and size to huge page size for hugetlb vmas.
> madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
> requires huge page size alignment.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/madvise.c | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5604064df464..7ae891e030a4 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -796,10 +796,30 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>  					unsigned long start, unsigned long end)
>  {
> +	/*
> +	 * start and size (end - start) must be huge page size aligned
> +	 * for hugetlb vmas.
> +	 */
> +	if (is_vm_hugetlb_page(vma)) {
> +		struct hstate *h = hstate_vma(vma);
> +
> +		start = ALIGN_DOWN(start, huge_page_size(h));
> +		end = ALIGN(end, huge_page_size(h));
> +	}
> +

Maybe check the alignment before userfaultfd_remove()?  Otherwise there'll be a
fake message generated to the tracer.

>  	zap_page_range(vma, start, end - start);
>  	return 0;
>  }
>  
> +static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
> +						int behavior)
> +{
> +	if (is_vm_hugetlb_page(vma))
> +		return behavior == MADV_DONTNEED;
> +	else
> +		return can_madv_lru_vma(vma);
> +}

can_madv_lru_vma() will check hugetlb again which looks a bit weird.  Would it
look better to write it as:

madvise_dontneed_free_valid_vma()
{
    return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP));
}

can_madv_lru_vma()
{
    return madvise_dontneed_free_valid_vma() && !is_vm_hugetlb_page(vma);
}

?
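
Filled out, that restructuring might look roughly like the sketch below
(note the "hugetlb only allows MADV_DONTNEED" restriction from the patch
would still need to be enforced for hugetlb vmas):

static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
					    int behavior)
{
	return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP));
}

static bool can_madv_lru_vma(struct vm_area_struct *vma)
{
	return madvise_dontneed_free_valid_vma(vma, MADV_DONTNEED) &&
	       !is_vm_hugetlb_page(vma);
}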

Another use case of DONTNEED upon hugetlbfs could be uffd-minor, because afaiu
this is the only api that can force strip the hugetlb mapped pgtable without
losing pagecache data.

Thanks,
David Hildenbrand Feb. 10, 2022, 1:09 p.m. UTC | #6
On 08.02.22 00:47, Mike Kravetz wrote:
> On 2/4/22 00:35, David Hildenbrand wrote:
>>> I thought this was simple. :)
>>
>> It really bugs me that it's under-specified what's supposed to happen
>> when the length is not aligned.
>>
>> BUT: in the posix world, "calling posix_madvise() shall not affect the
>> semantics of access to memory in the specified range". So we don't care
>> too much about if we align up/down, because it wouldn't affect the
>> semantics. Especially for MADV_DONTNEED/MADV_REMOVE as implemented by
>> Linux this is certainly different and the alignment handling matters.
>>
>> So I guess especially for MADV_DONTNEED/MADV_REMOVE we need a clear
>> specification what's supposed to happen if the length falls into the
>> middle of a huge page. We should document alignment handling for
>> madvise() in general I assume.
>>
>> IMHO we should have bailed out right from the start whenever something
>> is not properly aligned, but that ship has sailed. So I agree, maybe we
>> can make at least hugetlb MADV_DONTNEED obey the same (weird) rules as
>> ordinary pages.
>>
>> So b) would mean, requiring start to be hugepage aligned and aligning-up
>> the end. Still feels wrong but at least matches existing semantics.
>>
>> Hugetlb MADV_REMOVE semantics are unfortunate and we should document the
>> exception.
> 
> Thank you for all your comments David!
> 
> So, my plan was to make MADV_DONTNEED behave as described above:
> - EINVAL if start address not huge page size aligned
> - Align end/length up to huge page size.
> 
> The code I had for this was very specific to MADV_DONTNEED.  I then thought,
> why not do the same for MADV_REMOVE as well?  Or even more general, add this
> check and alignment to the vma parsing code in madvise.
> 
> It was then that I realized there are several madvise behaviors that take
> non-huge page size aligned addresses for hugetlb mappings today.  Making
> huge page size alignment a requirement for all madvise behaviors could break
> existing code.  So, it seems like it could only be added to MADV_DONTNEED as
> this functionality does not exist today.  We then end up with MADV_DONTNEED
> as the only behavior requiring huge page size alignment for hugetlb mappings.
> Sigh!!!

:/

> 
> I am now rethinking the decision to proceed with b) as described above.
> 
> With the exception of MADV_REMOVE (which we may be able to change for
> hugetlb),  madvise operations operate on huge page size pages for hugetlb
> mappings.  If start address is in the middle of a hugetlb page, we essentially
> align down to the beginning of the hugetlb page.  If length lands in the
> middle of a hugetlb page, we essentially round up.

Which MADV calls would be affected?

The "bad" thing about MADV_DONTNEED and MADV_REMOVE are that they
destroy data, which is why they heavily diverge from the original posix
madvise odea.

> 
> When adding MADV_REMOVE perhaps we should go with this align down start and
> align up end strategy that is used everywhere else?  I really wish we could
> go back and change things, but as you know it is too late for that.

I assume whatever we do, we should document it at least cleanly in the
man page. Unfortunately, hugetlb is a gift that keeps on giving. Making
it at least somehow consistent, even if it's "hugetlb being consistent in
its own mess", that would be preferable I guess.
Mike Kravetz Feb. 10, 2022, 9:36 p.m. UTC | #7
On 2/9/22 19:21, Peter Xu wrote:
> (Sorry for the late comment)

Thanks for taking a look.

> 
> On Tue, Feb 01, 2022 at 05:40:32PM -0800, Mike Kravetz wrote:
>> MADV_DONTNEED is currently disabled for hugetlb mappings.  This
>> certainly makes sense in shared file mappings as the pagecache maintains
>> a reference to the page and it will never be freed.  However, it could
>> be useful to unmap and free pages in private mappings.
>>
>> The only thing preventing MADV_DONTNEED from working on hugetlb mappings
>> is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
>> create and use a new routine madvise_dontneed_free_valid_vma() that will
>> allow hugetlb mappings.  Also, before calling zap_page_range in the
>> DONTNEED case align start and size to huge page size for hugetlb vmas.
>> madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
>> requires huge page size alignment.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>>  mm/madvise.c | 24 ++++++++++++++++++++++--
>>  1 file changed, 22 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 5604064df464..7ae891e030a4 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -796,10 +796,30 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>>  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>>  					unsigned long start, unsigned long end)
>>  {
>> +	/*
>> +	 * start and size (end - start) must be huge page size aligned
>> +	 * for hugetlb vmas.
>> +	 */
>> +	if (is_vm_hugetlb_page(vma)) {
>> +		struct hstate *h = hstate_vma(vma);
>> +
>> +		start = ALIGN_DOWN(start, huge_page_size(h));
>> +		end = ALIGN(end, huge_page_size(h));
>> +	}
>> +
> 
> Maybe check the alignment before userfaultfd_remove()?  Otherwise there'll be a
> fake message generated to the tracer.

Yes, we should pass the aligned addresses to userfaultfd_remove.  We will
also need to potentially align again after the call.
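
In madvise_dontneed_free(), that might look roughly like the following
(a sketch of the idea, not a tested change):

	*prev = vma;
	if (!madvise_dontneed_free_valid_vma(vma, behavior))
		return -EINVAL;

	/*
	 * Align before userfaultfd_remove() so the tracer sees the range
	 * that will actually be zapped; re-align if the vma has to be
	 * looked up again after mmap_lock is retaken.
	 */
	if (is_vm_hugetlb_page(vma)) {
		struct hstate *h = hstate_vma(vma);

		start = ALIGN_DOWN(start, huge_page_size(h));
		end = ALIGN(end, huge_page_size(h));
	}

	if (!userfaultfd_remove(vma, start, end)) {
		/* ... */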

> 
>>  	zap_page_range(vma, start, end - start);
>>  	return 0;
>>  }
>>  
>> +static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
>> +						int behavior)
>> +{
>> +	if (is_vm_hugetlb_page(vma))
>> +		return behavior == MADV_DONTNEED;
>> +	else
>> +		return can_madv_lru_vma(vma);
>> +}
> 
> can_madv_lru_vma() will check hugetlb again which looks a bit weird.  Would it
> look better to write it as:
> 
> madvise_dontneed_free_valid_vma()
> {
>     return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP));
> }
> 
> can_madv_lru_vma()
> {
>     return madvise_dontneed_free_valid_vma() && !is_vm_hugetlb_page(vma);
> }
> 
> ?

Yes, that would look better.

> 
> Another use case of DONTNEED upon hugetlbfs could be uffd-minor, because afaiu
> this is the only api that can force strip the hugetlb mapped pgtable without
> losing pagecache data.
> 

Correct.  However, I do not know if uffd-minor users would ever want to
do this.  Perhaps?
Mike Kravetz Feb. 10, 2022, 10:11 p.m. UTC | #8
On 2/10/22 05:09, David Hildenbrand wrote:
> On 08.02.22 00:47, Mike Kravetz wrote:
>> On 2/4/22 00:35, David Hildenbrand wrote:
>>>> I thought this was simple. :)
>>>
>>> It really bugs me that it's under-specified what's supposed to happen
>>> when the length is not aligned.
>>>
>>> BUT: in the posix world, "calling posix_madvise() shall not affect the
>>> semantics of access to memory in the specified range". So we don't care
>>> too much about if we align up/down, because it wouldn't affect the
>>> semantics. Especially for MADV_DONTNEED/MADV_REMOVE as implemented by
>>> Linux this is certainly different and the alignment handling matters.
>>>
>>> So I guess especially for MADV_DONTNEED/MADV_REMOVE we need a clear
>>> specification what's supposed to happen if the length falls into the
>>> middle of a huge page. We should document alignment handling for
>>> madvise() in general I assume.
>>>
>>> IMHO we should have bailed out right from the start whenever something
>>> is not properly aligned, but that ship has sailed. So I agree, maybe we
>>> can make at least hugetlb MADV_DONTNEED obey the same (weird) rules as
>>> ordinary pages.
>>>
>>> So b) would mean, requiring start to be hugepage aligned and aligning-up
>>> the end. Still feels wrong but at least matches existing semantics.
>>>
>>> Hugetlb MADV_REMOVE semantics are unfortunate and we should document the
>>> exception.
>>
>> Thank you for all your comments David!
>>
>> So, my plan was to make MADV_DONTNEED behave as described above:
>> - EINVAL if start address not huge page size aligned
>> - Align end/length up to huge page size.
>>
>> The code I had for this was very specific to MADV_DONTNEED.  I then thought,
>> why not do the same for MADV_REMOVE as well?  Or even more general, add this
>> check and alignment to the vma parsing code in madvise.
>>
>> It was then that I realized there are several madvise behaviors that take
>> non-huge page size aligned addresses for hugetlb mappings today.  Making
>> huge page size alignment a requirement for all madvise behaviors could break
>> existing code.  So, it seems like it could only be added to MADV_DONTNEED as
>> this functionality does not exist today.  We then end up with MADV_DONTNEED
>> as the only behavior requiring huge page size alignment for hugetlb mappings.
>> Sigh!!!
> 
> :/
> 
>>
>> I am now rethinking the decision to proceed with b) as described above.
>>
>> With the exception of MADV_REMOVE (which we may be able to change for
>> hugetlb),  madvise operations operate on huge page size pages for hugetlb
>> mappings.  If start address is in the middle of a hugetlb page, we essentially
>> align down to the beginning of the hugetlb page.  If length lands in the
>> middle of a hugetlb page, we essentially round up.
> 
> Which MADV calls would be affected?

Not sure I understand the question.  I was saying that madvise calls which
operate on hugetlb mappings today only operate on huge pages.  So, this is
essentially align down starting address and align up end address.
For example consider the MADV_POPULATE calls you recently added.  They will
only fault in huge pages in a hugetlb vma.

> The "bad" thing about MADV_DONTNEED and MADV_REMOVE are that they
> destroy data, which is why they heavily diverge from the original posix
> madvise odea.

Hmmm.  That may be a good argument for strict alignment.  We do not want
to remove (or unmap) more than the user intended.  The unmap system call
has such alignment requirements.

Darn!  I'm thinking that I should go back to requiring alignment for
MADV_DONTNEED.

There really is no 'right' answer.

>>
>> When adding MADV_REMOVE perhaps we should go with this align down start and
>> align up end strategy that is used everywhere else?  I really wish we could
>> go back and change things, but as you know it is too late for that.
> 
> I assume whatever we do, we should document it at least cleanly in the
> man page. Unfortunately, hugetlb is a gift that keeps on giving. Making
> it at least somehow consistent, even if it's "hugtlb being consistent in
> its own mess", that would be preferable I guess.

Yes, more than anything we need to document behavior.
Peter Xu Feb. 11, 2022, 2:28 a.m. UTC | #9
On Thu, Feb 10, 2022 at 01:36:57PM -0800, Mike Kravetz wrote:
> > Another use case of DONTNEED upon hugetlbfs could be uffd-minor, because afaiu
> > this is the only api that can force strip the hugetlb mapped pgtable without
> > losing pagecache data.
> 
> Correct.  However, I do not know if uffd-minor users would ever want to
> do this.  Perhaps?

My understanding is before this patch uffd-minor upon hugetlbfs requires the
huge file to be mapped twice, one to populate the content, then we'll be able
to trap MINOR faults via the other mapping.  Or we could munmap() the range and
remap it again on the same file offset to drop the pgtables, I think. But that
sounds tricky.  MINOR faults only work with pgtables dropped.

With DONTNEED upon hugetlbfs we can rely on one single mapping of the file,
because we can explicitly drop the pgtables of hugetlbfs files without any
other tricks.
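
A rough userspace sketch of that single-mapping flow (hypothetical; assumes
a hugetlbfs fd, an already-open userfaultfd descriptor uffd, and a kernel
with this patch):

	/* one mapping of the hugetlbfs file */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			  hugetlb_fd, 0);

	/* ... populate the page cache through this mapping ... */

	/* register the range for minor faults */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_MINOR,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/*
	 * Drop only the page tables; the page cache keeps the data, so
	 * subsequent accesses raise MINOR faults on this same mapping.
	 */
	madvise(addr, len, MADV_DONTNEED);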

However I have no real use case of it.  Initially I thought it could be useful
for QEMU because the QEMU migration routine is run with the same mm context as
the hypervisor, so by default it doesn't have two mappings of the same guest
memory.  If QEMU wants to leverage minor faults, DONTNEED could help.

However when I was measuring bitmap transfer (assuming that's what minor fault
could help with qemu's postcopy) there some months ago I found it's not as slow
as I thought at all..  Either I could have missed something, or we're facing
different problems from what they were when uffd minor was first proposed by Axel.

This is probably too out of topic, though..  Let me go back..

Said that, one thing I'm not sure about DONTNEED on hugetlb is whether this
could further abuse DONTNEED, as the original POSIX definition is as simple as:

  The application expects that it will not access the specified address range
  in the near future.

Linux did it by tearing down pgtable, which looks okay so far.  It could be a
bit more weird to apply it to hugetlbfs because from its definition it's a hint
to page reclaims, however hugetlbfs is not a target of page reclaim, neither is
it LRU-aware.  It goes further into some MADV_ZAP styled syscall.

I think it could still be fine as posix doesn't define that behavior
specifically on hugetlb so it can be defined by Linux, but not sure whether
there can be other implications.

Thanks,
David Hildenbrand Feb. 11, 2022, 8:43 a.m. UTC | #10
>>>
>>> I am now rethinking the decision to proceed with b) as described above.
>>>
>>> With the exception of MADV_REMOVE (which we may be able to change for
>>> hugetlb),  madvise operations operate on huge page size pages for hugetlb
>>> mappings.  If start address is in the middle of a hugetlb page, we essentially
>>> align down to the beginning of the hugetlb page.  If length lands in the
>>> middle of a hugetlb page, we essentially round up.
>>
>> Which MADV calls would be affected?
> 
> Not sure I understand the question.  I was saying that madvise calls which
> operate on hugetlb mappings today only operate on huge pages.  So, this is
> essentially align down starting address and align up end address.

Let me clarify:

If you accidentally
MADV_NORMAL/MADV_RANDOM/MADV_SEQUENTIAL/MADV_WILLNEED a range that's
slightly bigger/smaller than the requested one you don't actually care,
because it will only slightly affect the performance of an application,
if at all.  MADV_COLD/MADV_PAGEOUT should be similar. I assume these
don't apply to hugetlb at all.

The effects of
MADV_MERGEABLE/MADV_UNMERGEABLE/MADV_HUGEPAGE/MADV_NOHUGEPAGE should in
theory be similar, however, there can be some user-space visible effects
when you get it wrong. I assume these don't apply to hugetlb at all.

However, for
MADV_DONTNEED/MADV_REMOVE/MADV_DONTFORK/MADV_DOFORK/MADV_FREE/MADV_WIPEONFORK/MADV_KEEPONFORK/MADV_DONTDUMP/MADV_DODUMP/....
the application could easily detect the difference of the actual range
handling.


> For example consider the MADV_POPULATE calls you recently added.  They will
> only fault in huge pages in a hugetlb vma.

On a related note: I don't see my man page updates upstream yet. And the
last update upstream seems to have happened 5 months ago ... not sure
why the man-pages project seems to have stalled.

https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/
Axel Rasmussen Feb. 11, 2022, 7:08 p.m. UTC | #11
On Thu, Feb 10, 2022 at 6:29 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Feb 10, 2022 at 01:36:57PM -0800, Mike Kravetz wrote:
> > > Another use case of DONTNEED upon hugetlbfs could be uffd-minor, because afaiu
> > > this is the only api that can force strip the hugetlb mapped pgtable without
> > > losing pagecache data.
> >
> > Correct.  However, I do not know if uffd-minor users would ever want to
> > do this.  Perhaps?

I talked with some colleagues, and I didn't come up with any
production *requirement* for it, but it may be a convenience in some
cases (make certain code cleaner, e.g. not having to unmap-and-remap
to tear down page tables as Peter mentioned). I think Peter's
assessment below is right.

>
> My understanding is before this patch uffd-minor upon hugetlbfs requires the
> huge file to be mapped twice, one to populate the content, then we'll be able
> to trap MINOR faults via the other mapping.  Or we could munmap() the range and
> remap it again on the same file offset to drop the pgtables, I think. But that
> sounds tricky.  MINOR faults only work with pgtables dropped.
>
> With DONTNEED upon hugetlbfs we can rely on one single mapping of the file,
> because we can explicitly drop the pgtables of hugetlbfs files without any
> other tricks.
>
> However I have no real use case of it.  Initially I thought it could be useful
> for QEMU because the QEMU migration routine is run with the same mm context as
> the hypervisor, so by default it doesn't have two mappings of the same guest
> memory.  If QEMU wants to leverage minor faults, DONTNEED could help.
>
> However when I was measuring bitmap transfer (assuming that's what minor fault
> could help with qemu's postcopy) there some months ago I found it's not as slow
> as I thought at all..  Either I could have missed something, or we're facing
> different problems from what they were when uffd minor was first proposed by Axel.

Re: the bitmap, that matters most on machines with lots of RAM. For
example, GCE offers some VMs with up to 12 *TB* of RAM
(https://cloud.google.com/compute/docs/memory-optimized-machines), I
think with this size of machine we see a significant benefit, as it
may take some significant time for the bitmap to arrive over the
network.

But I think that's a bit of an edge case, most machines are not that
big. :) I think the benefit is more often seen just in avoiding
copies. E.g. if we find a page is already up-to-date after precopy, we
just install PTEs, no copying or page allocation needed. And even when
we have to go fetch a page over the network, one can imagine an RDMA
setup where we can avoid any copies/allocations at all even in that
case. I suppose this also has a bigger effect on larger machines, e.g.
ones that are backed by 1G pages instead of 4k.

>
> This is probably too out of topic, though..  Let me go back..
>
> Said that, one thing I'm not sure about DONTNEED on hugetlb is whether this
> could further abuse DONTNEED, as the original POSIX definition is as simple as:
>
>   The application expects that it will not access the specified address range
>   in the near future.
>
> Linux did it by tearing down pgtable, which looks okay so far.  It could be a
> bit more weird to apply it to hugetlbfs because from its definition it's a hint
> to page reclaims, however hugetlbfs is not a target of page reclaim, neither is
> it LRU-aware.  It goes further into some MADV_ZAP styled syscall.
>
> I think it could still be fine as posix doesn't define that behavior
> specifically on hugetlb so it can be defined by Linux, but not sure whether
> there can be other implications.
>
> Thanks,
>
> --
> Peter Xu
>
Mike Kravetz Feb. 11, 2022, 7:18 p.m. UTC | #12
On 2/11/22 11:08, Axel Rasmussen wrote:
> On Thu, Feb 10, 2022 at 6:29 PM Peter Xu <peterx@redhat.com> wrote:
>>
>> On Thu, Feb 10, 2022 at 01:36:57PM -0800, Mike Kravetz wrote:
>>>> Another use case of DONTNEED upon hugetlbfs could be uffd-minor, because afaiu
>>>> this is the only api that can force strip the hugetlb mapped pgtable without
>>>> losing pagecache data.
>>>
>>> Correct.  However, I do not know if uffd-minor users would ever want to
>>> do this.  Perhaps?
> 
> I talked with some colleagues, and I didn't come up with any
> production *requirement* for it, but it may be a convenience in some
> cases (make certain code cleaner, e.g. not having to unmap-and-remap
> to tear down page tables as Peter mentioned). I think Peter's
> assessment below is right.
> 
>>
>> My understanding is before this patch uffd-minor upon hugetlbfs requires the
>> huge file to be mapped twice, one to populate the content, then we'll be able
>> to trap MINOR faults via the other mapping.  Or we could munmap() the range and
>> remap it again on the same file offset to drop the pgtables, I think. But that
>> sounds tricky.  MINOR faults only work with pgtables dropped.
>>
>> With DONTNEED upon hugetlbfs we can rely on one single mapping of the file,
>> because we can explicitly drop the pgtables of hugetlbfs files without any
>> other tricks.
>>
>> However I have no real use case of it.  Initially I thought it could be useful
>> for QEMU because the QEMU migration routine is run with the same mm context as
>> the hypervisor, so by default it doesn't have two mappings of the same guest
>> memory.  If QEMU wants to leverage minor faults, DONTNEED could help.
>>
>> However when I was measuring bitmap transfer (assuming that's what minor fault
>> could help with qemu's postcopy) there some months ago I found it's not as slow
>> as I thought at all..  Either I could have missed something, or we're facing
>> different problems from what they were when uffd minor was first proposed by Axel.
> 
> Re: the bitmap, that matters most on machines with lots of RAM. For
> example, GCE offers some VMs with up to 12 *TB* of RAM
> (https://cloud.google.com/compute/docs/memory-optimized-machines), I
> think with this size of machine we see a significant benefit, as it
> may take some significant time for the bitmap to arrive over the
> network.
> 
> But I think that's a bit of an edge case, most machines are not that
> big. :) I think the benefit is more often seen just in avoiding
> copies. E.g. if we find a page is already up-to-date after precopy, we
> just install PTEs, no copying or page allocation needed. And even when
> we have to go fetch a page over the network, one can imagine an RDMA
> setup where we can avoid any copies/allocations at all even in that
> case. I suppose this also has a bigger effect on larger machines, e.g.
> ones that are backed by 1G pages instead of 4k.
> 

Thanks Peter and Axel!

As mentioned, my primary motivation was to simply clean up the userfaultfd
selftest.  Glad to see there might be more use cases.  If we can simplify
other code as in the case of userfaultfd selftest, that would be a win.
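
For completeness, a minimal userspace illustration of what the patch enables
(hypothetical example; assumes 2 MB default huge pages are configured):

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024;		/* one 2 MB huge page */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	p[0] = 1;				/* fault in the huge page */

	/* with this patch, unmaps and frees the private huge page */
	madvise(p, len, MADV_DONTNEED);

	return 0;
}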

Patch

diff --git a/mm/madvise.c b/mm/madvise.c
index 5604064df464..7ae891e030a4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -796,10 +796,30 @@  static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
+	/*
+	 * start and size (end - start) must be huge page size aligned
+	 * for hugetlb vmas.
+	 */
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		start = ALIGN_DOWN(start, huge_page_size(h));
+		end = ALIGN(end, huge_page_size(h));
+	}
+
 	zap_page_range(vma, start, end - start);
 	return 0;
 }
 
+static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
+						int behavior)
+{
+	if (is_vm_hugetlb_page(vma))
+		return behavior == MADV_DONTNEED;
+	else
+		return can_madv_lru_vma(vma);
+}
+
 static long madvise_dontneed_free(struct vm_area_struct *vma,
 				  struct vm_area_struct **prev,
 				  unsigned long start, unsigned long end,
@@ -808,7 +828,7 @@  static long madvise_dontneed_free(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	*prev = vma;
-	if (!can_madv_lru_vma(vma))
+	if (!madvise_dontneed_free_valid_vma(vma, behavior))
 		return -EINVAL;
 
 	if (!userfaultfd_remove(vma, start, end)) {
@@ -830,7 +850,7 @@  static long madvise_dontneed_free(struct vm_area_struct *vma,
 			 */
 			return -ENOMEM;
 		}
-		if (!can_madv_lru_vma(vma))
+		if (!madvise_dontneed_free_valid_vma(vma, behavior))
 			return -EINVAL;
 		if (end > vma->vm_end) {
 			/*