[v1,05/18] mm: improve folio_likely_mapped_shared() using the mapcount of large folios

Message ID 20240409192301.907377-6-david@redhat.com (mailing list archive)
State New
Series mm: mapcount for large folios + page_mapcount() cleanups

Commit Message

David Hildenbrand April 9, 2024, 7:22 p.m. UTC
We can now read the mapcount of large folios very efficiently. Use it to
improve our handling of partially-mappable folios, falling back
to making a guess only in case the folio is not "obviously mapped shared".

We can now better detect partially-mappable folios where the first page is
not mapped as "mapped shared", reducing "false negatives"; but false
negatives are still possible.

While at it, fixup a wrong comment (false positive vs. false negative)
for KSM folios.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)
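
For illustration only, here is a minimal userspace model of the large-folio
path of the new check, exercising the false-negative case described above.
Plain ints stand in for the real folio counters; the struct, helper and
values are made up for this sketch and are not kernel code:

#include <stdbool.h>
#include <stdio.h>

struct folio_model {
	int nr_pages;          /* folio_nr_pages()             */
	int total_mapcount;    /* folio_mapcount()             */
	int entire_mapcount;   /* folio_entire_mapcount()      */
	int subpage0_mapcount; /* mapcount of subpage 0        */
};

static bool likely_mapped_shared(const struct folio_model *f)
{
	/* A single mapping implies "mapped exclusively". */
	if (f->total_mapcount <= 1)
		return false;
	/* Some page is certainly mapped more than once. */
	if (f->entire_mapcount || f->total_mapcount > f->nr_pages)
		return true;
	/* Otherwise, guess based on the first subpage. */
	return f->subpage0_mapcount > 1;
}

int main(void)
{
	/*
	 * Subpage 0 is mapped only once, but subpages 1..15 are
	 * additionally mapped by a second process: the old
	 * subpage-0-only check reported "exclusive" (false negative),
	 * while the total mapcount (31 > 16 pages) now reports "shared".
	 */
	struct folio_model f = {
		.nr_pages = 16,
		.total_mapcount = 31,
		.entire_mapcount = 0,
		.subpage0_mapcount = 1,
	};

	printf("likely mapped shared: %d\n", likely_mapped_shared(&f));
	return 0;
}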

Comments

Lance Yang April 16, 2024, 10:40 a.m. UTC | #1
Hey David,

Maybe I spotted a bug below.

[...]
 static inline bool folio_likely_mapped_shared(struct folio *folio)
 {
-	return page_mapcount(folio_page(folio, 0)) > 1;
+	int mapcount = folio_mapcount(folio);
+
+	/* Only partially-mappable folios require more care. */
+	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
+		return mapcount > 1;
+
+	/* A single mapping implies "mapped exclusively". */
+	if (mapcount <= 1)
+		return false;
+
+	/* If any page is mapped more than once we treat it "mapped shared". */
+	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
+		return true;

bug: if a PMD-mapped THP is exclusively mapped, the folio_entire_mapcount()
function will return 1 (atomic_read(&folio->_entire_mapcount) + 1).

IIUC, when mapping a PMD entry for the entire THP, folio->_entire_mapcount
increments from -1 to 0.

Thanks,
Lance

+
+	/* Let's guess based on the first subpage. */
+	return atomic_read(&folio->_mapcount) > 0;
 }
[...]
David Hildenbrand April 16, 2024, 10:47 a.m. UTC | #2
On 16.04.24 12:40, Lance Yang wrote:
> Hey David,
> 
> Maybe I spotted a bug below.

Thanks for the review!

> 
> [...]
>   static inline bool folio_likely_mapped_shared(struct folio *folio)
>   {
> -	return page_mapcount(folio_page(folio, 0)) > 1;
> +	int mapcount = folio_mapcount(folio);
> +
> +	/* Only partially-mappable folios require more care. */
> +	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
> +		return mapcount > 1;
> +
> +	/* A single mapping implies "mapped exclusively". */
> +	if (mapcount <= 1)
> +		return false;
> +
> +	/* If any page is mapped more than once we treat it "mapped shared". */
> +	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
> +		return true;
> 
> bug: if a PMD-mapped THP is exclusively mapped, the folio_entire_mapcount()
> function will return 1 (atomic_read(&folio->_entire_mapcount) + 1).

If it's exclusively mapped, then folio_mapcount(folio)==1. In which case 
the previous statement:

if (mapcount <= 1)
	return false;

Catches it.

IOW, once we reach this point we know that folio_mapcount(folio) > 1, and 
there must be something else besides the entire mapping ("more than once").
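
As a quick worked example of the counts in the exclusively PMD-mapped case
(illustrative numbers, using the usual -1 bias on the stored counter):

	_entire_mapcount (stored)  = 0  ->  folio_entire_mapcount() = 1
	folio_mapcount()           = 1      (the PMD mapping is the only one)

so the "mapcount <= 1" test already returns false and the
folio_entire_mapcount() check is never reached.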


Or did I not address your concern?
Lance Yang April 16, 2024, 10:52 a.m. UTC | #3
On Tue, Apr 16, 2024 at 6:47 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 16.04.24 12:40, Lance Yang wrote:
> > Hey David,
> >
> > Maybe I spotted a bug below.
>
> Thanks for the review!
>
> >
> > [...]
> >   static inline bool folio_likely_mapped_shared(struct folio *folio)
> >   {
> > -     return page_mapcount(folio_page(folio, 0)) > 1;
> > +     int mapcount = folio_mapcount(folio);
> > +
> > +     /* Only partially-mappable folios require more care. */
> > +     if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
> > +             return mapcount > 1;
> > +
> > +     /* A single mapping implies "mapped exclusively". */
> > +     if (mapcount <= 1)
> > +             return false;
> > +
> > +     /* If any page is mapped more than once we treat it "mapped shared". */
> > +     if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
> > +             return true;
> >
> > bug: if a PMD-mapped THP is exclusively mapped, the folio_entire_mapcount()
> > function will return 1 (atomic_read(&folio->_entire_mapcount) + 1).
>
> If it's exclusively mapped, then folio_mapcount(folio)==1. In which case
> the previous statement:
>
> if (mapcount <= 1)
>         return false;
>
> Catches it.

You're right!

>
> IOW, once we reach this point we know that folio_mapcount(folio) > 1, and
> there must be something else besides the entire mapping ("more than once").
>
>
> Or did I not address your concern?

Sorry, my mistake :(

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand April 16, 2024, 10:53 a.m. UTC | #4
On 16.04.24 12:52, Lance Yang wrote:
> On Tue, Apr 16, 2024 at 6:47 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 16.04.24 12:40, Lance Yang wrote:
>>> Hey David,
>>>
>>> Maybe I spotted a bug below.
>>
>> Thanks for the review!
>>
>>>
>>> [...]
>>>    static inline bool folio_likely_mapped_shared(struct folio *folio)
>>>    {
>>> -     return page_mapcount(folio_page(folio, 0)) > 1;
>>> +     int mapcount = folio_mapcount(folio);
>>> +
>>> +     /* Only partially-mappable folios require more care. */
>>> +     if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
>>> +             return mapcount > 1;
>>> +
>>> +     /* A single mapping implies "mapped exclusively". */
>>> +     if (mapcount <= 1)
>>> +             return false;
>>> +
>>> +     /* If any page is mapped more than once we treat it "mapped shared". */
>>> +     if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
>>> +             return true;
>>>
>>> bug: if a PMD-mapped THP is exclusively mapped, the folio_entire_mapcount()
>>> function will return 1 (atomic_read(&folio->_entire_mapcount) + 1).
>>
>> If it's exclusively mapped, then folio_mapcount(folio)==1. In which case
>> the previous statement:
>>
>> if (mapcount <= 1)
>>          return false;
>>
>> Catches it.
> 
> You're right!
> 
>>
>> IOW, once we reach this point we know that folio_mapcount(folio) > 1, and
>> there must be something else besides the entire mapping ("more than once").
>>
>>
>> Or did I not address your concern?
> 
> Sorry, my mistake :(

No worries, thanks for the review and thinking this through!
Yin Fengwei April 19, 2024, 2:29 a.m. UTC | #5
On 4/10/2024 3:22 AM, David Hildenbrand wrote:
> @@ -2200,7 +2200,22 @@ static inline size_t folio_size(struct folio *folio)
>    */
>   static inline bool folio_likely_mapped_shared(struct folio *folio)
>   {
> -	return page_mapcount(folio_page(folio, 0)) > 1;
> +	int mapcount = folio_mapcount(folio);
> +
> +	/* Only partially-mappable folios require more care. */
> +	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
> +		return mapcount > 1;
My understanding is that mapcount > folio_nr_pages(folio) can cover
order-0 folios, and folio_entire_mapcount() can cover hugetlb (I am
not 100% sure about this one). I am wondering whether we can drop the
above two lines? Thanks.


Regards
Yin, Fengwei

> +
> +	/* A single mapping implies "mapped exclusively". */
> +	if (mapcount <= 1)
> +		return false;
> +
> +	/* If any page is mapped more than once we treat it "mapped shared". */
> +	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
> +		return true;
> +
> +	/* Let's guess based on the first subpage. */
> +	return atomic_read(&folio->_mapcount) > 0;
>   }
David Hildenbrand April 19, 2024, 9:19 a.m. UTC | #6
On 19.04.24 04:29, Yin, Fengwei wrote:
> 
> 
> On 4/10/2024 3:22 AM, David Hildenbrand wrote:
>> @@ -2200,7 +2200,22 @@ static inline size_t folio_size(struct folio *folio)
>>     */
>>    static inline bool folio_likely_mapped_shared(struct folio *folio)
>>    {
>> -	return page_mapcount(folio_page(folio, 0)) > 1;
>> +	int mapcount = folio_mapcount(folio);
>> +
>> +	/* Only partially-mappable folios require more care. */
>> +	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
>> +		return mapcount > 1;
> My understanding is that mapcount > folio_nr_pages(folio) can cover
> order-0 folios, and folio_entire_mapcount() can cover hugetlb (I am
> not 100% sure about this one). I am wondering whether we can drop the
> above two lines? Thanks.

folio_entire_mapcount() does not apply to small folios, so we must not 
call that for small folios.

Regarding hugetlb, subpage mapcounts are completely unused, except 
subpage 0 mapcount, which is now *always* negative (storing a page type) 
-- so that value cannot be trusted at all.

So in the end, it all looked cleanest to only special-case 
partially-mappable folios, where we know the entire mapcount exists and 
we know that subpage mapcount 0 actually stores something reasonable 
(not a type).
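
A tiny sketch of that last point -- why the subpage-0 fallback guess must
never be reached for hugetlb folios (userspace model, made-up values; -128
merely stands in for some negative page-type encoding):

#include <stdio.h>

int main(void)
{
	int hugetlb_total_mapcount = 2;    /* mapped by two MMs           */
	int hugetlb_subpage0_field = -128; /* a page type, not a mapcount */

	/* The subpage-0 guess would wrongly report "exclusive" here: */
	printf("subpage-0 guess: %d\n", hugetlb_subpage0_field > 0);      /* 0 */

	/* The early "mapcount > 1" path gives the right answer: */
	printf("total mapcount check: %d\n", hugetlb_total_mapcount > 1); /* 1 */
	return 0;
}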

Thanks!
Yin Fengwei April 19, 2024, 1:47 p.m. UTC | #7
On 4/19/2024 5:19 PM, David Hildenbrand wrote:
> On 19.04.24 04:29, Yin, Fengwei wrote:
>>
>>
>> On 4/10/2024 3:22 AM, David Hildenbrand wrote:
>>> @@ -2200,7 +2200,22 @@ static inline size_t folio_size(struct folio 
>>> *folio)
>>>     */
>>>    static inline bool folio_likely_mapped_shared(struct folio *folio)
>>>    {
>>> -    return page_mapcount(folio_page(folio, 0)) > 1;
>>> +    int mapcount = folio_mapcount(folio);
>>> +
>>> +    /* Only partially-mappable folios require more care. */
>>> +    if (!folio_test_large(folio) || 
>>> unlikely(folio_test_hugetlb(folio)))
>>> +        return mapcount > 1;
>> My understanding is that mapcount > folio_nr_pages(folio) can cover
>> order-0 folios, and folio_entire_mapcount() can cover hugetlb (I am
>> not 100% sure about this one). I am wondering whether we can drop the
>> above two lines? Thanks.
> 
> folio_entire_mapcount() does not apply to small folios, so we must not 
> call that for small folios.
Right. I missed this part. Thanks for the clarification.


Regards
Yin, Fengwei

> 
> Regarding hugetlb, subpage mapcounts are completely unused, except 
> subpage 0 mapcount, which is now *always* negative (storing a page type) 
> -- so that value cannot be trusted at all.
> 
> So in the end, it all looked cleanest to only special-case 
> partially-mappable folios, where we know the entire mapcount exists and 
> we know that subpage mapcount 0 actually stores something reasonable 
> (not a type).
> 
> Thanks!
>
Yin Fengwei April 19, 2024, 2:03 p.m. UTC | #8
On 4/10/2024 3:22 AM, David Hildenbrand wrote:
> We can now read the mapcount of large folios very efficiently. Use it to
> improve our handling of partially-mappable folios, falling back
> to making a guess only in case the folio is not "obviously mapped shared".
> 
> We can now better detect partially-mappable folios where the first page is
> not mapped as "mapped shared", reducing "false negatives"; but false
> negatives are still possible.
> 
> While at it, fixup a wrong comment (false positive vs. false negative)
> for KSM folios.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1862a216af15..daf687f0e8e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2183,7 +2183,7 @@  static inline size_t folio_size(struct folio *folio)
  *       indicate "mapped shared" (false positive) when two VMAs in the same MM
  *       cover the same file range.
  *    #. For (small) KSM folios, the return value can wrongly indicate "mapped
- *       shared" (false negative), when the folio is mapped multiple times into
+ *       shared" (false positive), when the folio is mapped multiple times into
  *       the same MM.
  *
  * Further, this function only considers current page table mappings that
@@ -2200,7 +2200,22 @@  static inline size_t folio_size(struct folio *folio)
  */
 static inline bool folio_likely_mapped_shared(struct folio *folio)
 {
-	return page_mapcount(folio_page(folio, 0)) > 1;
+	int mapcount = folio_mapcount(folio);
+
+	/* Only partially-mappable folios require more care. */
+	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
+		return mapcount > 1;
+
+	/* A single mapping implies "mapped exclusively". */
+	if (mapcount <= 1)
+		return false;
+
+	/* If any page is mapped more than once we treat it "mapped shared". */
+	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
+		return true;
+
+	/* Let's guess based on the first subpage. */
+	return atomic_read(&folio->_mapcount) > 0;
 }
 
 #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE