Message ID | 20230717143110.260162-2-ryan.roberts@arm.com |
---|---|
State | New |
Series | Optimize large folio interaction with deferred split |
On Mon, Jul 17, 2023 at 03:31:08PM +0100, Ryan Roberts wrote:
> In preparation for the introduction of large folios for anonymous
> memory, we would like to be able to split them when they have unmapped
> subpages, in order to free those unused pages under memory pressure. So
> remove the artificial requirement that the large folio needed to be at
> least PMD-sized.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Yu Zhao <yuzhao@google.com>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

> 		 */
> -		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
> +		if (folio_test_large(folio) && folio_test_anon(folio))
> 			if (!compound || nr < nr_pmdmapped)
> 				deferred_split_folio(folio);

I wonder if it's worth introducing a folio_test_deferred_split() (better
naming appreciated ...) to allow us to allocate order-1 folios and not
do horrible things. Maybe it's not worth supporting order-1 folios;
we're always better off going to order-2 immediately. Just thinking.
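A helper like the one suggested above does not exist in the tree; a
minimal sketch of what it could look like, assuming the deciding factor
is simply whether the folio has a third struct page to hold
_deferred_list (the name folio_test_deferred_split() is taken from the
suggestion and is hypothetical):

/*
 * Hypothetical predicate, not in the kernel tree: _deferred_list lives
 * in the folio's third struct page, so only folios of order >= 2 can
 * sit on a deferred split queue.
 */
static inline bool folio_test_deferred_split(struct folio *folio)
{
	return folio_order(folio) >= 2;
}

With such a predicate, the check in page_remove_rmap() could read
"folio_test_deferred_split(folio) && folio_test_anon(folio)", keeping
hypothetical order-1 folios off the deferred split queue.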
On 17/07/2023 16:30, Matthew Wilcox wrote:
> On Mon, Jul 17, 2023 at 03:31:08PM +0100, Ryan Roberts wrote:
>> In preparation for the introduction of large folios for anonymous
>> memory, we would like to be able to split them when they have unmapped
>> subpages, in order to free those unused pages under memory pressure. So
>> remove the artificial requirement that the large folio needed to be at
>> least PMD-sized.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> Reviewed-by: Yu Zhao <yuzhao@google.com>
>> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Thanks!

>> 		 */
>> -		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
>> +		if (folio_test_large(folio) && folio_test_anon(folio))
>> 			if (!compound || nr < nr_pmdmapped)
>> 				deferred_split_folio(folio);
>
> I wonder if it's worth introducing a folio_test_deferred_split() (better
> naming appreciated ...) to allow us to allocate order-1 folios and not
> do horrible things. Maybe it's not worth supporting order-1 folios;
> we're always better off going to order-2 immediately. Just thinking.

There is more than just _deferred_list in the 3rd page; you also have
_flags_2a and _head_2a. I guess you know much better than me what they
store. But I'm guessing it's harder than just not splitting an order-1
page?

With the direction of large anon folios (_not_ retrying with every order
down to 0), I'm not sure what the use case would be for order-1 anyway?
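For reference, the fields named here overlay the folio's third struct
page; abridged from struct folio in include/linux/mm_types.h around this
point in time (the sibling union members and exact layout vary between
kernel versions):

/* Third page of the folio (abridged from struct folio). */
union {
	struct {
		unsigned long _flags_2a;
		unsigned long _head_2a;
		struct list_head _deferred_list;
	};
	struct page __page_2;
};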
On 17.07.23 16:31, Ryan Roberts wrote:
> In preparation for the introduction of large folios for anonymous
> memory, we would like to be able to split them when they have unmapped
> subpages, in order to free those unused pages under memory pressure. So
> remove the artificial requirement that the large folio needed to be at
> least PMD-sized.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Yu Zhao <yuzhao@google.com>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
> ---
>  mm/rmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0c0d8857dfce..2baf57d65c23 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1430,7 +1430,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>  		 * page of the folio is unmapped and at least one page
>  		 * is still mapped.
>  		 */
> -		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
> +		if (folio_test_large(folio) && folio_test_anon(folio))
>  			if (!compound || nr < nr_pmdmapped)
>  				deferred_split_folio(folio);

!compound will always be true I guess, so nr_pmdmapped == 0 (which will
always be the case) will be ignored.

Reviewed-by: David Hildenbrand <david@redhat.com>
On 17.07.23 17:41, Ryan Roberts wrote:
> On 17/07/2023 16:30, Matthew Wilcox wrote:
>> On Mon, Jul 17, 2023 at 03:31:08PM +0100, Ryan Roberts wrote:
[...]
>> I wonder if it's worth introducing a folio_test_deferred_split() (better
>> naming appreciated ...) to allow us to allocate order-1 folios and not
>> do horrible things. Maybe it's not worth supporting order-1 folios;
>> we're always better off going to order-2 immediately. Just thinking.
>
> There is more than just _deferred_list in the 3rd page; you also have
> _flags_2a and _head_2a. I guess you know much better than me what they
> store. But I'm guessing it's harder than just not splitting an order-1
> page?
>
> With the direction of large anon folios (_not_ retrying with every order
> down to 0), I'm not sure what the use case would be for order-1 anyway?

Just noting that we might need some struct-page space for better
mapcount/shared tracking, which might get hard for order-1 pages.
On Mon, Jul 17, 2023 at 05:43:40PM +0200, David Hildenbrand wrote:
> On 17.07.23 17:41, Ryan Roberts wrote:
> > On 17/07/2023 16:30, Matthew Wilcox wrote:
[...]
> > There is more than just _deferred_list in the 3rd page; you also have
> > _flags_2a and _head_2a. I guess you know much better than me what they
> > store. But I'm guessing it's harder than just not splitting an order-1
> > page?

Those are page->flags and page->compound_head for the third page in
the folio. They don't really need a name; nothing refers to them,
but it's important that space not be reused ;-)

This is slightly different from _flags_1; we do have some flags which
reuse the bits (they're labelled as PF_SECOND). Right now, it's only
PG_has_hwpoisoned, but we used to have PG_double_map. Others may arise.

> > With the direction of large anon folios (_not_ retrying with every order
> > down to 0), I'm not sure what the use case would be for order-1 anyway?
>
> Just noting that we might need some struct-page space for better
> mapcount/shared tracking, which might get hard for order-1 pages.

My assumption had been that we'd be able to reuse the _entire_mapcount
and _nr_pages_mapped fields and not spill into the third page, but the
third page is definitely available today if we want it. I'm fine with
disallowing order-1 anon/file folios forever.
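The PF_SECOND policy mentioned here stores a flag in the first tail
page, i.e. in the bits of _flags_1 rather than in the head page's flags
or in _flags_2a. A rough sketch of the idea (the helper below is
hypothetical and for illustration only; the real accessors are generated
by the PAGEFLAG() macros in include/linux/page-flags.h):

/*
 * Hypothetical illustration of a PF_SECOND-style test: the bit is kept
 * in the first tail page's flags word, which struct folio exposes as
 * _flags_1.
 */
static inline bool folio_sketch_test_second(struct folio *folio, int bit)
{
	/* Only a compound page has a first tail page to hold the bit. */
	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
	return test_bit(bit, &folio->_flags_1);
}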
On 17/07/2023 16:42, David Hildenbrand wrote:
> On 17.07.23 16:31, Ryan Roberts wrote:
>> In preparation for the introduction of large folios for anonymous
>> memory, we would like to be able to split them when they have unmapped
>> subpages, in order to free those unused pages under memory pressure. So
>> remove the artificial requirement that the large folio needed to be at
>> least PMD-sized.
[...]
>> -		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
>> +		if (folio_test_large(folio) && folio_test_anon(folio))
>>  			if (!compound || nr < nr_pmdmapped)
>>  				deferred_split_folio(folio);
>
> !compound will always be true I guess, so nr_pmdmapped == 0 (which will
> always be the case) will be ignored.

I don't follow why !compound will always be true. This function is
page_remove_rmap() (not folio_remove_rmap_range() which I add in a later
patch). page_remove_rmap() can work on pmd-mapped pages where
compound=true is passed in.

> Reviewed-by: David Hildenbrand <david@redhat.com>
On Mon, Jul 17, 2023 at 04:54:58PM +0100, Matthew Wilcox wrote:
> Those are page->flags and page->compound_head for the third page in
> the folio. They don't really need a name; nothing refers to them,
> but it's important that space not be reused ;-)
>
> This is slightly different from _flags_1; we do have some flags which
> reuse the bits (they're labelled as PF_SECOND). Right now, it's only
> PG_has_hwpoisoned, but we used to have PG_double_map. Others may arise.

Sorry, this was incomplete. We do still have per-page flags! HWPoison
is the obvious one, but PG_head is per-page (... think about it ...).
PG_anon_exclusive is actually per-page.

Most of the flags labelled as PF_ANY are mislabelled. PG_private and
PG_private2 are never set/cleared/tested on tail pages. PG_young and
PG_idle are only ever tested on the head page, but some code incorrectly
sets them on tail pages, where those bits are ignored. I tried to fix
that a while ago, but the patch was overlooked and I couldn't be
bothered to try all that hard. I have no clue about
PG_vmemmap_self_hosted. I think PG_isolated is probably never set on
compound pages. PG_owner_priv_1 is a disaster, as you might expect.
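"Per-page" here means the bit is meaningful on each individual subpage
rather than only on the head page. For example, PG_anon_exclusive is
tracked per subpage, so code interested in a whole large folio has to
look at every page; a hypothetical illustration (the helper name is made
up):

/*
 * Hypothetical helper: check every subpage's PG_anon_exclusive bit.
 * Assumes an anon folio, since PageAnonExclusive() is only defined for
 * anon pages.
 */
static bool folio_any_page_anon_exclusive(struct folio *folio)
{
	long i;

	for (i = 0; i < folio_nr_pages(folio); i++)
		if (PageAnonExclusive(folio_page(folio, i)))
			return true;
	return false;
}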
On 17.07.23 18:01, Ryan Roberts wrote:
> On 17/07/2023 16:42, David Hildenbrand wrote:
>> On 17.07.23 16:31, Ryan Roberts wrote:
[...]
>>> -		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
>>> +		if (folio_test_large(folio) && folio_test_anon(folio))
>>>  			if (!compound || nr < nr_pmdmapped)
>>>  				deferred_split_folio(folio);
>>
>> !compound will always be true I guess, so nr_pmdmapped == 0 (which will
>> always be the case) will be ignored.
>
> I don't follow why !compound will always be true. This function is
> page_remove_rmap() (not folio_remove_rmap_range() which I add in a later
> patch). page_remove_rmap() can work on pmd-mapped pages where
> compound=true is passed in.

I was talking about the folio_test_pmd_mappable() -> folio_test_large()
change. For folio_test_large() && !folio_test_pmd_mappable() I expect
that we'll never pass in "compound=true".
On 17.07.23 17:54, Matthew Wilcox wrote:
> On Mon, Jul 17, 2023 at 05:43:40PM +0200, David Hildenbrand wrote:
>> On 17.07.23 17:41, Ryan Roberts wrote:
[...]
>>> With the direction of large anon folios (_not_ retrying with every order
>>> down to 0), I'm not sure what the use case would be for order-1 anyway?
>>
>> Just noting that we might need some struct-page space for better
>> mapcount/shared tracking, which might get hard for order-1 pages.
>
> My assumption had been that we'd be able to reuse the _entire_mapcount
> and _nr_pages_mapped fields and not spill into the third page, but the

We most likely have to keep _entire_mapcount to keep "PMD mapped"
working (I don't think we can not account that, some user space relies
on that). Reusing _nr_pages_mapped for _total_mapcount would work until
we need more bits.

But once we want to sort out some other questions like "is this folio
mapped shared or mapped exclusive" we might need more space. What I am
playing with right now to tackle that would most probably not fit in
there (but I'll keep trying ;) ).

> third page is definitely available today if we want it. I'm fine with
> disallowing order-1 anon/file folios forever.

Yes, let's first sort out the open issues before going down that path
(might not really be worth it after all).
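For context, the mapcount fields being discussed live in the folio's
second struct page; abridged from struct folio in
include/linux/mm_types.h around this point in time (the exact set of
fields differs between kernel versions):

/* Second page of the folio (abridged from struct folio). */
union {
	struct {
		unsigned long _flags_1;
		unsigned long _head_1;
		unsigned char _folio_dtor;
		unsigned char _folio_order;
		atomic_t _entire_mapcount;
		atomic_t _nr_pages_mapped;
		atomic_t _pincount;
#ifdef CONFIG_64BIT
		unsigned int _folio_nr_pages;
#endif
	};
	struct page __page_1;
};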
On 17/07/2023 17:48, David Hildenbrand wrote:
> On 17.07.23 18:01, Ryan Roberts wrote:
>> On 17/07/2023 16:42, David Hildenbrand wrote:
[...]
>>> !compound will always be true I guess, so nr_pmdmapped == 0 (which will
>>> always be the case) will be ignored.
>>
>> I don't follow why !compound will always be true. This function is
>> page_remove_rmap() (not folio_remove_rmap_range() which I add in a later
>> patch). page_remove_rmap() can work on pmd-mapped pages where
>> compound=true is passed in.
>
> I was talking about the folio_test_pmd_mappable() -> folio_test_large()
> change. For folio_test_large() && !folio_test_pmd_mappable() I expect
> that we'll never pass in "compound=true".

Sorry David, I've been staring at the code and your comment, and I still
don't understand your point. I assumed you were trying to say that
compound is always false and therefore "if (!compound || nr <
nr_pmdmapped)" can be removed? But it's not the case that compound is
always false; it will be true when called to remove a pmd-mapped
compound page.

What change are you suggesting, exactly?
On 18.07.23 10:58, Ryan Roberts wrote:
> On 17/07/2023 17:48, David Hildenbrand wrote:
>> On 17.07.23 18:01, Ryan Roberts wrote:
[...]
>> I was talking about the folio_test_pmd_mappable() -> folio_test_large()
>> change. For folio_test_large() && !folio_test_pmd_mappable() I expect
>> that we'll never pass in "compound=true".
>
> Sorry David, I've been staring at the code and your comment, and I still
> don't understand your point. I assumed you were trying to say that
> compound is always false and therefore "if (!compound || nr <
> nr_pmdmapped)" can be removed? But it's not the case that compound is
> always false; it will be true when called to remove a pmd-mapped
> compound page.

Let me try again:

Assume, as I wrote, that we are given a folio that is
"folio_test_large() && !folio_test_pmd_mappable()". That is, a folio
that is *not* pmd-mappable.

If it's not pmd-mappable, certainly, nr_pmdmapped == 0, and therefore,
"nr < nr_pmdmapped" will never ever trigger.

The only way to have it added to the deferred split queue is, therefore,
"if (!compound)".

So *for these folios*, we will always pass "compound == false" to make
that "if (!compound)" succeed.

Does that make sense?

> What change are you suggesting, exactly?

Oh, I never suggested a change (I even gave you my RB). I was just
thinking out loud.
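Spelling out the same reasoning against the check itself (an
illustration of the three cases, not a change to the patch):

	if (folio_test_large(folio) && folio_test_anon(folio))
		if (!compound || nr < nr_pmdmapped)
			deferred_split_folio(folio);

	/*
	 * - PTE-unmapping pages of a large anon folio: compound == false,
	 *   so the folio is queued for deferred split.
	 * - PMD-unmapping a PMD-mapped folio: compound == true; it is
	 *   queued only if some of its pages remain mapped
	 *   (nr < nr_pmdmapped).
	 * - A large folio that is not PMD-mappable is only ever PTE-mapped,
	 *   so compound is always false and nr_pmdmapped is always 0; only
	 *   the !compound half of the condition can apply.
	 */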
On 18/07/2023 10:08, David Hildenbrand wrote:
> On 18.07.23 10:58, Ryan Roberts wrote:
>> On 17/07/2023 17:48, David Hildenbrand wrote:
[...]
> Let me try again:
>
> Assume, as I wrote, that we are given a folio that is
> "folio_test_large() && !folio_test_pmd_mappable()". That is, a folio
> that is *not* pmd-mappable.
>
> If it's not pmd-mappable, certainly, nr_pmdmapped == 0, and therefore,
> "nr < nr_pmdmapped" will never ever trigger.
>
> The only way to have it added to the deferred split queue is, therefore,
> "if (!compound)".
>
> So *for these folios*, we will always pass "compound == false" to make
> that "if (!compound)" succeed.
>
> Does that make sense?

Yes I agree with all of this. I thought you were pointing out an issue
or proposing a change to the logic. Hence my confusion.

>> What change are you suggesting, exactly?
>
> Oh, I never suggested a change (I even gave you my RB). I was just
> thinking out loud.
diff --git a/mm/rmap.c b/mm/rmap.c
index 0c0d8857dfce..2baf57d65c23 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1430,7 +1430,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 		 * page of the folio is unmapped and at least one page
 		 * is still mapped.
 		 */
-		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
+		if (folio_test_large(folio) && folio_test_anon(folio))
 			if (!compound || nr < nr_pmdmapped)
 				deferred_split_folio(folio);
 	}