mm: thp: don't need drain lru cache when splitting and mlocking THP

Message ID: 1585337380-97368-1-git-send-email-yang.shi@linux.alibaba.com
State: New, archived

Commit Message

Yang Shi March 27, 2020, 7:29 p.m. UTC
Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
page arrival"), a THP does not stay in the pagevec anymore.  So the
optimization made by commit d965432234db ("thp: increase
split_huge_page() success rate"), which tries to unpin munlocked THPs
from the pagevec by draining it, no longer makes sense.

Draining the lru cache before isolating a THP in the mlock path is not
necessary either.
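
For reference, the pin the old drain removed is easiest to see at the
refcount-freeze step of the split.  This is a simplified sketch, not
the literal split_huge_page_to_list() code; details vary across kernel
versions:

	/*
	 * The split only succeeds if the refcount can be frozen at the
	 * number of expected pins.  A THP still sitting in a per-CPU
	 * pagevec holds one extra reference, so the freeze -- and hence
	 * the split -- would fail, which is what the now-removed
	 * lru_add_drain() call guarded against.
	 */
	if (!page_ref_freeze(head, 1 + extra_pins))
		ret = -EBUSY;	/* unexpected pin, e.g. from a pagevec */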

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/huge_memory.c | 7 -------
 1 file changed, 7 deletions(-)

Comments

Daniel Jordan April 2, 2020, 11:04 p.m. UTC | #1
On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
> Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
> page arrival"), a THP does not stay in the pagevec anymore.  So the
> optimization made by commit d965432234db ("thp: increase
> split_huge_page() success rate"), which tries to unpin munlocked THPs
> from the pagevec by draining it, no longer makes sense.
> 
> Draining the lru cache before isolating a THP in the mlock path is not
> necessary either.

Can we get some of that nice history in this part too?

Draining lru cache before isolating THP in mlock path is also unnecessary.
b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock")
added it and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge
pages") accidentally carried it over after the above optimization went in.

> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Since we don't mlock pte-mapped THP, it seems these huge pages wouldn't ever be
in the pagevecs if I'm understanding it all.

Saves lines and some amount of overhead and lru contention, so looks good.

Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

Yang Shi April 2, 2020, 11:14 p.m. UTC | #2
On 4/2/20 4:04 PM, Daniel Jordan wrote:
> On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
>> Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
>> page arrival"), a THP does not stay in the pagevec anymore.  So the
>> optimization made by commit d965432234db ("thp: increase
>> split_huge_page() success rate"), which tries to unpin munlocked THPs
>> from the pagevec by draining it, no longer makes sense.
>>
>> Draining the lru cache before isolating a THP in the mlock path is not
>> necessary either.
> Can we get some of that nice history in this part too?
>
> Draining lru cache before isolating THP in mlock path is also unnecessary.
> b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock")
> added it and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge
> pages") accidentally carried it over after the above optimization went in.

Thanks for finding this out; I didn't dig that far. Will add it in v2.

>
>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> Since we don't mlock pte-mapped THP, it seems these huge pages wouldn't ever be
> in the pagevecs if I'm understanding it all.

Yes, it is correct.

>
> Saves lines and some amount of overhead and lru contention, so looks good.
>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

Thanks.

Daniel Jordan April 2, 2020, 11:17 p.m. UTC | #3
On Thu, Apr 02, 2020 at 07:04:11PM -0400, Daniel Jordan wrote:
> On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
> > Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
> > page arrival"), a THP does not stay in the pagevec anymore.  So the
> > optimization made by commit d965432234db ("thp: increase
> > split_huge_page() success rate"), which tries to unpin munlocked THPs
> > from the pagevec by draining it, no longer makes sense.
> > 
> > Draining the lru cache before isolating a THP in the mlock path is not
> > necessary either.
> 
> Can we get some of that nice history in this part too?
> 
> Draining lru cache before isolating THP in mlock path is also unnecessary.
> b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock")
> added it and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge
> pages") accidentally carried it over after the above optimization went in.
> 
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: Hugh Dickins <hughd@google.com>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> 
> Since we don't mlock pte-mapped THP, it seems these huge pages wouldn't ever be
> in the pagevecs if I'm understanding it all.

Actually, pte-mapped THP doesn't matter for this; both paths always drain when
they're working with a pmd-mapped THP.

Yang Shi April 2, 2020, 11:37 p.m. UTC | #4
On 4/2/20 4:17 PM, Daniel Jordan wrote:
> On Thu, Apr 02, 2020 at 07:04:11PM -0400, Daniel Jordan wrote:
>> On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
>>> Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
>>> page arrival"), a THP does not stay in the pagevec anymore.  So the
>>> optimization made by commit d965432234db ("thp: increase
>>> split_huge_page() success rate"), which tries to unpin munlocked THPs
>>> from the pagevec by draining it, no longer makes sense.
>>>
>>> Draining the lru cache before isolating a THP in the mlock path is not
>>> necessary either.
>> Can we get some of that nice history in this part too?
>>
>> Draining lru cache before isolating THP in mlock path is also unnecessary.
>> b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock")
>> added it and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge
>> pages") accidentally carried it over after the above optimization went in.
>>
>>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> Since we don't mlock pte-mapped THP, it seems these huge pages wouldn't ever be
>> in the pagevecs if I'm understanding it all.
> Actually, pte-mapped THP doesn't matter for this; both paths always drain
> when they're working with a pmd-mapped THP.

Actually, whether it is pte-mapped or pmd-mapped doesn't matter: as long as it
is a compound page, the lru cache is flushed immediately upon its arrival.

Daniel Jordan April 3, 2020, 1:12 p.m. UTC | #5
On Thu, Apr 02, 2020 at 04:37:06PM -0700, Yang Shi wrote:
> 
> 
> On 4/2/20 4:17 PM, Daniel Jordan wrote:
> > On Thu, Apr 02, 2020 at 07:04:11PM -0400, Daniel Jordan wrote:
> > > On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
> > > > Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
> > > > page arrival"), a THP does not stay in the pagevec anymore.  So the
> > > > optimization made by commit d965432234db ("thp: increase
> > > > split_huge_page() success rate"), which tries to unpin munlocked THPs
> > > > from the pagevec by draining it, no longer makes sense.
> > > > 
> > > > Draining the lru cache before isolating a THP in the mlock path is not
> > > > necessary either.
> > > Can we get some of that nice history in this part too?
> > > 
> > > Draining lru cache before isolating THP in mlock path is also unnecessary.
> > > b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock")
> > > added it and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge
> > > pages") accidentally carried it over after the above optimization went in.
> > > 
> > > > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > > Cc: Hugh Dickins <hughd@google.com>
> > > > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > > > Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> > > Since we don't mlock pte-mapped THP, it seems these huge pages wouldn't ever be
> > > in the pagevecs if I'm understanding it all.
> > Actually, pte-mapped THP doesn't matter for this; both paths always drain
> > when they're working with a pmd-mapped THP.
> 
> Actually, whether it is pte-mapped or pmd-mapped doesn't matter: as long as
> it is a compound page, the lru cache is flushed immediately upon its arrival.

Ah, that's right!  The checks in swap.c are for PageCompound.
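
For reference, the check that 8f182270dfec added to the pagevec add path in
mm/swap.c looks roughly like this (simplified from kernels of this era; the
surrounding function and variable names have shifted between versions):

/*
 * Adding a page to the per-CPU lru pagevec drains the pagevec
 * immediately whenever the page is compound, so a THP never lingers
 * there holding an extra pin.
 */
static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}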

Kirill A. Shutemov April 3, 2020, 2:35 p.m. UTC | #6
On Sat, Mar 28, 2020 at 03:29:40AM +0800, Yang Shi wrote:
> Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound
> page arrival"), a THP does not stay in the pagevec anymore.  So the
> optimization made by commit d965432234db ("thp: increase
> split_huge_page() success rate"), which tries to unpin munlocked THPs
> from the pagevec by draining it, no longer makes sense.
> 
> Draining the lru cache before isolating a THP in the mlock path is not
> necessary either.
> 
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199..1af2e7d6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1527,7 +1527,6 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 			goto skip_mlock;
 		if (!trylock_page(page))
 			goto skip_mlock;
-		lru_add_drain();
 		if (page->mapping && !PageDoubleMap(page))
 			mlock_vma_page(page);
 		unlock_page(page);
@@ -2711,7 +2710,6 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
 
@@ -2770,14 +2768,9 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);