Message ID | 20220102215729.2943705-10-willy@infradead.org (mailing list archive) |
---|---|
State | New |
Series | Convert GUP to folios |
On Sun, Jan 02, 2022 at 09:57:21PM +0000, Matthew Wilcox (Oracle) wrote:
> We still call try_grab_folio() once per PTE; a future patch could
> optimise to just adjust the reference count for each page within
> the folio.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On 1/2/22 13:57, Matthew Wilcox (Oracle) wrote:
> We still call try_grab_folio() once per PTE; a future patch could
> optimise to just adjust the reference count for each page within
> the folio.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/gup.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 2307b2917055..d8535f9d5622 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2260,7 +2260,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  	ptem = ptep = pte_offset_map(&pmd, addr);
>  	do {
>  		pte_t pte = ptep_get_lockless(ptep);
> -		struct page *head, *page;
> +		struct page *page;
> +		struct folio *folio;
>
>  		/*
>  		 * Similar to the PMD case below, NUMA hinting must take slow
> @@ -2287,22 +2288,20 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
>
> -		head = try_grab_compound_head(page, 1, flags);
> -		if (!head)
> +		folio = try_grab_folio(page, 1, flags);
> +		if (!folio)
>  			goto pte_unmap;
>
>  		if (unlikely(page_is_secretmem(page))) {
> -			put_compound_head(head, 1, flags);
> +			gup_put_folio(folio, 1, flags);
>  			goto pte_unmap;
>  		}
>
>  		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> -			put_compound_head(head, 1, flags);
> +			gup_put_folio(folio, 1, flags);
>  			goto pte_unmap;
>  		}
>
> -		VM_BUG_ON_PAGE(compound_head(page) != head, page);
> -
>  		/*
>  		 * We need to make the page accessible if and only if we are
>  		 * going to access its content (the FOLL_PIN case). Please
> @@ -2316,10 +2315,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  				goto pte_unmap;
>  			}
>  		}
> -		SetPageReferenced(page);
> +		folio_set_referenced(folio);

For the case of a tail page, I *think* the above hunk changes the
behavior. Previously, the tail page's flags were affected, but now, the
head page's (folio's) page flags are being changed...right?

thanks,
On Tue, Jan 04, 2022 at 11:36:29PM -0800, John Hubbard wrote:
> > -		SetPageReferenced(page);
> > +		folio_set_referenced(folio);
>
> For the case of a tail page, I *think* the above hunk changes the
> behavior. Previously, the tail page's flags were affected, but now, the
> head page's (folio's) page flags are being changed...right?

So ... most flags don't really "exist" for tail pages.

PAGEFLAG(Referenced, referenced, PF_HEAD)

#define PF_HEAD(page, enforce)	PF_POISONED_CHECK(compound_head(page))

so any time you call SetPageReferenced(page), we have a hidden call to
compound_head() in order to set PG_referenced on the head page.

It's only the PF_ANY flags which are (theoretically) settable on tail
pages:

PAGEFLAG(Private, private, PF_ANY)
PAGEFLAG(Private2, private_2, PF_ANY)
PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
PAGEFLAG(HWPoison, hwpoison, PF_ANY)
TESTPAGEFLAG(Young, young, PF_ANY)
PAGEFLAG(Idle, idle, PF_ANY)
__PAGEFLAG(Head, head, PF_ANY)
__PAGEFLAG(Isolated, isolated, PF_ANY)

I honestly doubt many of these are actually settable on tail pages;
it's simply that nobody's really thought about them for compound pages.
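Concretely, the PF_HEAD policy quoted above means the pre-folio code already redirected to the head page. The following is a simplified sketch of what the macro machinery generates, not the verbatim include/linux/page-flags.h source, and the *_sketch names are illustrative only:

```c
/*
 * Simplified sketch of what PAGEFLAG(Referenced, referenced, PF_HEAD)
 * expands to; the real macro in include/linux/page-flags.h also does
 * poison checking.  Note the hidden compound_head() call: even before
 * the folio conversion, SetPageReferenced() on a tail page set the
 * bit on the head page, never on the tail page itself.
 */
static inline void SetPageReferenced_sketch(struct page *page)
{
	set_bit(PG_referenced, &compound_head(page)->flags);
}

/* The folio API makes the same redirection explicit at the call site. */
static inline void folio_set_referenced_sketch(struct folio *folio)
{
	set_bit(PG_referenced, &folio->page.flags);	/* folio->page is the head page */
}
```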
On 1/4/22 23:52, Matthew Wilcox wrote:
> On Tue, Jan 04, 2022 at 11:36:29PM -0800, John Hubbard wrote:
>>> -		SetPageReferenced(page);
>>> +		folio_set_referenced(folio);
>>
>> For the case of a tail page, I *think* the above hunk changes the
>> behavior. Previously, the tail page's flags were affected, but now, the
>> head page's (folio's) page flags are being changed...right?
>
> So ... most flags don't really "exist" for tail pages.
>
> PAGEFLAG(Referenced, referenced, PF_HEAD)
>
> #define PF_HEAD(page, enforce)	PF_POISONED_CHECK(compound_head(page))
>
> so any time you call SetPageReferenced(page), we have a hidden call to
> compound_head() in order to set PG_referenced on the head page.
>
> It's only the PF_ANY flags which are (theoretically) settable on tail
> pages:
>
> PAGEFLAG(Private, private, PF_ANY)
> PAGEFLAG(Private2, private_2, PF_ANY)
> PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
> PAGEFLAG(HWPoison, hwpoison, PF_ANY)
> TESTPAGEFLAG(Young, young, PF_ANY)
> PAGEFLAG(Idle, idle, PF_ANY)
> __PAGEFLAG(Head, head, PF_ANY)
> __PAGEFLAG(Isolated, isolated, PF_ANY)
>
> I honestly doubt many of these are actually settable on tail pages;
> it's simply that nobody's really thought about them for compound pages.

OK, I see. Thanks for spelling that out for me. :)

Please feel free to add:

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
diff --git a/mm/gup.c b/mm/gup.c
index 2307b2917055..d8535f9d5622 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2260,7 +2260,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = ptep_get_lockless(ptep);
-		struct page *head, *page;
+		struct page *page;
+		struct folio *folio;
 
 		/*
 		 * Similar to the PMD case below, NUMA hinting must take slow
@@ -2287,22 +2288,20 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
-		head = try_grab_compound_head(page, 1, flags);
-		if (!head)
+		folio = try_grab_folio(page, 1, flags);
+		if (!folio)
 			goto pte_unmap;
 
 		if (unlikely(page_is_secretmem(page))) {
-			put_compound_head(head, 1, flags);
+			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
 
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-			put_compound_head(head, 1, flags);
+			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
 
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
-
 		/*
 		 * We need to make the page accessible if and only if we are
 		 * going to access its content (the FOLL_PIN case). Please
@@ -2316,10 +2315,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 				goto pte_unmap;
 			}
 		}
-		SetPageReferenced(page);
+		folio_set_referenced(folio);
 		pages[*nr] = page;
 		(*nr)++;
-
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
 
 	ret = 1;
We still call try_grab_folio() once per PTE; a future patch could
optimise to just adjust the reference count for each page within
the folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/gup.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)
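The "future patch" mentioned in the commit message is only hinted at; a hypothetical sketch of that optimisation (illustrative only, under the assumption of batching the reference count per folio rather than per PTE, and not the actual follow-up patch) might look like this inside the gup_pte_range() loop:

```c
/*
 * Hypothetical sketch of the optimisation the commit message alludes
 * to (not the actual follow-up patch): when consecutive PTEs map
 * consecutive pages of the same folio, take all of the references
 * with a single try_grab_folio() call instead of one call per PTE.
 * A real version would also cap 'refs' at the end of the folio and
 * recheck the PTEs after grabbing the references.
 */
page = pte_page(pte);
refs = 1;
while (addr + refs * PAGE_SIZE != end) {
	pte_t next = ptep_get_lockless(ptep + refs);

	if (!pte_present(next) || pte_page(next) != page + refs)
		break;
	refs++;
}

folio = try_grab_folio(page, refs, flags);	/* one refcount adjustment of 'refs' */
if (!folio)
	goto pte_unmap;
```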