Message ID | 20231211162214.2146080-9-willy@infradead.org (mailing list archive)
---|---
State | New
Series | Finish two folio conversions
On 11.12.23 17:22, Matthew Wilcox (Oracle) wrote:
> Replace three calls to compound_head() with one.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/khugepaged.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index e722f754797f..f7ec73976c38 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1087,6 +1087,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	pmd_t *pmd, _pmd;
>  	pte_t *pte;
>  	pgtable_t pgtable;
> +	struct folio *folio;
>  	struct page *hpage;
>  	spinlock_t *pmd_ptl, *pte_ptl;
>  	int result = SCAN_FAIL;
> @@ -1209,13 +1210,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	if (unlikely(result != SCAN_SUCCEED))
>  		goto out_up_write;
>
> +	folio = page_folio(hpage);
>  	/*
> -	 * spin_lock() below is not the equivalent of smp_wmb(), but
> -	 * the smp_wmb() inside __SetPageUptodate() can be reused to
> -	 * avoid the copy_huge_page writes to become visible after
> -	 * the set_pmd_at() write.
> +	 * The smp_wmb() inside __folio_mark_uptodate() ensures the
> +	 * copy_huge_page writes become visible before the set_pmd_at()
> +	 * write.
>  	 */
> -	__SetPageUptodate(hpage);
> +	__folio_mark_uptodate(folio);
>  	pgtable = pmd_pgtable(_pmd);
>
>  	_pmd = mk_huge_pmd(hpage, vma->vm_page_prot);
> @@ -1223,8 +1224,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>
>  	spin_lock(pmd_ptl);
>  	BUG_ON(!pmd_none(*pmd));
> -	page_add_new_anon_rmap(hpage, vma, address);
> -	lru_cache_add_inactive_or_unevictable(hpage, vma);
> +	folio_add_new_anon_rmap(folio, vma, address);
> +	folio_add_lru_vma(folio, vma);
>  	pgtable_trans_huge_deposit(mm, pmd, pgtable);
>  	set_pmd_at(mm, address, pmd, _pmd);
>  	update_mmu_cache_pmd(vma, address, pmd);

Reviewed-by: David Hildenbrand <david@redhat.com>
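The rewritten comment describes the usual publish ordering: the copied data must be visible to other CPUs before the page-table entry that points at it is installed. Below is a minimal userspace C11 sketch of that pattern, assuming stand-in names (payload for the copied huge page, published for the PMD entry); it is not kernel code, and the reader-side acquire fence is a simplification of what the real page-table walk relies on.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;                          /* stands in for the copied huge page */
static _Atomic(int *) published;             /* stands in for the PMD entry */

static void *writer(void *arg)
{
        (void)arg;
        payload = 42;                                   /* the copy_huge_page() writes */
        atomic_thread_fence(memory_order_release);      /* the smp_wmb() */
        atomic_store_explicit(&published, &payload,
                              memory_order_relaxed);    /* the set_pmd_at() write */
        return NULL;
}

static void *reader(void *arg)
{
        (void)arg;
        int *p;
        while (!(p = atomic_load_explicit(&published, memory_order_relaxed)))
                ;                                       /* wait until the "PMD" is set */
        atomic_thread_fence(memory_order_acquire);      /* pairs with the release fence */
        printf("reader sees %d\n", *p);                 /* guaranteed to be 42 */
        return NULL;
}

int main(void)
{
        pthread_t w, r;

        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;                                       /* build with: cc -pthread */
}

Dropping the release fence (the smp_wmb() stand-in) would allow the reader to observe the pointer before the data it guards, which is exactly the hazard the comment is about.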
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e722f754797f..f7ec73976c38 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1087,6 +1087,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pmd_t *pmd, _pmd;
 	pte_t *pte;
 	pgtable_t pgtable;
+	struct folio *folio;
 	struct page *hpage;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int result = SCAN_FAIL;
@@ -1209,13 +1210,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	if (unlikely(result != SCAN_SUCCEED))
 		goto out_up_write;

+	folio = page_folio(hpage);
 	/*
-	 * spin_lock() below is not the equivalent of smp_wmb(), but
-	 * the smp_wmb() inside __SetPageUptodate() can be reused to
-	 * avoid the copy_huge_page writes to become visible after
-	 * the set_pmd_at() write.
+	 * The smp_wmb() inside __folio_mark_uptodate() ensures the
+	 * copy_huge_page writes become visible before the set_pmd_at()
+	 * write.
 	 */
-	__SetPageUptodate(hpage);
+	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);

 	_pmd = mk_huge_pmd(hpage, vma->vm_page_prot);
@@ -1223,8 +1224,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,

 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
-	page_add_new_anon_rmap(hpage, vma, address);
-	lru_cache_add_inactive_or_unevictable(hpage, vma);
+	folio_add_new_anon_rmap(folio, vma, address);
+	folio_add_lru_vma(folio, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
Replace three calls to compound_head() with one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/khugepaged.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)
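As a rough illustration of what the single page_folio() call buys, here is a self-contained userspace C sketch; struct page, struct folio, compound_head() and the helper names below are simplified stand-ins for illustration, not the real kernel definitions. Each page-based helper has to resolve the head page on its own, while the folio-based helpers take the already-resolved head.

#include <stdio.h>

struct page { struct page *head; int uptodate; };   /* tail pages point at their head */
struct folio { struct page page; };                  /* a folio wraps the head page */

static int head_lookups;                             /* how often the head was resolved */

static struct page *compound_head(struct page *p)
{
        head_lookups++;
        return p->head ? p->head : p;
}

static struct folio *page_folio(struct page *p)
{
        return (struct folio *)compound_head(p);     /* resolve once, keep the result */
}

/* page-based helpers each pay for their own compound_head() */
static void set_page_uptodate(struct page *p) { compound_head(p)->uptodate = 1; }
static void page_add_rmap(struct page *p)     { (void)compound_head(p); }
static void page_add_lru(struct page *p)      { (void)compound_head(p); }

/* folio-based helpers take the already-resolved head */
static void folio_mark_uptodate(struct folio *f) { f->page.uptodate = 1; }
static void folio_add_rmap(struct folio *f)      { (void)f; }
static void folio_add_lru(struct folio *f)       { (void)f; }

int main(void)
{
        struct page head = { 0 };
        struct page tail = { .head = &head };

        /* old pattern: three helpers, three head lookups */
        set_page_uptodate(&tail);
        page_add_rmap(&tail);
        page_add_lru(&tail);
        printf("page-based helpers:  %d head lookups\n", head_lookups);

        /* new pattern: resolve the folio once, pass it around */
        head_lookups = 0;
        struct folio *folio = page_folio(&tail);
        folio_mark_uptodate(folio);
        folio_add_rmap(folio);
        folio_add_lru(folio);
        printf("folio-based helpers: %d head lookup\n", head_lookups);
        return 0;
}

The kernel's conversion follows the same idea: resolve compound_head() once up front and carry the result in the folio type, so the later calls cannot repeat the lookup.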