| Message ID | 20230630121310.165700-3-zhangpeng362@huawei.com |
|---|---|
| State | New |
| Series | mm: remove page_rmapping() |
On 6/30/23 5:13 AM, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace four implicit calls to compound_head() with one by using
> folio.
>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> ---
>  mm/memory.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6921df44a99f..73b03706451c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2967,20 +2967,20 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct address_space *mapping;
> -	struct page *page = vmf->page;
> +	struct folio *folio = page_folio(vmf->page);
>  	bool dirtied;
>  	bool page_mkwrite = vma->vm_ops && vma->vm_ops->page_mkwrite;
>  
> -	dirtied = set_page_dirty(page);
> -	VM_BUG_ON_PAGE(PageAnon(page), page);
> +	dirtied = folio_mark_dirty(folio);
> +	VM_BUG_ON_FOLIO(folio_test_anon(folio), folio);
>  	/*
>  	 * Take a local copy of the address_space - page.mapping may be zeroed
>  	 * by truncate after unlock_page(). The address_space itself remains
>  	 * pinned by vma->vm_file's reference. We rely on unlock_page()'s
>  	 * release semantics to prevent the compiler from undoing this copying.
>  	 */
> -	mapping = folio_raw_mapping(page_folio(page));
> -	unlock_page(page);
> +	mapping = folio_raw_mapping(folio);
> +	folio_unlock(folio);
>  
>  	if (!page_mkwrite)
>  		file_update_time(vma->vm_file);

Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
diff --git a/mm/memory.c b/mm/memory.c
index 6921df44a99f..73b03706451c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2967,20 +2967,20 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping;
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	bool dirtied;
 	bool page_mkwrite = vma->vm_ops && vma->vm_ops->page_mkwrite;
 
-	dirtied = set_page_dirty(page);
-	VM_BUG_ON_PAGE(PageAnon(page), page);
+	dirtied = folio_mark_dirty(folio);
+	VM_BUG_ON_FOLIO(folio_test_anon(folio), folio);
 	/*
 	 * Take a local copy of the address_space - page.mapping may be zeroed
 	 * by truncate after unlock_page(). The address_space itself remains
 	 * pinned by vma->vm_file's reference. We rely on unlock_page()'s
 	 * release semantics to prevent the compiler from undoing this copying.
 	 */
-	mapping = folio_raw_mapping(page_folio(page));
-	unlock_page(page);
+	mapping = folio_raw_mapping(folio);
+	folio_unlock(folio);
 
 	if (!page_mkwrite)
 		file_update_time(vma->vm_file);
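For reference, the four compound_head() calls the changelog counts presumably come from set_page_dirty(), PageAnon(), the explicit page_folio() in the folio_raw_mapping() argument, and unlock_page(): each legacy page-based helper re-derives the head page internally. Below is a toy userspace model of that pattern — the names mirror the kernel helpers, but the struct layouts and helper bodies are stand-ins invented for illustration, not the kernel's real definitions:

```c
/*
 * Toy model: each legacy page-based helper re-derives the folio from
 * the page (the compound_head() lookup), while the folio-based helpers
 * take the folio the caller already resolved once.
 */
#include <stdbool.h>
#include <stdio.h>

struct folio { bool dirty, anon, locked; };
struct page  { struct folio *head; };

static int head_lookups;

/* Stand-in for page_folio(), which boils down to compound_head(). */
static struct folio *page_folio(struct page *page)
{
	head_lookups++;
	return page->head;
}

/* Legacy helpers: each pays for its own head lookup. */
static bool set_page_dirty(struct page *page)
{
	struct folio *folio = page_folio(page);
	bool was_dirty = folio->dirty;

	folio->dirty = true;
	return !was_dirty;
}

static bool PageAnon(struct page *page)    { return page_folio(page)->anon; }
static void unlock_page(struct page *page) { page_folio(page)->locked = false; }

/* Folio helpers: no lookup, the caller already holds the folio. */
static bool folio_mark_dirty(struct folio *folio)
{
	bool was_dirty = folio->dirty;

	folio->dirty = true;
	return !was_dirty;
}

static bool folio_test_anon(struct folio *folio) { return folio->anon; }
static void folio_unlock(struct folio *folio)    { folio->locked = false; }

int main(void)
{
	struct folio folio = { .locked = true };
	struct page page = { .head = &folio };
	struct folio *f;

	/* Old shape: set_page_dirty(), PageAnon(), the explicit
	 * page_folio() passed to folio_raw_mapping(), unlock_page(). */
	set_page_dirty(&page);
	(void)PageAnon(&page);
	(void)page_folio(&page);
	unlock_page(&page);
	printf("old: %d head lookups\n", head_lookups);	/* 4 */

	head_lookups = 0;
	folio.dirty = false;
	folio.locked = true;

	/* New shape: one page_folio() up front, folio calls after. */
	f = page_folio(&page);
	folio_mark_dirty(f);
	(void)folio_test_anon(f);
	folio_unlock(f);
	printf("new: %d head lookup\n", head_lookups);	/* 1 */

	return 0;
}
```

Run as-is, this prints 4 lookups for the old shape and 1 for the new one. In the kernel the saved work per call is the compound_head() pointer chase — cheap individually, but fault_dirty_shared_page() sits on the write-fault path, and these folio conversions add up across such hot paths.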