Message ID | 20230812062612.3184990-1-zhangpeng362@huawei.com (mailing list archive) |
---|---|
State | New |
Series | [v2] mm/secretmem: use a folio in secretmem_fault() |
On 12.08.23 08:26, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> Saves four implicit calls to compound_head().
>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> v2: update commit message per Matthew Wilcox
> ---
>  mm/secretmem.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 86442a15d12f..3afb5ad701e1 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -55,6 +55,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  	gfp_t gfp = vmf->gfp_mask;
>  	unsigned long addr;
>  	struct page *page;
> +	struct folio *folio;
>  	vm_fault_t ret;
>  	int err;
>
> @@ -66,23 +67,24 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  retry:
>  	page = find_lock_page(mapping, offset);
>  	if (!page) {
> -		page = alloc_page(gfp | __GFP_ZERO);
> -		if (!page) {
> +		folio = folio_alloc(gfp | __GFP_ZERO, 0);
> +		if (!folio) {
>  			ret = VM_FAULT_OOM;
>  			goto out;
>  		}
>
> +		page = &folio->page;
>  		err = set_direct_map_invalid_noflush(page);
>  		if (err) {
> -			put_page(page);
> +			folio_put(folio);
>  			ret = vmf_error(err);
>  			goto out;
>  		}
>
> -		__SetPageUptodate(page);
> -		err = add_to_page_cache_lru(page, mapping, offset, gfp);
> +		__folio_mark_uptodate(folio);
> +		err = filemap_add_folio(mapping, folio, offset, gfp);
>  		if (unlikely(err)) {
> -			put_page(page);
> +			folio_put(folio);
>  			/*
>  			 * If a split of large page was required, it
>  			 * already happened when we marked the page invalid

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 86442a15d12f..3afb5ad701e1 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -55,6 +55,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	gfp_t gfp = vmf->gfp_mask;
 	unsigned long addr;
 	struct page *page;
+	struct folio *folio;
 	vm_fault_t ret;
 	int err;

@@ -66,23 +67,24 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 retry:
 	page = find_lock_page(mapping, offset);
 	if (!page) {
-		page = alloc_page(gfp | __GFP_ZERO);
-		if (!page) {
+		folio = folio_alloc(gfp | __GFP_ZERO, 0);
+		if (!folio) {
 			ret = VM_FAULT_OOM;
 			goto out;
 		}

+		page = &folio->page;
 		err = set_direct_map_invalid_noflush(page);
 		if (err) {
-			put_page(page);
+			folio_put(folio);
 			ret = vmf_error(err);
 			goto out;
 		}

-		__SetPageUptodate(page);
-		err = add_to_page_cache_lru(page, mapping, offset, gfp);
+		__folio_mark_uptodate(folio);
+		err = filemap_add_folio(mapping, folio, offset, gfp);
 		if (unlikely(err)) {
-			put_page(page);
+			folio_put(folio);
 			/*
 			 * If a split of large page was required, it
 			 * already happened when we marked the page invalid