Message ID: 20230331093937.945725-2-zhangpeng362@huawei.com
State:      New
Series:     userfaultfd: convert userfaultfd functions to use folios
On 03/31/23 17:39, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> Call vma_alloc_folio() directly instead of alloc_page_vma() and convert
> page_kaddr to kaddr in mfill_atomic_pte_copy(). Removes several calls to
> compound_head().
>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/userfaultfd.c | 33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)

Looks good,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

> 	} else {
> -		page = *pagep;
> +		folio = page_folio(*pagep);
> +		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> 		*pagep = NULL;
> 	}

However, I am still unsure about the reason for adding the VM_BUG_ON_FOLIO
here.
On 2023/4/7 5:31, Mike Kravetz wrote:
> On 03/31/23 17:39, Peng Zhang wrote:
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> Call vma_alloc_folio() directly instead of alloc_page_vma() and convert
>> page_kaddr to kaddr in mfill_atomic_pte_copy(). Removes several calls to
>> compound_head().
>>
>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> ---
>>  mm/userfaultfd.c | 33 +++++++++++++++++----------------
>>  1 file changed, 17 insertions(+), 16 deletions(-)
> Looks good,
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>
>> 	} else {
>> -		page = *pagep;
>> +		folio = page_folio(*pagep);
>> +		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>> 		*pagep = NULL;
>> 	}
> However, I am still unsure about the reason for adding the VM_BUG_ON_FOLIO
> here.

The VM_BUG_ON_FOLIO was added to ensure that the folio is a single-page
folio. However, the folio corresponding to *pagep is always a single-page
folio, so the check is not needed. I'll drop the VM_BUG_ON_FOLIO.

Thanks for your review.

Best Regards,
Peng
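To make the point concrete, here is a minimal sketch (not part of the patch)
of the else branch being discussed, annotated with why the assertion is
redundant. It assumes the standard semantics of folio_test_large() (true only
for folios of order > 0) and of VM_BUG_ON_FOLIO() under CONFIG_DEBUG_VM, and
that the caller hands the same page back on retry, as the -ENOENT fallback
path in the diff below suggests:

	} else {
		/*
		 * *pagep was stored by the -ENOENT fallback and is the
		 * order-0 page this function itself allocated, so the
		 * folio is always a single page ...
		 */
		folio = page_folio(*pagep);
		/*
		 * ... which is why this assertion can never fire and
		 * will be dropped in the next version of the patch.
		 */
		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
		*pagep = NULL;
	}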
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7f1b5f8b712c..24d6ed7ff302 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -135,17 +135,18 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 uffd_flags_t flags,
 				 struct page **pagep)
 {
-	void *page_kaddr;
+	void *kaddr;
 	int ret;
-	struct page *page;
+	struct folio *folio;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
-		if (!page)
+		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
+					dst_addr, false);
+		if (!folio)
 			goto out;
 
-		page_kaddr = kmap_local_page(page);
+		kaddr = kmap_local_folio(folio, 0);
 		/*
 		 * The read mmap_lock is held here. Despite the
 		 * mmap_lock being read recursive a deadlock is still
@@ -162,45 +163,45 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		 * and retry the copy outside the mmap_lock.
 		 */
 		pagefault_disable();
-		ret = copy_from_user(page_kaddr,
-				     (const void __user *) src_addr,
+		ret = copy_from_user(kaddr, (const void __user *) src_addr,
 				     PAGE_SIZE);
 		pagefault_enable();
-		kunmap_local(page_kaddr);
+		kunmap_local(kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
 			ret = -ENOENT;
-			*pagep = page;
+			*pagep = &folio->page;
 			/* don't free the page */
 			goto out;
 		}
 
-		flush_dcache_page(page);
+		flush_dcache_folio(folio);
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 		*pagep = NULL;
 	}
 
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
 	 * the set_pte_at() write.
 	 */
-	__SetPageUptodate(page);
+	__folio_mark_uptodate(folio);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, true, flags);
+				       &folio->page, true, flags);
 	if (ret)
 		goto out_release;
 out:
 	return ret;
 out_release:
-	put_page(page);
+	folio_put(folio);
 	goto out;
 }
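For readers following the folio conversion, the hunks above reduce to the
following call-for-call substitutions (a summary derived from the diff, not
extra code in the patch):

	alloc_page_vma(gfp, vma, addr)            ->  vma_alloc_folio(gfp, 0, vma, addr, false)
	kmap_local_page(page)                     ->  kmap_local_folio(folio, 0)
	flush_dcache_page(page)                   ->  flush_dcache_folio(folio)
	__SetPageUptodate(page)                   ->  __folio_mark_uptodate(folio)
	mem_cgroup_charge(page_folio(page), ...)  ->  mem_cgroup_charge(folio, ...)
	put_page(page)                            ->  folio_put(folio)

Each right-hand call operates on the folio directly, which is what removes
the implicit compound_head() lookups mentioned in the commit message.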