| Message ID | 20230330134045.375163-2-zhangpeng362@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | userfaultfd: convert userfaultfd functions to use folios |
On 3/30/23 6:40 AM, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> Call vma_alloc_folio() directly instead of alloc_page_vma(). Add an
> assertion that this is a single-page folio and removes several calls to
> compound_head().
>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> ---
>  mm/userfaultfd.c | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 7f1b5f8b712c..efa9e1d681ee 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -137,15 +137,15 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  {
>  	void *page_kaddr;
>  	int ret;
> -	struct page *page;
> +	struct folio *folio;
>
>  	if (!*pagep) {
>  		ret = -ENOMEM;
> -		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> -		if (!page)
> +		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma, dst_addr, false);
> +		if (!folio)
>  			goto out;
>
> -		page_kaddr = kmap_local_page(page);
> +		page_kaddr = kmap_local_folio(folio, 0);

Should this variable name be kept as page_kaddr or should it be changed
to something like folio_kaddr? kmap_local_folio() returns page_address(),
so maybe page_kaddr is better.

>  		/*
>  		 * The read mmap_lock is held here. Despite the
>  		 * mmap_lock being read recursive a deadlock is still
> @@ -171,36 +171,37 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  		/* fallback to copy_from_user outside mmap_lock */
>  		if (unlikely(ret)) {
>  			ret = -ENOENT;
> -			*pagep = page;
> +			*pagep = &folio->page;
>  			/* don't free the page */
>  			goto out;
>  		}
>
> -		flush_dcache_page(page);
> +		flush_dcache_folio(folio);
>  	} else {
> -		page = *pagep;
> +		folio = page_folio(*pagep);
> +		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>  		*pagep = NULL;
>  	}
>
>  	/*
> -	 * The memory barrier inside __SetPageUptodate makes sure that
> +	 * The memory barrier inside __folio_mark_uptodate makes sure that
>  	 * preceding stores to the page contents become visible before
>  	 * the set_pte_at() write.
>  	 */
> -	__SetPageUptodate(page);
> +	__folio_mark_uptodate(folio);
>
>  	ret = -ENOMEM;
> -	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
> +	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
>  		goto out_release;
>
>  	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
> -				       page, true, flags);
> +				       &folio->page, true, flags);
>  	if (ret)
>  		goto out_release;
> out:
>  	return ret;
> out_release:
> -	put_page(page);
> +	folio_put(folio);
>  	goto out;
>  }

Thanks,
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
On Thu, Mar 30, 2023 at 10:02:52AM -0700, Sidhartha Kumar wrote:
> On 3/30/23 6:40 AM, Peng Zhang wrote:
> > From: ZhangPeng <zhangpeng362@huawei.com>
> >
> > Call vma_alloc_folio() directly instead of alloc_page_vma(). Add an
> > assertion that this is a single-page folio and removes several calls to
> > compound_head().

There's no added assertion in this patch any more, so I'd drop that
part of the description.

> >  	if (!*pagep) {
> >  		ret = -ENOMEM;
> > -		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> > -		if (!page)
> > +		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma, dst_addr, false);
> > +		if (!folio)
> >  			goto out;
> >
> > -		page_kaddr = kmap_local_page(page);
> > +		page_kaddr = kmap_local_folio(folio, 0);
>
> Should this variable name be kept as page_kaddr or should it be changed to
> something like folio_kaddr? kmap_local_folio() returns page_address(), so
> maybe page_kaddr is better.

I'd just call it 'kaddr'. Or 'addr'.
On 2023/3/31 2:02, Matthew Wilcox wrote:
> On Thu, Mar 30, 2023 at 10:02:52AM -0700, Sidhartha Kumar wrote:
>> On 3/30/23 6:40 AM, Peng Zhang wrote:
>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>
>>> Call vma_alloc_folio() directly instead of alloc_page_vma(). Add an
>>> assertion that this is a single-page folio and removes several calls to
>>> compound_head().
> There's no added assertion in this patch any more, so I'd drop that
> part of the description.

Thanks. I'll update this description.

>>>  	if (!*pagep) {
>>>  		ret = -ENOMEM;
>>> -		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
>>> -		if (!page)
>>> +		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma, dst_addr, false);
>>> +		if (!folio)
>>>  			goto out;
>>> -		page_kaddr = kmap_local_page(page);
>>> +		page_kaddr = kmap_local_folio(folio, 0);
>> Should this variable name be kept as page_kaddr or should it be changed to
>> something like folio_kaddr? kmap_local_folio() returns page_address(), so
>> maybe page_kaddr is better.
> I'd just call it 'kaddr'. Or 'addr'.

Agreed. I'll change page_kaddr to kaddr. Thanks for your review.

Best Regards,
Peng
On 03/30/23 19:02, Matthew Wilcox wrote:
> On Thu, Mar 30, 2023 at 10:02:52AM -0700, Sidhartha Kumar wrote:
> > On 3/30/23 6:40 AM, Peng Zhang wrote:
> > > From: ZhangPeng <zhangpeng362@huawei.com>
> > >
> > > Call vma_alloc_folio() directly instead of alloc_page_vma(). Add an
> > > assertion that this is a single-page folio and removes several calls to
> > > compound_head().
>
> There's no added assertion in this patch any more, so I'd drop that
> part of the description.

I thought it was this:

> > -		flush_dcache_page(page);
> > +		flush_dcache_folio(folio);
> >  	} else {
> > -		page = *pagep;
> > +		folio = page_folio(*pagep);
> > +		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >  		*pagep = NULL;
> >  	}

I was wondering why that VM_BUG_ON_FOLIO was added.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7f1b5f8b712c..efa9e1d681ee 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -137,15 +137,15 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 {
 	void *page_kaddr;
 	int ret;
-	struct page *page;
+	struct folio *folio;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
-		if (!page)
+		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma, dst_addr, false);
+		if (!folio)
 			goto out;
 
-		page_kaddr = kmap_local_page(page);
+		page_kaddr = kmap_local_folio(folio, 0);
 		/*
 		 * The read mmap_lock is held here. Despite the
 		 * mmap_lock being read recursive a deadlock is still
@@ -171,36 +171,37 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
 			ret = -ENOENT;
-			*pagep = page;
+			*pagep = &folio->page;
 			/* don't free the page */
 			goto out;
 		}
 
-		flush_dcache_page(page);
+		flush_dcache_folio(folio);
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 		*pagep = NULL;
 	}
 
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
 	 * the set_pte_at() write.
 	 */
-	__SetPageUptodate(page);
+	__folio_mark_uptodate(folio);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, true, flags);
+				       &folio->page, true, flags);
 	if (ret)
 		goto out_release;
 out:
 	return ret;
 out_release:
-	put_page(page);
+	folio_put(folio);
 	goto out;
 }