Message ID | 20230920035336.854212-1-willy@infradead.org (mailing list archive) |
---|---|
State | New |
Series | [1/2] mm: Report success more often from filemap_map_folio_range() |
On 9/20/23 11:53, Matthew Wilcox (Oracle) wrote:
> Even though we had successfully mapped the relevant page, we would
> rarely return success from filemap_map_folio_range().  That leads to
> falling back from the VMA lock path to the mmap_lock path, which is a
> speed & scalability issue.  Found by inspection.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: 617c28ecab22 ("filemap: batch PTE mappings")
> ---
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>

Thanks a lot for taking care of this.


Regards
Yin, Fengwei

> mm/filemap.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 582f5317ff71..580d0b2b1a7c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3506,7 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
> 	if (count) {
> 		set_pte_range(vmf, folio, page, count, addr);
> 		folio_ref_add(folio, count);
> -		if (in_range(vmf->address, addr, count))
> +		if (in_range(vmf->address, addr, count * PAGE_SIZE))
> 			ret = VM_FAULT_NOPAGE;
> 	}
>
> @@ -3520,7 +3520,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
> 	if (count) {
> 		set_pte_range(vmf, folio, page, count, addr);
> 		folio_ref_add(folio, count);
> -		if (in_range(vmf->address, addr, count))
> +		if (in_range(vmf->address, addr, count * PAGE_SIZE))
> 			ret = VM_FAULT_NOPAGE;
> 	}
>
diff --git a/mm/filemap.c b/mm/filemap.c
index 582f5317ff71..580d0b2b1a7c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3506,7 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}

@@ -3520,7 +3520,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
Even though we had successfully mapped the relevant page, we would
rarely return success from filemap_map_folio_range().  That leads to
falling back from the VMA lock path to the mmap_lock path, which is a
speed & scalability issue.  Found by inspection.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 617c28ecab22 ("filemap: batch PTE mappings")
---
 mm/filemap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
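
For context on why the unscaled check almost never reported success: a minimal userspace sketch (not kernel code; the addresses, the local in_range() helper, and PAGE_SIZE value are illustrative assumptions) of the range test the patch fixes, assuming in_range(val, start, len) means start <= val < start + len. With `count` passed as a page count, the range is only a handful of bytes wide, so it can only contain the page-aligned faulting address when that address equals `addr` exactly; scaling by PAGE_SIZE makes the range cover the whole mapped batch.

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* illustrative; real value is per-arch */

/* Simplified stand-in for the kernel's in_range(val, start, len). */
static bool in_range(unsigned long val, unsigned long start, unsigned long len)
{
	return val - start < len;
}

int main(void)
{
	unsigned long addr = 0x7f0000000000UL;	/* start of the mapped batch */
	unsigned long count = 4;		/* pages mapped in this batch */
	unsigned long fault = addr + 2 * PAGE_SIZE; /* fault hit the third page */

	/* Old check: range is only `count` bytes wide, so it misses. */
	printf("unscaled: %d\n", in_range(fault, addr, count));
	/* Fixed check: range spans count * PAGE_SIZE bytes and hits. */
	printf("scaled:   %d\n", in_range(fault, addr, count * PAGE_SIZE));
	return 0;
}
```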